3958005
https://en.wikipedia.org/wiki/Availability%20factor
Availability factor
The availability factor of a power plant is the duration it achieves production of electricity divided by the duration it was planned to produce electricity. In the field of reliability engineering, the availability factor is known as operational availability. The capacity factor of a plant additionally reflects the numerous factors which determine the durations the plant is planned to produce electricity. A solar photovoltaic plant is not planned to operate in the dark of night, hence unplanned maintenance occurring while the sun is down does not impact the availability factor. Periods of generation where only partial generation of planned capacity occurs may or may not be deducted from the availability factor. An example of partial generation is a power plant with four installed turbines planned to be concurrently operational, one of which subsequently requires unplanned maintenance. Where such deductions are made, the metric is titled the equivalent availability factor (EAF). The availability of a power plant varies greatly depending on the type of fuel, the design of the plant, and how the plant is operated. Everything else being equal, plants that are run less frequently have higher availability factors, because they require less maintenance and because more inspections and maintenance can be scheduled during idle time. Most thermal power stations, such as coal, geothermal and nuclear power plants, have availability factors between 70% and 90%. Newer plants tend to have significantly higher availability factors, but preventive maintenance is as important as improvements in design and technology. Gas turbines have relatively high availability factors, ranging from 80% to 99%. Gas turbines are commonly used for peaking power plants, co-generation plants and the first stage of combined cycle plants. Originally the term availability factor was used only for power plants that depended on an active, controlled supply of fuel, typically fossil or later also nuclear. The emergence of renewable energy such as hydro, wind and solar power, which operate without an active, controlled supply of fuel and which come to a standstill when their natural supply of energy ceases, requires a more careful distinction between the availability factor and the capacity factor. By convention, such zero-production periods are counted against the capacity factor but not against the availability factor, which thus remains defined as depending on an active, controlled supply of fuel, along with factors concerning reliability and maintenance. A wind turbine cannot operate in wind speeds above a certain limit, which counts against its availability factor. With this definition, modern wind turbines, which require very little maintenance, have very high availability factors, up to about 98%. Photovoltaic power stations, which have few or no moving parts and which can undergo planned inspections and maintenance during the night, have an availability factor approaching or equal to 100% while the sun is shining.
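In other words, AF = hours available for generation divided by hours generation was planned, and the EAF additionally derates hours of partial output. A minimal sketch of both calculations; the plant figures below are hypothetical, purely for illustration:

```python
# Hypothetical illustration of the availability factor (AF) and the
# equivalent availability factor (EAF) for a four-turbine plant.

planned_hours = 6000.0     # hours the plant was planned to produce
unavailable_hours = 450.0  # full-plant outage hours during planned operation

# Availability factor: time actually available divided by time planned.
af = (planned_hours - unavailable_hours) / planned_hours

# EAF also derates partial outages: one of four turbines down for 800 h
# counts as 800 * (1/4) = 200 equivalent full-plant outage hours.
derated_hours = 800.0 * (1 / 4)
eaf = (planned_hours - unavailable_hours - derated_hours) / planned_hours

print(f"AF  = {af:.1%}")   # 92.5%
print(f"EAF = {eaf:.1%}")  # 89.2%
```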
Technology
Concepts
null
40083369
https://en.wikipedia.org/wiki/Land
Land
Land, also known as dry land, ground, or earth, is the solid terrestrial surface of Earth not submerged by the ocean or another body of water. It makes up 29.2% of Earth's surface and includes all continents and islands. Earth's land surface is almost entirely covered by regolith, a layer of rock, soil, and minerals that forms the outer part of the crust. Land plays an important role in Earth's climate system, being involved in the carbon cycle, nitrogen cycle, and water cycle. One-third of land is covered in trees, another third is used for agriculture, and one-tenth is covered in permanent snow and glaciers. The remainder consists of desert, savannah, and prairie. Land terrain varies greatly, consisting of mountains, deserts, plains, plateaus, glaciers, and other landforms. In physical geology, the land is divided into two major categories: mountain ranges and relatively flat interiors called cratons. Both form over millions of years through plate tectonics. Streams, a major part of Earth's water cycle, shape the landscape, carve rocks, transport sediments, and replenish groundwater. At high elevations or latitudes, snow is compacted and recrystallized over hundreds or thousands of years to form glaciers, which can be so heavy that they warp the Earth's crust. About 30 percent of land has a dry climate, losing more water through evaporation than it gains from precipitation. Since warm air rises, this generates winds, though Earth's rotation and the uneven distribution of sunlight also play a part. Land is commonly defined as the solid, dry surface of Earth. It can also refer to the collective natural resources that the land holds, including rivers, lakes, and the biosphere. Human manipulation of the land, including agriculture and architecture, can also be considered part of land. Land is formed from the continental crust, the layer of rock on which soil, groundwater, and human and animal activity sit. Though modern terrestrial plants and animals evolved from aquatic creatures, Earth's first cellular life likely originated on land. Survival on land relies on fresh water from rivers, streams, lakes, and glaciers, which constitute only three percent of the water on Earth. The vast majority of human activity throughout history has occurred in habitable land areas supporting agriculture and various natural resources. In recent decades, scientists and policymakers have emphasized the need to manage land and its biosphere more sustainably, through measures such as restoring degraded soil, preserving biodiversity, protecting endangered species, and addressing climate change.
Definition
Land is often defined as the solid, dry surface of Earth. The word land may also refer collectively to the natural resources of Earth, including its land cover, rivers, shallow lakes, its biosphere, the lowest layer of the atmosphere (the troposphere), groundwater reserves, and the physical results of human activity on land, such as architecture and agriculture. The boundary between land and sea is called the shoreline.
Etymology
The word land is derived from Old English, from the Proto-Germanic *landą, "untilled land", and ultimately from the Proto-Indo-European root *lendʰ-, whose descendants are found especially in northern regions that were home to languages like Proto-Celtic and Proto-Slavic. Examples include Old Irish land, "land, plot, church building", Old Irish ithlann, "threshing floor", and Old East Slavic ljadina, "wasteland, weeds". A country or nation may be referred to as the motherland, fatherland, or homeland of its people.
Many countries and other places have names incorporating the suffix -land (e.g. England, Greenland, and New Zealand). The equivalent suffix -stan from Indo-Iranian, ultimately derived from the Proto-Indo-Iranian *stānam, "place", is also present in many country and location names, such as Pakistan, Afghanistan, and others throughout Central Asia. The suffix is also used more generally, as in Persian registân, "place of sand, desert", golestân, "place of flowers, garden", gurestân, "graveyard, cemetery", and Hindustân, "land of the Indus people".
Physical science
The study of land and its history in general is called geography. Mineralogy is the study of minerals, and petrology is the study of rocks. Soil science is the study of soils, encompassing the sub-disciplines of pedology, which focuses on soil formation, and edaphology, which focuses on the relationship between soil and life.
Formation
The earliest material found in the Solar System is dated to about 4.57 billion years ago; therefore, Earth itself must have been formed by accretion around this time. The formation and evolution of the Solar System bodies occurred in tandem with the Sun. In theory, a solar nebula partitions a volume out of a molecular cloud by gravitational collapse, which begins to spin and flatten into a circumstellar disc, out of which the planets then grow (in tandem with the star). A nebula contains gas, ice grains and dust (including primordial nuclides). In the nebular hypothesis, planetesimals begin to form as particulate matter accumulates by cohesive clumping and then by gravity. The primordial Earth's assembly took 10–20 million years; by about 4.54 billion years ago, the primordial Earth had formed. Earth's atmosphere and oceans were formed by volcanic activity and outgassing that included water vapour. The origin of the world's oceans was condensation augmented by water and ice delivered by asteroids, protoplanets, and comets. In this model, atmospheric "greenhouse gases" kept the oceans from freezing while the newly formed Sun was only at 70% luminosity. By about 3.5 billion years ago, the Earth's magnetic field was established, which helped prevent the atmosphere from being stripped away by the solar wind. The atmosphere and oceans of the Earth continuously shape the land by eroding and transporting solids on the surface. Earth's crust formed when the molten outer layer of the planet cooled to form a solid mass as the accumulated water vapour began to act in the atmosphere. Once land became capable of supporting life, biodiversity evolved over hundreds of millions of years, expanding continually except when punctuated by mass extinctions. The two models that explain land mass propose either a steady growth to the present-day forms or, more likely, a rapid growth early in Earth's history followed by a long-term steady continental area. Continents are formed by plate tectonics, a process ultimately driven by the continuous loss of heat from the Earth's interior. On time scales lasting hundreds of millions of years, supercontinents have formed and broken apart three times. Roughly 750 million years ago, one of the earliest known supercontinents, Rodinia, began to break apart. The continents later recombined to form Pannotia, 600–540 million years ago, then finally Pangaea, which broke apart about 180 million years ago.
Landmasses
A continuous area of land surrounded by an ocean is called a landmass. Although it is most often written as one word to distinguish it from the usage "land mass" (the measure of land area), it may also be written as two words.
There are four major continuous landmasses on Earth: Afro-Eurasia, the Americas, Antarctica, and Australia, which are subdivided into continents. Up to seven geographical regions are commonly regarded as continents. Ordered from greatest to least land area, these continents are Asia, Africa, North America, South America, Antarctica, Europe, and Australia.
Terrain
Terrain refers to an area of land and its features. Terrain affects travel, mapmaking, ecosystems, and surface water flow and distribution. Over a large area, it can influence climate and weather patterns. The terrain of a region largely determines its suitability for human settlement: flatter alluvial plains tend to have better farming soils than steeper, rockier uplands. Elevation is defined as the vertical distance between an object and sea level, while altitude is defined as the vertical distance from an object to Earth's surface. The elevation of Earth's land surface varies from the low point of −418 m (−1,371 ft) at the Dead Sea to a maximum altitude of 8,848 m (29,029 ft) at the top of Mount Everest. The mean height of land above sea level is about 797 m (2,615 ft), with 98.9% of dry land situated above sea level. Relief refers to the difference in elevation within a landscape; for example, flat terrain would have "low relief", while terrain with a large elevation difference between the highest and lowest points would be deemed "high relief". Most land has relatively low relief. The change in elevation between two points of the terrain is called a slope or gradient. A topographic map is a form of terrain cartography which depicts terrain in terms of its elevation, slope, and the orientation of its landforms. It has prominent contour lines, which connect points of similar elevation, while perpendicular slope lines point in the direction of the steepest slope. Hypsometric tints are colors placed between contour lines to indicate elevation relative to sea level. A distinction between uplands, or highlands, and lowlands is drawn in several earth science fields. In river ecology, "upland" rivers are fast-moving and colder than "lowland" rivers, encouraging different species of fish and other aquatic wildlife to live in these habitats. For example, nutrients are more abundant in slow-moving lowland rivers, encouraging different species of macrophytes to grow there. The term "upland" is also used in wetland ecology, where "upland" plants indicate an area that is not a wetland. In addition, the term moorland refers to upland shrubland biomes with acidic soils, while heathlands are lowland shrublands with acidic soils.
Geomorphology
Geomorphology is the study of the natural processes that shape land's surface, creating landforms. Erosion and tectonics, volcanic eruptions, flooding, weathering, glaciation, the growth of coral reefs, and meteorite impacts are among the processes that constantly reshape Earth's surface over geological time. Erosion transports one part of land to another via natural processes, such as wind, water, ice, and gravity. In contrast, weathering wears away rock and other solid land without transporting it somewhere else. Natural erosional processes usually take a long time to cause noticeable changes in the landscape; for example, the Grand Canyon was created over the past 70 million years by the Colorado River, which scientists estimate continues to erode the canyon at a rate of 0.3 meters (1 foot) every 200 years.
However, humans have caused erosion to be 10–40 times faster than normal, causing half the topsoil of the surface of Earth's land to be lost within the past 150 years. Plate tectonics refers to the theory that Earth's lithosphere is divided into "tectonic plates" that move over the mantle. This results in continental drift, with continents moving relative to each other. The scientist Alfred Wegener first hypothesized continental drift in 1912, and other researchers developed his idea throughout the 20th century into the now widely accepted theory of plate tectonics. Several key characteristics define the modern understanding of plate tectonics. The place where two tectonic plates meet is called a plate boundary, with different geological phenomena occurring across different kinds of boundaries: seafloor spreading is usually seen at divergent plate boundaries, subduction zones form at convergent boundaries, and plates slide laterally past each other at transform boundaries. Earthquakes and volcanic activity are common at all types of boundaries. Volcanic activity refers to any rupture in Earth's surface where magma escapes, becoming lava. The Ring of Fire, containing two-thirds of the world's volcanoes and over 70% of Earth's seismological activity, comprises the plate boundaries surrounding the Pacific Ocean.
Climate
Earth's land interacts with and heavily influences its climate, since the land's surface heats up and cools down faster than air or water. Latitude, elevation, topography, reflectivity, and land use all have varying effects on climate. The latitude of the land influences how much solar radiation reaches its surface: high latitudes receive less solar radiation than low latitudes. The land's topography is important in creating and transforming airflow and precipitation. Large landforms, such as mountain ranges, can divert wind energy and force air parcels to rise and expand, making them less dense and able to hold less heat. As air rises, this cooling effect causes condensation and precipitation. Different types of land cover influence the land's albedo, a measure of the solar radiation that is reflected rather than absorbed and transferred to Earth. Vegetation has a relatively low albedo, meaning that vegetated surfaces are good absorbers of the sun's energy. Forests have an albedo of 10–15 percent, while grasslands have an albedo of 15–20 percent. In comparison, sandy deserts have an albedo of 25–40 percent. Land use by humans also plays a role in the regional and global climate. Densely populated cities are warmer and create urban heat islands that affect the precipitation, cloud cover, and temperature of the region.
Features
A landform is a natural or man-made land feature. Landforms together make up a given terrain, and their arrangement in the landscape is known as topography. Landforms include hills, mountains, canyons, and valleys, as well as shoreline features such as bays, capes, and peninsulas.
Coasts and islands
The shoreline is the interface between the land and the ocean. It migrates each day as tides rise and fall, and it moves over long periods of time as sea levels change. The shore extends from the low tide line to the highest elevation that can be reached by storm waves, and the coast stretches inland until the point where ocean-related features are no longer found. When land is in contact with bodies of water, it can be eroded. The weathering of a coastline may be affected by the tides, which are caused by changing gravitational forces on larger bodies of water.
Coasts are important zones in natural ecosystems, often home to a wide range of biodiversity. On land, they harbour important ecosystems such as freshwater or estuarine wetlands, which are important for bird populations and other terrestrial animals. In wave-protected areas they harbor saltmarshes, mangroves or seagrasses, all of which can provide nursery habitat for finfish, shellfish, and other aquatic species. Rocky shores are usually found along exposed coasts and provide habitat for a wide range of sessile animals (e.g. mussels, starfish, barnacles) and various kinds of seaweeds. Along tropical coasts with clear, nutrient-poor water, coral reefs can often be found between depths of 1–50 meters (3.3–164.0 feet). According to a United Nations atlas, 44% of all people live within 150 km (93 mi) of the sea. Because of their importance in society and their high concentration of population, coasts are important for major parts of the global food and economic system, and they provide many ecosystem services to humankind. For example, important human activities happen in port cities. Coastal fisheries for commercial, recreational, and subsistence purposes, along with aquaculture, are major economic activities and provide jobs, livelihoods, and protein for the majority of coastal human populations. Other coastal spaces like beaches and seaside resorts generate economic activity through tourism. Marine coastal ecosystems can also provide protection against sea level rise and tsunamis. In many countries, the coastal mangrove is the primary source of wood for fuel (e.g. charcoal) and building materials. Coastal ecosystems have a much higher capacity for carbon sequestration than many terrestrial ecosystems, and as such can play a critical role in the near future in mitigating climate change effects through uptake of atmospheric anthropogenic carbon dioxide. A subcontinental area of land surrounded by water is an island, and a chain of islands is an archipelago. The smaller the island, the larger the percentage of its land area that is adjacent to the water and is therefore coast or beach. Islands can be formed by a variety of processes. The Hawaiian islands, for example, formed from isolated volcanic activity even though they are not near a plate boundary. Atolls are ring-shaped islands made of coral, created when subsidence causes an island to sink beneath the ocean surface and leaves a ring of reefs around it.
Mountains and plateaus
Mountains are features that usually rise at least 300 meters (1,000 feet) higher than the surrounding terrain. The formation of mountain belts is called orogenesis and results from plate tectonics. For example, where one plate at a convergent plate boundary pushes above another, mountains can be formed either by collisional events, in which Earth's crust is pushed upwards, or by subductional events, in which Earth's crust is pushed into the mantle, causing the crust to melt, rise due to its low density, and solidify into hardened rock, thickening the crust. A plateau, also called a high plain or a tableland, is an area of highland consisting of flat terrain that is raised sharply above the surrounding area on at least one side, creating steep cliffs or escarpments. Both volcanic activity, such as the upwelling of magma and extrusion of lava, and the erosion of mountains by water, glaciers, or aeolian processes can create plateaus. Plateaus are classified according to their surrounding environment as intermontane, piedmont, or continental.
A few plateaus have a small flat top while others are much wider. Buttes are the smallest such features, often with more intrusive than extrusive igneous rock; mesas are mid-sized plateaus with horizontal bedrock strata; and plateaus or highlands proper are the widest.
Plains and valleys
Wide, flat areas of land are called plains, which cover more than one-third of Earth's land area. When they occur as lowered areas between mountains, they can create valleys, canyons or gorges, and ravines. A plateau can be thought of as an elevated plain. Plains are known to have fertile soils and to be important for agriculture, since their flatness supports grasses suitable for livestock and facilitates the harvest of crops. Floodplains provided agricultural land for some of the earliest civilizations. Erosion is often a main driver of the creation of plains and valleys, with rift valleys being a notable exception. Fjords are glacial valleys that can be thousands of meters deep, opening out to the sea.
Caves and craters
Any natural void in the ground which can be entered by a human can be considered a cave. Caves have been important to humans as a place of shelter since the dawn of humanity. Craters are depressions in the ground, but unlike caves, they do not provide shelter or extend underground. There are many kinds of craters, such as impact craters, volcanic calderas, and isostatic depressions. Karst processes can create both solution caves, the most frequent cave type, and craters, as seen in karst sinkholes.
Layers
The pedosphere is the outermost layer of Earth's continental surface; it is composed of soil and subject to soil formation processes. Below it, the lithosphere encompasses both Earth's crust and the uppermost layer of the mantle. The lithosphere rests, or "floats", on top of the mantle below it via isostasy. Above the solid ground, the troposphere and humans' use of land can be considered layers of the land.
Land cover
Land cover refers to the material physically present on the land surface, for example, woody crops, herbaceous crops, barren land, and shrub-covered areas. Artificial surfaces (including cities) account for about a third of a percent of all land. Land use refers to human allocation of land for various purposes, including farming, ranching, and recreation (e.g. national parks); worldwide, there are an estimated 16 million km2 (6.2 million sq mi) of cropland and 33 million km2 (13 million sq mi) of pastureland. Land cover change detection using remote sensing and geospatial data provides baseline information for assessing the climate change impacts on habitats and biodiversity, as well as natural resources, in the target areas. Land cover change detection and mapping is a key component of interdisciplinary land change science, which uses it to determine the consequences of land change on climate. Land change modeling is used to predict and analyze changes in land cover and use.
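At its simplest, change detection compares two co-registered land cover maps pixel by pixel and tallies class transitions. A minimal sketch of that idea; the class codes and arrays below are hypothetical, not from any specific dataset:

```python
import numpy as np

# Minimal land-cover change detection between two classified rasters.
# Assumes two co-registered maps of class codes, e.g.
# 1 = forest, 2 = cropland, 3 = urban (hypothetical coding).
cover_2000 = np.array([[1, 1, 2],
                       [1, 2, 2],
                       [3, 3, 2]])
cover_2020 = np.array([[1, 2, 2],
                       [1, 2, 3],
                       [3, 3, 3]])

changed = cover_2000 != cover_2020   # per-pixel change mask
change_rate = changed.mean()         # fraction of pixels that changed class

# Transition matrix: counts of pixels moving from class i to class j.
n_classes = 3
transitions = np.zeros((n_classes, n_classes), dtype=int)
for a, b in zip(cover_2000.ravel(), cover_2020.ravel()):
    transitions[a - 1, b - 1] += 1

print(f"{change_rate:.0%} of pixels changed class")
print(transitions)
```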
Soil
Soil is a mixture of organic matter, minerals, gases, liquids, and organisms that together support life. Soil consists of a solid phase of minerals and organic matter (the soil matrix), as well as a porous phase that holds gases (the soil atmosphere) and water (the soil solution). Accordingly, soil is a three-state system of solids, liquids, and gases. Soil is a product of several factors: the influence of climate, relief (elevation, orientation, and slope of terrain), organisms, and the soil's parent materials (original minerals) interacting over time. It continually undergoes development by way of numerous physical, chemical and biological processes, which include weathering and erosion. Given its complexity and strong internal connectedness, soil ecologists regard soil as an ecosystem. Soil acts as an engineering medium, a habitat for soil organisms, a recycling system for nutrients and organic wastes, a regulator of water quality, a modifier of atmospheric composition, and a medium for plant growth, making it a critically important provider of ecosystem services. Since soil has a tremendous range of available niches and habitats, it contains a prominent part of the Earth's genetic diversity. A gram of soil can contain billions of organisms, belonging to thousands of species, mostly microbial and largely still unexplored. Soil is a major component of the Earth's ecosystem. The world's ecosystems are impacted in far-reaching ways by the processes carried out in the soil, with effects ranging from ozone depletion and global warming to rainforest destruction and water pollution. With respect to Earth's carbon cycle, soil acts as an important carbon reservoir, and it is potentially one of the most reactive to human disturbance and climate change. As the planet warms, it has been predicted that soils will add carbon dioxide to the atmosphere due to increased biological activity at higher temperatures, a positive feedback (amplification). This prediction has, however, been questioned in light of more recent knowledge about soil carbon turnover.
Continental crust
Continental crust is the layer of igneous, sedimentary, and metamorphic rocks that forms the geological continents and the areas of shallow seabed close to their shores, known as continental shelves. This layer is sometimes called sial, because its bulk composition is richer in aluminium silicate and has a lower density than the oceanic crust, called sima, which is richer in magnesium silicate. Changes in seismic wave velocities have shown that at a certain depth (the Conrad discontinuity), there is a reasonably sharp contrast between the more felsic upper continental crust and the lower continental crust, which is more mafic in character. The composition of land is not uniform across the Earth, varying between locations and between strata within the same location. The most prominent components of upper continental crust include silicon dioxide, aluminium oxide, and magnesium. The continental crust consists of lower-density material such as the igneous rocks granite and andesite. Less common is basalt, a denser volcanic rock that is the primary constituent of the ocean floors. Sedimentary rock is formed from the accumulation of sediment that becomes buried and compacted together. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form only about 5% of the crust. The most abundant silicate minerals on Earth's surface include quartz, feldspars, amphibole, mica, pyroxene and olivine. Common carbonate minerals include calcite (found in limestone) and dolomite. The rock that makes up land is thicker than oceanic crust, and it is far more varied in composition. About 31% of the continental crust is submerged in shallow water, forming continental shelves.
Life science
Land provides many ecosystem services, such as mitigating climate change, regulating water supply through drainage basins and river systems, and supporting food production.
Land resources are finite, which has led to regulations intended to safeguard these ecosystem services, and to a set of practices called sustainable land management.
Land biomes
A biome is an area "characterized by its vegetation, soil, climate, and wildlife." There are five major types of biomes on land: grasslands, forests, deserts, tundras, and freshwater. Other types of biomes include shrublands, wetlands, and polar ice caps. An ecosystem refers to the interaction between organisms within a particular environment, and a habitat refers to the environment where a given species or population of organisms lives. Biomes may span more than one continent and contain a variety of ecosystems and habitats. Deserts have an arid climate, generally defined to mean that they receive less than 250 mm (10 in) of precipitation per year. They make up around one fifth of the Earth's land area, are found on every continent, and can be very hot or very cold (see polar desert). They are home to animals and plants which evolved to be tolerant of droughts. In deserts, most erosion is caused by running water, usually during violent thunderstorms, which cause flash floods. Deserts are expanding due to desertification, which is caused by excessive deforestation and overgrazing. Tundra is a biome where tree growth is hindered by frigid temperatures and short growing seasons. There are three types of tundra associated with different regions: Arctic tundra, alpine tundra, and Antarctic tundra. A forest is an area of land dominated by trees. Many definitions of "forest" are used throughout the world, incorporating factors such as tree density, tree height, land use, legal standing, and ecological function. The United Nations' Food and Agriculture Organization (FAO) defines a forest as: "land spanning more than 0.5 hectares with trees higher than 5 meters and a canopy cover of more than 10 per cent, or trees able to reach these thresholds in situ. It does not include land that is predominantly under agricultural or urban use."
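The FAO definition is mechanical enough to express as a small decision procedure. A minimal sketch; the function name and boolean flags are illustrative conveniences, not any standard API:

```python
def is_forest_fao(area_ha: float, tree_height_m: float,
                  canopy_cover_pct: float, agri_or_urban: bool,
                  can_reach_thresholds: bool = False) -> bool:
    """Apply the FAO forest definition quoted above.

    Land counts as forest if it spans more than 0.5 ha with trees
    over 5 m and canopy cover over 10%, or with trees able to reach
    those thresholds in situ, unless the land is predominantly under
    agricultural or urban use.
    """
    if agri_or_urban:          # orchards, city parks, etc. are excluded
        return False
    if area_ha <= 0.5:         # must span more than 0.5 hectares
        return False
    mature = tree_height_m > 5 and canopy_cover_pct > 10
    return mature or can_reach_thresholds

print(is_forest_fao(2.0, 12.0, 40.0, agri_or_urban=False))  # True
print(is_forest_fao(2.0, 12.0, 40.0, agri_or_urban=True))   # False
```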
Types of forests include rainforests, deciduous forests, and boreal forests. Grasslands are areas where the vegetation is dominated by grasses (Poaceae). However, sedge (Cyperaceae) and rush (Juncaceae) can also be found, along with variable proportions of legumes like clover and other herbs. Grasslands occur naturally on all continents except Antarctica and are found in most ecoregions of the Earth. Furthermore, grasslands are one of the largest biomes on Earth and dominate the landscape worldwide. Types include natural, semi-natural, and agricultural grasslands. Savannas are grasslands with occasional, scattered trees.
Fauna and flora
Land plants evolved from green algae and are called embryophytes. They include trees, shrubs, ferns, grass, moss, and flowers. Most plants are vascular plants, meaning that their tissues distribute water and minerals throughout the plant. Through photosynthesis, most plants nourish themselves from sunlight and water, breathing in carbon dioxide and breathing out oxygen. Between 20 and 50% of oxygen is produced by land vegetation. Unlike plants, terrestrial animals are not a monophyletic group; that is, a group including all terrestrial animals does not encompass all lineages descended from a common ancestor. This is because there are organisms, such as the whale, that evolved from terrestrial mammals back to an aquatic lifestyle. Many megafauna of the past, such as non-avian dinosaurs, have become extinct due to extinction events, e.g. the Quaternary extinction event.
Humans and land
Land is "deeply intertwined with human development." It is a crucial resource for human survival; humans depend on land for subsistence and can develop strong symbolic attachments to it. Access to land can determine "survival and wealth," particularly in developing countries, giving rise to complex power relationships in production and consumption. Most of the world's philosophies and religions recognize a human duty of stewardship towards land and nature.
Culture
Many humans see land as a source of "spirituality, inspiration, and beauty." Many also derive a sense of belonging from land, especially if it also belonged to their ancestors. Various religions teach about a connection between humans and the land (such as veneration of Bhumi, a personification of the Earth in Hinduism, and the obligation to protect land as hima in Islam), and in almost every Indigenous group there are etiological stories about the land they live on. For Indigenous peoples, connection to the land is an important part of their identity and culture, and some religious groups consider a particular area of land to be sacred, such as the Holy Land in the Abrahamic religions. Creation myths in many religions involve stories of the creation of the world by a supernatural deity or deities, including accounts wherein the land is separated from the oceans and the air. The Earth itself has often been personified as a deity, in particular a goddess. In many cultures, the mother goddess is also portrayed as a fertility deity. To the Aztecs, Earth was called Tonantzin, "our mother"; to the Incas, Earth was called Pachamama, "mother earth". In Norse mythology, the Earth giantess Jörð was the mother of Thor and the daughter of Annar. Ancient Egyptian mythology is different from that of other cultures because Earth (Geb) is male and the sky (Nut) is female. Ancient Near Eastern cultures conceived of the world as a flat disk of land surrounded by ocean. The Pyramid Texts and Coffin Texts reveal that the ancient Egyptians believed Nun (the ocean) was a circular body surrounding nbwt (a term meaning "dry lands" or "islands"). The Hebrew Bible, drawing on other Near Eastern ideas, depicts the Earth as a flat disc floating on water, with another expanse of water above it. A similar model is found in the Homeric account of the 8th century BC, in which "Okeanos, the personified body of water surrounding the circular surface of the Earth, is the begetter of all life and possibly of all gods." The spherical form of the Earth was suggested by early Greek philosophers, a belief espoused by Pythagoras. Contrary to popular belief, most educated people in the Middle Ages did not believe the Earth was flat; this misconception is often called the "Myth of the Flat Earth". As evidenced by thinkers such as Thomas Aquinas, the European belief in a spherical Earth was widespread by this point in time. Prior to circumnavigation of the planet and the introduction of space flight, belief in a spherical Earth was based on observations of the secondary effects of the Earth's shape and parallels drawn with the shape of other planets.
Travel
Humans have commonly traveled for business, pleasure, discovery, and adventure, all made easier in recent human history by technologies like cars, trains, planes, and ships. Land navigation is an aspect of travel and refers to progressing through unfamiliar terrain using navigational tools like maps with references to terrain, a compass, or satellite navigation.
Navigation on land is often facilitated by reference to landmarks: enduring and recognizable natural or artificial features that stand out from their nearby environment and are often visible from long distances. Natural landmarks can be characteristic features, such as mountains or plateaus, with examples including Table Mountain in South Africa, Mount Ararat in Turkey, the Grand Canyon in the United States, Uluru in Australia, and Mount Fuji in Japan. Two major eras of exploration occurred in human history: one of divergence, and one of convergence. The former saw humans moving out of Africa, settling in new lands, and developing distinct cultures in relative isolation. Early explorers settled in Europe and Asia; 14,000 years ago, some crossed the Ice Age land bridge from Siberia to Alaska and moved south to settle in the Americas. For the most part, these cultures were ignorant of each other's existence. The second period, occurring over roughly the last 10,000 years, saw increased cross-cultural exchange through trade and exploration, marking a new era of cultural intermingling.
Trade
Human trade has occurred since the prehistoric era. Peter Watson dates the history of long-distance commerce from c. 150,000 years ago. Major trade routes throughout history have existed on land, such as the Silk Road, which linked East Asia with Europe, and the Amber Road, which was used to transfer amber from Northern Europe to the Mediterranean Sea. The Dark Ages led trade to collapse in the West, but it continued to flourish among the kingdoms of Africa, the Middle East, India, China, and Southeast Asia. During the Middle Ages, Central Asia was the economic centre of the world, and luxury goods were commonly traded in Europe. Physical money (precious metals) and barter goods were dangerous to carry over long distances. To address this, a burgeoning banking industry enabled the shift to movable wealth or capital, making it far easier and safer to trade across long distances. After the Age of Sail, international trade mostly occurred along sea routes, notably to prevent intermediary countries from controlling trade routes and the flow of goods. In economics, land is a factor of production: it can be leased in exchange for rent, along with the use of its various raw material resources (trees, oil, metals).
Land use
For more than 10,000 years, humans have engaged in activities on land such as hunting, foraging, controlled burning, land clearing, and agriculture. Beginning with the Neolithic Revolution and the spread of agriculture around the world, human land use has significantly altered terrestrial ecosystems, with an essentially global transformation of Earth's landscape by 3,000 years ago. From around 1750, human land use has increased at an accelerating rate due to the Industrial Revolution, which created a greater demand for natural resources and caused rapid population growth. Agriculture includes both crop farming and animal husbandry. A third of Earth's land surface is used for agriculture, with an estimated 16 million km2 of cropland and 33 million km2 of pastureland. This has had significant impacts on Earth's ecosystems. When land is cleared to make way for agriculture, native flora and fauna are replaced with newly introduced crops and livestock. Excessively high agricultural land use is driven by poor management practices (which lead to lower food yields, necessitating more land use), food demand, food waste, and diets high in meat. Urbanization has led to greater population growth in urban areas in the last century.
Although urban areas make up less than 3 percent of Earth's land area, the global population shifted from a majority living in rural areas to a majority living in urban areas in 2007. People living in urban areas depend on food produced in rural areas outside their cities, which creates greater demand for agriculture and drives land use change well beyond city boundaries. Urbanization also displaces agricultural land because it mainly takes place on the most fertile land. Urban expansion in peri-urban areas fragments agricultural and natural lands, forcing agriculture to move to less fertile land elsewhere. Because this land is less fertile, more land is needed for the same output, which increases total agricultural land use. Another form of land use is mining, whereby minerals are extracted from the ground using a variety of methods. Evidence of mining activity dates back to around 3000 BCE in Ancient Egypt. Important minerals include iron ore, mined for use as a raw material; coal, mined for energy production; and gemstones, mined for use in jewellery and currency.
Law
The phrase "the law of the land" first appeared in 1215 in Magna Carta, inspiring its later usage in the United States Constitution. The idea of common land also originated with medieval English law; it refers to collective ownership of land, treating it as a common good. In environmental science, economics, and game theory, the tragedy of the commons refers to individuals' use of common spaces for their own gain, deteriorating the land overall by taking more than their fair share and failing to cooperate with others. The idea of common land suggests public ownership, but some land can still be privatized as the property of an individual, such as a landlord or king. In the developed world, land is expected to be privately owned by an individual with legal title, but in the developing world the right to use land is often divided, with the rights to land resources being given to different people at different times for the same area of land. Beginning in the late 20th century, the international community has begun to recognise Indigenous land rights in law, for example through the Treaty of Waitangi for Māori people, the Act on Greenland Self-Government for Inuit people, and the Indigenous Peoples Rights Act in the Philippines.
Geopolitics
Borders are geographical boundaries imposed either by geographic features (oceans, mountain ranges, rivers) or by political entities (governments, states, or subnational entities). Political borders can be established through warfare, colonization, or mutual agreements between the political entities that reside in those areas; the creation of these agreements is called boundary delimitation. Many wars and other conflicts have occurred in efforts by participants to expand the land under their control, or to assert control of a specific area considered to hold strategic, historical, or cultural significance. The Mongol Empire of the 13th and 14th centuries became the largest contiguous land empire in history through war and conquest. In the 19th-century United States, the concept of manifest destiny was developed by various groups, asserting that American settlers were destined to expand across North America. This concept was used to justify military action against the indigenous peoples of North America and of Mexico.
The aggression of Nazi Germany in World War II was motivated in part by the concept of Lebensraum ("living space"), which had first become a geopolitical goal of Imperial Germany in World War I (1914–1918) as the core element of its programme of territorial expansion. The most extreme form of this ideology was supported by the Nazi Party (NSDAP). Lebensraum was one of the leading motivations for Nazi Germany's initiation of World War II, and the policy was pursued until the war's end.
Environmental issues
Land degradation is "the reduction or loss of the biological or economic productivity and complexity" of land as a result of human activity. Land degradation is driven by many different activities, including agriculture, urbanization, energy production, and mining. Humans have altered more than three-quarters of ice-free land through habitation and other use, fundamentally changing ecosystems. Human activity is a major factor in the Holocene extinction, and human-caused climate change is causing rising sea levels and ecosystem loss. Environmental scientists study land's ecosystems, natural resources, biosphere (fauna and flora), troposphere, and the impact of human activity on these. Their recommendations have led to international action to prevent biodiversity loss and desertification and to encourage sustainable forest and waste management. The conservation movement lobbies for the protection of endangered species and the protection of natural areas, such as parks. International frameworks have focused on analyzing how humans can meet their needs while using land more efficiently and preserving its natural resources, notably under the United Nations' Sustainable Development Goals framework.
Soil degradation
Human land use can cause soil to degrade, both in quality and in quantity. Soil degradation can be caused by agrochemicals (such as fertilizers, pesticides, and herbicides), infrastructure development, and mining, among other activities. Several different processes lead to soil degradation. Physical processes, such as erosion, sealing, and crusting, lead to the structural breakdown of the soil, meaning water cannot penetrate the soil surface, causing surface runoff. Chemical processes, such as salinization, acidification, and toxication, lead to chemical imbalances in the soil. Salinization is particularly detrimental, as it makes land less productive for agriculture and affects at least 20% of all irrigated lands. Deliberate disruption of soil in the form of tillage can also alter biological processes in the soil, leading to excessive mineralization and the loss of nutrients. Desertification is a type of land degradation in drylands in which fertile areas become increasingly arid as a result of natural processes or human activities, resulting in a loss of biological productivity. This spread of arid areas can be influenced by a variety of human factors, such as deforestation, improper land management, overgrazing, anthropogenic climate change, and overexploitation of soil. Throughout geological history, desertification has occurred naturally, though in recent times it has been greatly accelerated by human activity.
Pollution
Ground pollution is soil contamination by pollutants, such as hazardous waste or litter. Ground pollution can be prevented by properly monitoring and disposing of waste, along with reducing unnecessary chemical and plastic use.
Proper disposal of waste, however, is often not economically beneficial or technologically viable, leading to short-term waste disposal practices that pollute the earth. Examples include dumping harmful industrial byproducts, overusing agricultural fertilizers and other chemicals, and poorly maintaining landfills. Some landfills can be thousands of acres in size, such as the Apex Regional Landfill in Las Vegas. Water pollution on land is the contamination of non-oceanic hydrological surface and underground water features, such as lakes, ponds, rivers, streams, wetlands, aquifers, reservoirs, and groundwater, as a result of human activities. It may be caused by toxic substances (e.g., oil, metals, plastics, pesticides, persistent organic pollutants, industrial waste products), stressful conditions (e.g., changes of pH, hypoxia or anoxia, increased temperatures, excessive turbidity, unpleasant taste or odor, and changes of salinity), or pathogenic organisms.
Biodiversity loss
The biodiversity of Earth, the variety and variability of life, is threatened by climate change, human activities, and invasive species. As the rate of extinction has increased, biodiversity loss has accelerated. Agriculture can cause biodiversity loss, as land is converted for agricultural use at a very high rate, particularly in the tropics, which directly causes habitat loss. The use of pesticides and herbicides can also negatively impact the health of local species. Ecosystems can also be divided and degraded by infrastructure development outside of urban areas. Biodiversity loss can sometimes be reversed through ecological restoration or ecological resilience, such as through the restoration of abandoned agricultural areas; however, it may also be permanent (e.g. through land loss). The planet's ecosystem is quite sensitive: occasionally, minor changes from a healthy equilibrium can have a dramatic influence on a food web or food chain, up to and including the coextinction of that entire food chain. Biodiversity loss leads to reduced ecosystem services and can eventually threaten food security. Earth is currently undergoing its sixth mass extinction (the Holocene extinction) as a result of human activities which push beyond the planetary boundaries. So far, this extinction has proven irreversible.
Resource depletion
Although humans have used land for its natural resources since ancient times, demand for resources such as timber, minerals, and energy has grown exponentially since the Industrial Revolution due to population growth. When a natural resource is depleted to the point of diminishing returns, this is considered overexploitation of that resource. Some natural resources, such as timber, are considered renewable, because with sustainable practices they replenish to their previous levels. Fossil fuels such as coal are not considered renewable, as they take millions of years to form; the current supply of coal is expected to peak in the middle of the 21st century. Economic materialism, or consumerism, has influenced destructive patterns of modern resource usage, in contrast with pre-industrial usage.
Gallery
Different varieties of landscapes.
Physical sciences
Geography
null
33264696
https://en.wikipedia.org/wiki/Stab%20wound
Stab wound
A stab wound is a specific form of penetrating trauma to the skin that results from a knife or a similar pointed object. While stab wounds are typically known to be caused by knives, they can also occur from a variety of implements, including broken bottles and ice picks. Most stabbings occur because of intentional violence or through self-infliction. Treatment depends on many different variables, such as the anatomical location and the severity of the injury. Even though stab wounds are inflicted at a much greater rate than gunshot wounds, they account for less than 10% of all penetrating trauma deaths.
Management
Stab wounds can cause various internal and external injuries. They are generally caused by low-velocity weapons, meaning the injuries inflicted are typically confined to the path the object took internally, instead of damaging surrounding tissue, as is common with gunshot wounds. The abdomen is the most commonly injured area from a stab wound. Interventions that may be needed, depending on the severity of the injury, include airway management, intravenous access, and control of hemorrhage. The length and size of the knife blade, as well as the trajectory it followed, may be important in planning management, as they can predict which structures were damaged. There are also special considerations to take into account: given the nature of these injuries, there is a higher likelihood that the person is under the influence of drugs, which can make it harder to obtain a complete medical history. Special precautions should also be taken to prevent further injury from a perpetrator to the victim in a hospital setting. As when treating shock, it is important to keep the systolic pressure above 90 mmHg, to maintain the person's core body temperature, and, in severe cases, to transport the person promptly to a trauma center. To determine whether internal bleeding is present, a focused assessment with sonography for trauma (FAST) or diagnostic peritoneal lavage (DPL) can be used. Other diagnostic tests, such as a computed tomography scan or various contrast studies, can be used to classify the injury more definitively in both severity and location. Local wound exploration is another technique that may be used to determine how far the object penetrated. Observation can be used in place of surgery, sparing the person an unnecessary operation, which makes it the preferred treatment of penetrating trauma secondary to a stab wound when hypovolemia or shock is not present. Laboratory studies, such as a hematocrit and white blood cell count, and chemical tests, such as liver function tests, can also help to assess the effectiveness of care.
Surgery
Surgical intervention may be required, depending on what organ systems are affected by the wound and the extent of the damage. It is important for care providers to thoroughly check the wound site, because a laceration of an artery often results in delayed complications, sometimes leading to death. In cases where there is no suspicion of bleeding or infection, there is no known benefit of surgery to correct any present injuries. Typically a surgeon will track the path of the weapon to determine the anatomical structures that were damaged and repair any damage deemed necessary. Surgical packing of the wounds is generally not the favored technique to control bleeding, as it can be less effective than repairing the directly affected organs. In severe cases where hemostasis cannot be maintained, damage control surgery may be utilized.
Epidemiology
Stab wounds are one of the most common forms of penetrating trauma globally, but they account for lower mortality than blunt injuries due to their more focused impact on the body. Stab wounds can result from self-infliction, accidental nail gun injuries, and stingray injuries; however, most stab wounds are caused by intentional violence, as the weapons used to inflict them are readily available compared to guns. Stabbings are a relatively common cause of homicide in Canada and the United States. Typically, death from a stab wound is due to organ failure or blood loss. Stab wounds are the mechanism of approximately 2% of suicides. In Canada, homicides by stabbing and by gunshot occur in roughly equal numbers (1,008 versus 980 for the years 2005 to 2009). In the United States, guns are a more common method of homicide (9,484 versus 1,897 for stabbing or cutting in 2008). Stab wounds occur four times more often than gunshot wounds in the United Kingdom, but the mortality rate associated with stabbing has ranged from 0 to 4%, as 85% of injuries sustained from stab wounds affect only subcutaneous tissue. In Belgium, most assaults resulting in a stab wound are committed by, and against, men and members of ethnic minorities.
History
Some of the first principles of wound care come from Hippocrates, who promoted keeping wounds dry, except for irrigation. Guy de Chauliac promoted removal of foreign bodies, rejoining of severed tissues, maintenance of tissue continuity, preservation of organ substance, and prevention of complications. The first successful operation on a person who was stabbed in the heart was performed in 1896 by Ludwig Rehn, in what is now considered the first case of heart surgery. In the late 1800s, stab wounds were hard to treat because of the poor transportation of victims to health facilities and surgeons' limited ability to effectively repair organs. However, the laparotomy, which had been developed a few years earlier, provided better patient outcomes than had been seen before. After its inception, the use of exploratory laparotomies was highly encouraged for "all deep stab wounds", in which surgeons were to stop active bleeding, repair damage, and remove "devitalized tissues". Because laparotomies were seen to benefit patients, they were used on almost every person with an abdominal stab wound until the 1960s, when doctors were encouraged to use them more selectively in favor of observation. During the Korean War, a greater emphasis was put on the use of pressure dressings and tourniquets to initially control bleeding.
Biology and health sciences
Types
Health
24631368
https://en.wikipedia.org/wiki/Tunable%20metamaterial
Tunable metamaterial
A tunable metamaterial is a metamaterial with a variable response to an incident electromagnetic (EM) wave. This includes remotely controlling how an incident EM wave interacts with the metamaterial, which translates into the capability to determine whether the wave is transmitted, reflected, or absorbed. In general, the lattice structure of a tunable metamaterial is adjustable in real time, making it possible to reconfigure a metamaterial device during operation. Such work encompasses developments beyond the bandwidth limitations of left-handed materials, achieved by constructing various types of metamaterials. Ongoing research in this domain includes electromagnetic band gap (EBG) metamaterials, also known as photonic band gap (PBG) materials, and negative refractive index materials (NIMs).
Overview
Since natural materials exhibit very weak coupling through the magnetic component of the electromagnetic wave, artificial materials that exhibit strong magnetic coupling are being researched and fabricated. These artificial materials are known as metamaterials. The first of these were fabricated (in the lab) with an inherent, limited response to only a narrow frequency band at any given time; their main purpose was to demonstrate metamaterials in practice. The resonant nature of metamaterials results in frequency dispersion and narrow-bandwidth operation, where the center frequency is fixed by the geometry and dimensions of the rudimentary elements comprising the metamaterial composite. These were followed by demonstrations of metamaterials that were tunable only by changing the geometry and/or position of their components, and then by metamaterials that are tunable over wider frequency ranges, along with strategies for varying the frequencies of a single medium (metamaterial). This is in contrast to a fixed-frequency metamaterial, whose response is determined by the parameters imbued during fabrication.
Tuning strategies for split ring resonators
Metamaterial-based devices could come to include filters, modulators, amplifiers, transistors, and resonators, among others. The usefulness of such devices would be extended tremendously if the metamaterial's response characteristics could be dynamically tuned. Control of the effective electromagnetic parameters of a metamaterial is possible through externally tunable components.
Single element control
Studies have examined the ability to control the response of individual particles using tunable devices such as varactor diodes, semiconductor materials, and barium strontium titanate (BST) thin films. For example, H. T. Chen and colleagues, in 2008, were able to fabricate a repeating split-ring resonator (SRR) cell with semiconductor material lining the gaps. This initial step in metamaterial research expanded the spectral range of operation for a given, specific metamaterial device, and it also opened the door for implementing new device concepts. Incorporating the semiconductor material this way is notable because of the higher frequency ranges at which this metamaterial operates: it is suitable at terahertz (THz) and higher frequencies, where the entire metamaterial composite may have more than 10^4 unit cells, along with bulk-vertical integration of the tuning elements. Strategies employed for tuning at lower frequencies would not be possible because of the number of unit cells involved. The semiconductor material, such as silicon, is controlled by photoexcitation. This in turn controls, or alters, the effective size of the capacitor and tunes the capacitance. The whole structure is not just semiconductor material; it was termed a "hybrid" because the semiconductor material was fused with dielectric material in a silicon-on-sapphire (SOS) wafer. Wafers were then stacked, fabricating the whole structure. A. Degiron et al. appear to have used a similar strategy in 2007.
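To first order, an SRR behaves like an LC resonator whose resonance sits at f0 = 1/(2π√(LC)), so anything that changes the gap capacitance (here, photoexcited carriers in the silicon) shifts the resonance. A minimal numerical sketch; all component values are hypothetical, chosen only to land in the THz range:

```python
import math

def srr_resonance_hz(L_henry: float, C_farad: float) -> float:
    """Resonance of an idealized split-ring resonator modeled as an
    LC circuit: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

L = 1e-11         # ~10 pH effective loop inductance (hypothetical)
C_dark = 2.5e-15  # gap capacitance with unexcited silicon (hypothetical)
C_pumped = 4.0e-15  # photoexcitation enlarges the effective capacitor

f_dark = srr_resonance_hz(L, C_dark)
f_pumped = srr_resonance_hz(L, C_pumped)
print(f"{f_dark / 1e12:.2f} THz -> {f_pumped / 1e12:.2f} THz")
# ~1.01 THz -> ~0.80 THz: a larger capacitance lowers the resonance.
```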
This in turn controls, or alters, the effective size of the capacitor and tunes the capacitance. The whole structure is not just semiconductor material. This was termed a 'hybrid', because the semiconductor material was fused with a dielectric material: a silicon-on-sapphire (SOS) wafer. Wafers were then stacked, fabricating the whole structure. A. Degiron et al. appear to have used a similar strategy in 2007. Multi-element control A multielement tunable magnetic medium was reported by Zhao et al. This structure immersed SRRs in liquid crystals, and achieved a 2% tunable range. A tunable metamaterial comprising BST-loaded SRRs encapsulates all of the tunability within the SRR circuit. In a section below, a research team reported a tunable negative index medium using copper wires and ferrite sheets. The negative permeability behavior appears to be dependent on the location and bandwidth of the ferrimagnetic resonance, a break from wholly non-magnetic materials, which produces a notable negative index band. A coil or permanent magnet is needed to supply the magnetic field bias for tuning. Electrical tuning Electrical tuning for tunable metamaterials. Magnetostatic control Magnetostatic control for tunable metamaterials. Optical pumping Optical pumping for tunable metamaterials. Tunable NIMs using ferrite material Yttrium iron garnet (YIG) films allow for a continuously tunable negative permeability, which results in a tunable frequency range over the higher frequency side of the ferromagnetic resonance of the YIG. Complementary negative permittivity is achieved using a single periodic array of copper wires. Eight wires were spaced 1 mm apart and a ferromagnetic film of a multi-layered YIG at 400 μm thickness was placed in a K band waveguide. The YIG film was applied to both sides of a gadolinium gallium garnet substrate of 0.5 mm thickness. Ferromagnetic resonance was induced when the external magnetic field H was applied along the X axis. The external magnetic field was generated with an electromagnet. Pairs of E–H tuners were connected before and after the waveguide containing the NIM composite. The tunability was demonstrated from 18 to 23 GHz. Theoretical analysis, which followed, closely matched the experimental results. An air gap was built into the structure between the array of copper wires and the YIG. This reduces coupling with the ferrite, YIG material. When negative permeability is achieved across a range of frequencies, the interaction of the ferrite with the wires in close proximity reduces the net current flow in the wires. This is the same as moving toward positive permittivity. This would be an undesired result, as the material would no longer be a NIM. The separation also reduces the effective loss of the dielectric, induced by the interaction of the wire's self-field with the ferrite's permeability. Furthermore, there are two sources of conduction in the copper wire. First, the electric field in a (microwave) waveguide creates a current in the wire. Second, any arbitrary magnetic field created by the ferrite when it moves into a perpendicular configuration induces a current. Additionally, at frequencies where μ is negative, the induced microwave magnetic field is opposite to the field excited in a TE10 mode of propagation in a waveguide. Hence, the induced current is opposite to the current resulting from the electric field in a waveguide. 
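The negative-permeability band described above can be illustrated with a simple lossless ferrite model. The following sketch is only a rough illustration, not a reproduction of the experiment described here: it evaluates the standard Polder-tensor diagonal permeability of a magnetized ferrite, using the usual electron gyromagnetic ratio of about 28 GHz per tesla, while the YIG-like saturation magnetization and the bias-field values are illustrative assumptions.

    import numpy as np

    # Lossless Polder-tensor diagonal permeability of a magnetized ferrite:
    #   mu(f) = 1 + f0*fm / (f0^2 - f^2)
    # mu is negative in the band f0 < f < sqrt(f0*(f0 + fm)), i.e. just above
    # the ferromagnetic resonance (FMR), and the band moves with the bias field.
    GAMMA = 28e9        # gyromagnetic ratio / 2*pi, in Hz per tesla
    MU0_MS = 0.175      # mu0 * saturation magnetization, tesla (illustrative YIG-like value)

    def mu_eff(f_hz, bias_tesla):
        f0 = GAMMA * bias_tesla          # FMR frequency, Hz
        fm = GAMMA * MU0_MS              # "magnetization frequency", Hz
        return 1.0 + f0 * fm / (f0**2 - f_hz**2)

    def negative_mu_band(bias_tesla):
        f0 = GAMMA * bias_tesla
        fm = GAMMA * MU0_MS
        return f0, np.sqrt(f0 * (f0 + fm))   # lower and upper edge of the mu < 0 band

    for bias in (0.65, 0.70, 0.75):          # illustrative bias fields, tesla
        lo, hi = negative_mu_band(bias)
        print(f"bias {bias:.2f} T: mu < 0 from {lo/1e9:.1f} to {hi/1e9:.1f} GHz, "
              f"mu(20 GHz) = {mu_eff(20e9, bias):+.2f}")

With these assumed values the negative-permeability window falls in the K band and shifts upward as the bias field is increased, which is consistent with the kind of 18 to 23 GHz tunability described in the text.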
Metamaterial phase shifter In aerospace applications (for example), negative index metamaterials are likely candidates for tunable, compact and lightweight phase shifters. Because the designated metamaterials can handle the appropriate power levels, have strong dispersion characteristics, and are tunable in the microwave range, they show potential as desirable phase shifters. The YIG negative index metamaterial is a composite which actually utilizes ferrite material. As a metamaterial, the ferrite produces a resonant (real) magnetic permeability μ' that is large enough to be comparable to the conventional ferrite phase shifter. The advantage of using a ferrite NIM material for phase shifter applications is that it allows use of a ferrite in the negative magnetic permeability region near the FMR (ferromagnetic resonance frequency), where |μ'| is relatively high while losses remain low. Near the FMR frequency, the magnitude of μ' is larger than that at frequencies away from it. Assuming the loss factor to be about the same for the NIM and the conventional ferrite phase shifter, we would expect a much improved performance using the NIM composite, since the phase shifts would be significantly higher owing to higher differential μ'. Liquid crystal tuning for metamaterials Liquid crystal metamaterial tunable in the near-infrared Tuning in the near infrared range is accomplished by adjusting the permittivity of an attached nematic liquid crystal. The liquid crystal material appears to be used as both a substrate and a jacket for a negative index metamaterial. The metamaterial can be tuned from negative index values, to zero index, to positive index values. In addition, negative index values can be increased or decreased by this method. Tunability of wire-grid metamaterial immersed in nematic liquid crystal Sub-wavelength metal arrays, essentially another form of metamaterial, usually operate at microwave and optical frequencies. A liquid crystal is both transparent and anisotropic at those frequencies. In addition, a liquid crystal is both intrinsically tunable and able to provide tuning for the metal arrays. In this method of tuning, the metal arrays themselves can be readily used as electrodes for applying the switching voltages. Tuning NIMs with liquid crystals Areas of active research in optical materials are metamaterials that are capable of negative values for index of refraction (NIMs), and metamaterials that are capable of zero index of refraction (ZIMs). Complicated steps required to fabricate these nano-scale metamaterials have led to the desire for fabricated, tunable structures capable of the prescribed spectral ranges or resonances. The most commonly applied scheme to achieve these effects is electro-optical tuning. Here the change in refractive index is proportional either to the applied electric field or to the square modulus of the electric field; these are the Pockels effect and Kerr effect, respectively. However, to achieve these effects electrodes must be built in during the fabrication process. This introduces problematic complexity into material formation techniques. Another alternative is to employ a nonlinear optical material as one of the constituents of this system, and depend on the optical field intensity to modify the refractive index, or magnetic parameters. Liquid crystal tuning of silicon-on-insulator ring resonators Ring resonators are optical devices designed to show resonance for specific wavelengths. 
In silicon-on-insulator layered structures, they can be very small, exhibit a high Q factor, and have low losses, which make them efficient wavelength filters. The goal is to achieve a tunable refractive index over a larger bandwidth. Structural tunability in metamaterials A novel approach is proposed for efficient tuning of the transmission characteristics of metamaterials through a continuous adjustment of the lattice structure, and is confirmed experimentally in the microwave range. Hybrid metamaterial composites Metamaterials were originally researched as passive response materials. The passive response was and still is determined by the patterning of the metamaterial elements. In other words, the majority of research has focused on the passive properties of the novel transmission, e.g., the size and shape of the inclusions, the effects of metal film thickness, hole geometry, and periodicity, with passive responses such as a negative electric response, negative index, or gradient index. In addition, the resonant response can be significantly affected by depositing a dielectric layer on metal hole arrays and by doping a semiconductor substrate. The result is significant shifting of the resonance frequency. However, even these last two methods are part of the passive material research. Electromagnetic metamaterials can be viewed as structured composites with patterned metallic subwavelength inclusions. As mesoscopic physical systems, these are built starting from the unit cell level. These unit cells are designed to yield prescribed electromagnetic properties. A characteristic of this type of metamaterial is that the individual components have a resonant (coupling) response to the electric component, the magnetic component, or both components of the electromagnetic radiation of the source. The EM metamaterial, as an artificially designed transmission medium, has so far delivered desired responses at frequencies from the microwave through to the near visible. The introduction of a natural semiconductor material within or as part of each metamaterial cell results in a new design flexibility. The incorporation, application, and location of semiconductor material is strategically planned so as to be strongly coupled at the resonance frequency of the metamaterial elements. The hybrid metamaterial composite is still a passive material. However, the coupling with the semiconductor material then allows for external stimulus and control of the hybrid system as a whole, which produces alterations in the passive metamaterial response. External excitation is produced in the form of, for example, photoconductivity, nonlinearity, or gain in the semiconductor material. Tunable spectral range via electric field control Terahertz (THz) metamaterials can show a tunable spectral range, where the magnetic permeability reaches negative values. These values were established both theoretically and experimentally. The demonstrated principle represents a step toward a metamaterial with negative refractive index capable of covering continuously a broad range of THz frequencies and opens a path for the active manipulation of millimeter and submillimeter beams. Frequency selective surface based metamaterials Frequency selective surfaces (FSS) have become an alternative to the fixed frequency metamaterial, where static geometries and spacings of unit cells determine the frequency response of a given metamaterial. 
Because arrayed unit cells maintain static positions throughout operation, a new set of geometrical shapes and spacings would have to be embedded in a newly fabricated material for each different radiated frequency and response. Instead, FSS based metamaterials allow for optional changes of frequencies in a single medium (metamaterial) rather than a restriction to a fixed frequency response. Frequency selective surfaces can be fabricated as planar 2-dimensional periodic arrays of metallic elements with specific geometrical shapes, or can be periodic apertures in a metallic screen. The transmission and reflection coefficients for these surfaces are dependent on the frequency of operation and may also depend on the polarization and the angle of incidence of the electromagnetic wave striking the material. The versatility of these structures is shown by the existence of frequency bands at which a given FSS is completely opaque (stopbands) and other bands at which the same surface allows wave transmission. An example of where this alternative is highly advantageous is in deep space or with a satellite or telescope in orbit. The expense of regular space missions to access a single piece of equipment for tuning and maintenance would be prohibitive. Remote tuning, in this case, is advantageous. FSS was first developed to control the transmission and reflection characteristics of an incident radiation wave. This has resulted in smaller cell size along with increases in bandwidth and the capability to shift frequencies in real time for artificial materials. This type of structure can be used to create a metamaterial surface with the intended application of artificial magnetic conductors or applications for boundary conditions. Another application is as a stopband device for surface wave propagation along the interface. This is because surface waves are created as a consequence of an interface between two media having dissimilar refractive indices. Depending on the application of the system that includes the two media, there may be a need to attenuate surface waves or utilize them. An FSS based metamaterial employs a (miniature) model of equivalent LC circuitry. At low frequencies the physics of the interactions is essentially defined by the LC model analysis and numerical simulation. This is also known as the static LC model. At higher frequencies the static LC concepts no longer apply, because of the dependence on phasing. When the FSS is engineered for electromagnetic band-gap (EBG) characteristics, the FSS is designed to enlarge its stopband properties in relation to dispersive, surface wave (SW) frequencies (microwave and radio frequencies). Furthermore, as an EBG it is designed to reduce its dependence on the propagating direction of the surface wave traveling across the surface (interface). Artificial magnetic conductors and High impedance surfaces A type of FSS based metamaterial is known interchangeably as an Artificial Magnetic Conductor (AMC) or a High Impedance Surface (HIS). The HIS, or AMC, is an artificial, metallic, electromagnetic structure. The structure is designed to be selective in supporting surface wave currents, different from conventional metallic conductors. It has applications for microwave circuits and antennas. As an antenna ground plane it suppresses the propagation of surface waves, and it is deployed as an improvement over the flat metal sheet used as a ground plane or reflector. Hence, this strategy tends to improve the performance of the selected antenna. 
Surface waves of sufficient strength that propagate on the metal ground plane will reach the edge and propagate into free space. This creates a multi-path interference. In contrast, the HIS surface suppresses the propagation of surface waves. Furthermore, control of the radio frequency or microwave radiation pattern is efficiently increased, and mutual coupling between antennas is also reduced. When employing conventional ground planes as the experimental control, the HIS surface exhibits a smoother radiation pattern, an increase in the gain of the main lobe, a decrease in undesirable return radiation, and a decrease in mutual coupling. Description An HIS, or AMC, can be described as a type of electromagnetic band gap (EBG) material or a type of synthetic composite that is intentionally structured with a magnetic conductor surface for an allotted but defined range of frequencies. AMC or HIS structures often emerge from an engineered periodic dielectric base along with metallization patterns designed for microwave and radio frequencies. The metallization pattern is usually determined by the intended application of the AMC or HIS structure. Furthermore, two inherent notable properties, which cannot be found in natural materials, have led to a significant number of microwave circuit applications. First, AMC or HIS surfaces are designed to have an allotted set of frequencies over which electromagnetic surface waves and currents will not be allowed to propagate. These materials are then both beneficial and practical as antenna ground planes, small flat signal processing filters, or filters as part of waveguide structures. For example, AMC surfaces as antenna ground planes are able to effectively attenuate undesirable wave fluctuations, or undulations, while producing good radiation patterns. This is because the material can suppress surface wave propagation within the prescribed range of forbidden frequencies. Second, AMC surfaces have very high surface impedance within a specific frequency range, where the tangential magnetic field is small, even with a large electric field along the surface. Therefore, an AMC surface can have a reflection coefficient of +1. In addition, the reflection phase of incident light is part of the AMC and HIS tool box. At normal incidence, the phase of the reflected electric field is the same as the phase of the electric field impinging at the interface of the reflecting surface. The variation of the reflection phase is continuous between +180° and −180° with frequency. Zero is crossed at one frequency, where resonance occurs. A notable characteristic is that the useful bandwidth of an AMC is generally defined as +90° to −90° of reflection phase on either side of the central frequency. Thus, due to this unusual boundary condition, in contrast to the case of a conventional metal ground plane, an AMC surface can function as a new type of ground plane for low-profile wire antennas (wireless communication systems). For example, even though a horizontal wire antenna is extremely close to an AMC surface, the current on the antenna and its image current on the ground plane are in phase, rather than out of phase, thereby strengthening the radiation. AMC as an FSS band gap Frequency selective surface (FSS) materials can be utilized as band gap materials in the surface wave domain, at microwave and radio frequency wavelengths. Support of surface waves is a given property of metals. 
These are propagating electromagnetic waves that are bound to the interface between the metal surface and the air. Surface plasmons occur at optical frequencies, but at microwave frequencies, they are the normal currents that occur on any electrical conductor. At radio frequencies, the fields associated with surface waves can extend thousands of wavelengths into the surrounding space, and they are often best described as surface currents. They can be modeled from the viewpoint of an effective dielectric constant, or an effective surface impedance. For example, a flat metal sheet always has low surface impedance. However, by incorporating a special texture, a specially designed geometry, on a conducting surface, it is possible to engineer a high surface impedance and alter its radio-frequency electromagnetic properties. The protrusions are arranged in a two dimensional lattice structure, and can be visualized as thumbtacks protruding from the surface. Because the protrusions are small compared to the operating wavelength, the structure can be described using an effective medium model, and the electromagnetic properties can be described using lumped-circuit elements (capacitors and inductors). They behave as a network of parallel resonant LC circuits, which act as a two-dimensional electric filter to block the flow of currents along the sheet. This structure can then serve as an artificial magnetic conductor (AMC), because of its high surface impedance within a certain frequency range. In addition, as an artificial magnetic conductor it has a forbidden frequency band, over which surface waves and currents cannot propagate. Therefore, AMC surfaces have good radiation patterns without unwanted ripples, based on suppressing the surface wave propagation within the band gap frequency range. The surface impedance is derived from the ratio of the electric field at the surface to the magnetic field at the surface, which extends far into the metal beyond the skin depth. When a texture is applied to the metal surface, the surface impedance is altered, and its surface wave properties are changed. At low frequencies, it is inductive, and supports transverse-magnetic (TM) waves. At high frequencies, it is capacitive, and supports transverse electric (TE) waves. Near the LC resonance frequency, the surface impedance is very high. In this region, waves are not bound to the surface. Instead, they radiate into the surrounding space. A high-impedance surface was fabricated as a printed circuit board. The structure consists of a triangular lattice of hexagonal metal plates, connected to a solid metal sheet by vertical conducting vias. Uniplanar compact photonic-bandgap The uniplanar compact photonic-bandgap (UC-PBG) structure was proposed, simulated, and then constructed in the lab to overcome limitations of planar circuit technology. Like photonic bandgap structures, it is etched into the ground plane of the microstrip line. The geometry consists of square metal pads; each pad has four connecting branches forming a distributed LC circuit.
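The parallel-LC description of a high-impedance surface lends itself to a short numerical sketch. The following is a minimal illustration of the lumped model discussed above, computing the sheet impedance Z = jωL / (1 − ω²LC), the LC resonance frequency, and the reflection phase against free space; the inductance and capacitance values are assumed for illustration, not taken from any fabricated structure.

    import numpy as np

    ETA0 = 376.73                  # impedance of free space, ohms

    def sheet_impedance(f_hz, L, C):
        # Parallel-LC model of the textured surface: Z = j*w*L / (1 - w^2*L*C)
        w = 2 * np.pi * f_hz
        return 1j * w * L / (1 - w**2 * L * C)

    def reflection_phase_deg(f_hz, L, C):
        # Phase of the reflection coefficient (Zs - eta0) / (Zs + eta0), degrees
        zs = sheet_impedance(f_hz, L, C)
        return np.degrees(np.angle((zs - ETA0) / (zs + ETA0)))

    # Illustrative lumped values (assumed): patch-to-patch sheet capacitance and
    # via / ground-plane current-loop inductance.
    L, C = 2e-9, 0.5e-12           # 2 nH, 0.5 pF
    f_res = 1 / (2 * np.pi * np.sqrt(L * C))
    print(f"LC resonance: {f_res / 1e9:.2f} GHz")

    for x in (0.25, 0.5, 0.9, 1.1, 2.0, 4.0):
        f = x * f_res
        print(f"{f / 1e9:5.2f} GHz  reflection phase {reflection_phase_deg(f, L, C):+7.1f} deg")

Running this shows the behavior described in the text: the reflection phase sweeps continuously from near +180° well below resonance, through 0° at the LC resonance, to near −180° well above it, and the impedance is inductive below resonance and capacitive above it.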
Physical sciences
Basics_3
Physics
1466489
https://en.wikipedia.org/wiki/Neonatology
Neonatology
Neonatology is a subspecialty of pediatrics that consists of the medical care of newborn infants, especially the ill or premature newborn. It is a hospital-based specialty and is usually practised in neonatal intensive care units (NICUs). The principal patients of neonatologists are newborn infants who are ill or require special medical care due to prematurity, low birth weight, intrauterine growth restriction, congenital malformations (birth defects), sepsis, pulmonary hypoplasia, or birth asphyxia. Historical developments Though high infant mortality rates were recognized by the medical community at least as early as the 1860s, advances in modern neonatal intensive care have led to a significant decline in infant mortality in the modern era. This has been achieved through a combination of technological advances, enhanced understanding of newborn physiology, improved sanitation practices, and development of specialized units for neonatal intensive care. Around the mid-19th century, the care of newborns was in its infancy and was led mainly by obstetricians; however, in the early 1900s, pediatricians began to assume a more direct role in caring for neonates. The term neonatology was coined by Dr. Alexander Schaffer in 1960. The American Board of Pediatrics established an official sub-board certification for neonatology in 1975. In 1835, the Russian physician Georg von Ruehl developed a rudimentary incubator made from two nestled metal tubs enclosing a layer of warm water. By the mid-1850s, these "warming tubs" were in regular use at the Moscow Foundling Hospital for the support of premature infants. In 1857, Jean-Louis-Paul Denuce was the first to publish a description of his own similar incubator design, and was the first physician to describe its utility in the support of premature infants in medical literature. By 1931, Dr. A. Robert Bauer added more sophisticated upgrades to the incubator which allowed for humidity control and oxygen delivery in addition to heating capabilities, further contributing to improved survival in newborns. The 1950s brought a rapid escalation in neonatal services with the advent of mechanical ventilation of the newborn, allowing for survival at an increasingly smaller birth weight. In 1952, the anesthesiologist Dr. Virginia Apgar developed the Apgar score, used for standardized assessment of infants immediately upon delivery, to guide further steps in resuscitation if necessary. The first dedicated neonatal intensive care unit (NICU) was established at Yale-New Haven Hospital in Connecticut in 1965. Prior to the development of the NICU, premature and critically ill infants were attended to in nurseries without specialized resuscitation equipment. In 1968, Dr. Jerold Lucey demonstrated that hyperbilirubinemia of prematurity (a form of neonatal jaundice) could be successfully treated through exposure to artificial blue light. This led to widespread use of phototherapy, which has now become a mainstay of treatment of neonatal jaundice. In the 1980s, the development of pulmonary surfactant replacement therapy further improved survival of extremely premature infants and decreased chronic lung disease, one of the complications of mechanical ventilation, among less severely premature infants. Academic training In the United States, a neonatologist is a physician (MD or DO) practicing neonatology. 
To become a neonatologist, the physician initially receives training as a pediatrician, then completes an additional training called a fellowship (for 3 years in the US) in neonatology. In the United States of America most, but not all, neonatologists are board certified in the specialty of Pediatrics by the American Board of Pediatrics or the American Osteopathic Board of Pediatrics and in the sub-specialty of Neonatal-Perinatal Medicine also by the American Board of Pediatrics or American Osteopathic Board of Pediatrics. Most countries now run similar programs for post-graduate training in neonatology, as a subspecialisation of pediatrics. In the United Kingdom, after graduation from medical school and completing the two-year foundation programme, a physician wishing to become a neonatologist would enroll in an eight-year paediatric specialty training programme. The last two to three years of this would be devoted to training in neonatology as a subspecialty. Neonatal nursing is a subspecialty of nursing that specializes in neonatal care. Spectrum of care Rather than focusing on a particular organ system, neonatologists focus on the care of newborns who require hospitalization in the Neonatal Intensive Care Unit (NICU). They may also act as general pediatricians, providing well newborn evaluation and care in the hospital where they are based. Some neonatologists, particularly those in academic settings who perform clinical and basic science research, may follow infants for months or even years after hospital discharge to better assess the long-term outcomes. The infant is undergoing many adaptations to extrauterine life, and its physiological systems, such as the immune system, are far from fully developed. Diseases of concern during the neonatal period include: Anemia of prematurity Apnea of prematurity Atrial septal defect Atrioventricular septal defect Benign neonatal hemangiomatosis Brachial plexus injury Bronchopulmonary dysplasia Cerebral palsy CHARGE syndrome Cleft palate Coarctation of the aorta Congenital adrenal hyperplasia Congenital diaphragmatic hernia Congenital heart disease Diffuse neonatal hemangiomatosis DiGeorge syndrome Encephalocele Gastroschisis Hemolytic disease of the newborn Hirschsprung disease Hypoplastic left heart syndrome Hypoxic ischemic encephalopathy Inborn errors of metabolism Intraventricular hemorrhage Lissencephaly Meconium aspiration syndrome Necrotizing enterocolitis Neonatal abstinence syndrome Neonatal cancer Neonatal jaundice Neonatal respiratory distress syndrome Neonatal lupus erythematosus Neonatal conjunctivitis Neonatal pneumonia Neonatal tetanus Neonatal sepsis Neonatal bowel obstruction Neonatal stroke Neonatal diabetes mellitus Neonatal alloimmune thrombocytopenia Neonatal herpes simplex Neonatal hemochromatosis Neonatal meningitis Neonatal hepatitis Neonatal hypoglycemia Neonatal cholestasis Neonatal seizure Omphalocele Patent ductus arteriosus Perinatal asphyxia Periventricular leukomalacia Persistent pulmonary hypertension of the newborn Persistent truncus arteriosus Pulmonary hypoplasia Retinopathy of prematurity Spina bifida Spinal muscular atrophy Supraventricular tachycardia Tetralogy of Fallot Total (or partial) anomalous pulmonary venous connection Tracheoesophageal fistula Transient tachypnea of the newborn Transposition of the great vessels Tricuspid atresia Trisomy 13/18/21 VACTERL/VATER association Ventricular septal defect Vertically transmitted infections Compensation Neonatologists earn significantly more than general 
pediatricians. In 2018, a typical pediatrician salary in the United States ranged from $221,000 to $264,000, whereas the average salary for a neonatologist was about $299,000 to $355,000. Hospital costs Premature birth is one of the most common reasons for hospitalization. The average hospital costs from 2003 to 2011 for maternal and neonatal stays were the lowest hospital costs in the U.S. In 2012, maternal or neonatal hospital stays constituted the largest proportion of hospitalizations among infants, adults aged 18–44, and those covered by Medicaid. Between 2000 and 2012, the number of neonatal stays (births) in the United States fluctuated around 4.0 million stays, reaching a high of 4.3 million in 2006. Maternal and neonatal stays constituted 27 percent of hospital stays in the United States in 2012. However, the mean hospital costs remained the lowest of the three types of hospital stay (medical, surgical, or maternal and neonatal). The mean hospital cost for a maternal/neonatal stay was $4,300 in 2012 (as opposed to $8,500 for medical stays and $21,200 for surgical stays in 2012). Encouragingly, an increasing number of programs focused on collaboration in newborn care are now being established all over the world. The International Neonatal Consortium, Newborn Care International, and the Global Newborn Society are some notable examples. The goal is to organize and standardize newborn care, and coordinate research efforts.
Biology and health sciences
Fields of medicine
Health
1466591
https://en.wikipedia.org/wiki/Rotary%20switch
Rotary switch
A rotary switch is a switch operated by rotation. These are often chosen when more than two positions are needed, such as a three-speed fan or a CB radio with multiple frequencies of reception or "channels". A rotary switch consists of a spindle or "rotor" that has a contact arm or "spoke" which projects from its surface like a cam. It has an array of terminals, arranged in a circle around the rotor, each of which serves as a contact for the "spoke" through which any one of a number of different electrical circuits can be connected to the rotor. The switch is layered to allow the use of multiple poles; each layer is equivalent to one pole. Alternatively, the rotation can be limited to a fraction (half, third, etc.) of a circle, and then each layer can have multiple (two, three, etc.) poles. Usually, such a switch has a detent mechanism so it "clicks" from one active position to another rather than stalling in an intermediate position. Thus a rotary switch provides greater pole and throw capabilities than simpler switches do. Rotary switches were used as channel selectors on television receivers until the early 1970s, as range selectors on electrical metering equipment, as band selectors on multi-band radios, etc. Modern rotary switches use a "star wheel" mechanism to provide the switching positions, such as at every 30, 45, 60, or 90 degrees. Nylon cams are then mounted behind this mechanism and spring-loaded electrical contacts slide around these cams. The cams are notched or cut where the contact should close to complete an electrical circuit. Some rotary switches are user-configurable in relation to the number of positions. A special toothed washer that sits below the holding nut can be positioned so that the tooth is inserted into one of a number of slots in a way that limits the number of positions available for selection. For example, if only four positions are required on a twelve position switch, the washer can be positioned so that only four switching positions can be selected when in use.
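As a rough software illustration of the ideas above, where each layer acts as one pole that switches together with the others and a toothed washer limits the usable positions, the following sketch models an abstract rotary switch. The class and its behaviour are hypothetical and intended only to mirror the description, not any particular part.

    # Abstract model of a multi-pole rotary switch: each layer of the switch is
    # one pole, the detent position selects which surrounding terminal each
    # pole's rotor contact touches, and a position limit mimics the toothed
    # washer that restricts, say, a 12-position switch to 4 usable positions.
    class RotarySwitch:
        def __init__(self, poles, positions, position_limit=None):
            self.poles = poles
            self.positions = positions
            self.limit = position_limit or positions
            self.position = 0                      # current detent, 0-based

        def rotate(self, steps=1):
            # Advance the detent; travel stops at the washer-imposed limit.
            self.position = min(self.position + steps, self.limit - 1)
            return self.position

        def connections(self):
            # Terminal selected by each pole (layer) at the current detent.
            return {pole: self.position for pole in range(self.poles)}

    sw = RotarySwitch(poles=3, positions=12, position_limit=4)
    sw.rotate(2)
    print(sw.connections())   # {0: 2, 1: 2, 2: 2} - all three poles switch together
    sw.rotate(5)              # travel stops at position 3, the last allowed detent
    print(sw.connections())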
Technology
Components
null
1467948
https://en.wikipedia.org/wiki/Problem%20solving
Problem solving
Problem solving is the process of achieving a goal by overcoming obstacles, a frequent part of most activities. Problems in need of solutions range from simple personal tasks (e.g. how to turn on an appliance) to complex issues in business and technical fields. The former is an example of simple problem solving (SPS) addressing one issue, whereas the latter is complex problem solving (CPS) with multiple interrelated obstacles. Another classification of problem-solving tasks is into well-defined problems with specific obstacles and goals, and ill-defined problems in which the current situation is troublesome but it is not clear what kind of resolution to aim for. Similarly, one may distinguish formal or fact-based problems requiring psychometric intelligence, versus socio-emotional problems which depend on the changeable emotions of individuals or groups, such as tactful behavior, fashion, or gift choices. Solutions require sufficient resources and knowledge to attain the goal. Professionals such as lawyers, doctors, programmers, and consultants are largely problem solvers for issues that require technical skills and knowledge beyond general competence. Many businesses have found profitable markets by recognizing a problem and creating a solution: the more widespread and inconvenient the problem, the greater the opportunity to develop a scalable solution. There are many specialized problem-solving techniques and methods in fields such as science, engineering, business, medicine, mathematics, computer science, philosophy, and social organization. The mental techniques to identify, analyze, and solve problems are studied in psychology and cognitive sciences. Also widely researched are the mental obstacles that prevent people from finding solutions; problem-solving impediments include confirmation bias, mental set, and functional fixedness. Definition The term problem solving has a slightly different meaning depending on the discipline. For instance, it is a mental process in psychology and a computerized process in computer science. There are two different types of problems: ill-defined and well-defined; different approaches are used for each. Well-defined problems have specific end goals and clearly expected solutions, while ill-defined problems do not. Well-defined problems allow for more initial planning than ill-defined problems. Solving problems sometimes involves dealing with pragmatics (the way that context contributes to meaning) and semantics (the interpretation of the problem). The ability to understand what the end goal of the problem is, and what rules could be applied, represents the key to solving the problem. Sometimes a problem requires abstract thinking or coming up with a creative solution. Problem solving has two major domains: mathematical problem solving and personal problem solving. Each concerns some difficulty or barrier that is encountered. Psychology Problem solving in psychology refers to the process of finding solutions to problems encountered in life. Solutions to these problems are usually situation- or context-specific. The process starts with problem finding and problem shaping, in which the problem is discovered and simplified. The next step is to generate possible solutions and evaluate them. Finally a solution is selected to be implemented and verified. Problems have an end goal to be reached; how you get there depends upon problem orientation (problem-solving coping style and skills) and systematic analysis. 
Mental health professionals study human problem-solving processes using methods such as introspection, behaviorism, simulation, computer modeling, and experiment. Social psychologists look into the person-environment relationship aspect of the problem and independent and interdependent problem-solving methods. Problem solving has been defined as a higher-order cognitive process and intellectual function that requires the modulation and control of more routine or fundamental skills. Empirical research shows many different strategies and factors influence everyday problem solving. Rehabilitation psychologists studying people with frontal lobe injuries have found that deficits in emotional control and reasoning can be remediated with effective rehabilitation and could improve the capacity of injured persons to resolve everyday problems. Interpersonal everyday problem solving is dependent upon personal motivational and contextual components. One such component is the emotional valence of "real-world" problems, which can either impede or aid problem-solving performance. Researchers have focused on the role of emotions in problem solving, demonstrating that poor emotional control can disrupt focus on the target task, impede problem resolution, and lead to negative outcomes such as fatigue, depression, and inertia. Human problem solving consists of two related processes: problem orientation (the motivational/attitudinal/affective approach to problematic situations) and problem-solving skills. People's strategies cohere with their goals and stem from the process of comparing oneself with others. Cognitive sciences Among the first experimental psychologists to study problem solving were the Gestaltists in Germany, such as Karl Duncker in The Psychology of Productive Thinking (1935). Perhaps best known is the work of Allen Newell and Herbert A. Simon. Experiments in the 1960s and early 1970s asked participants to solve relatively simple, well-defined, but not previously seen laboratory tasks. These simple problems, such as the Tower of Hanoi, admitted optimal solutions that could be found quickly, allowing researchers to observe the full problem-solving process. Researchers assumed that these model problems would elicit the characteristic cognitive processes by which more complex "real world" problems are solved. An outstanding problem-solving technique found by this research is the principle of decomposition. Computer science Much of computer science and artificial intelligence involves designing automated systems to solve a specified type of problem: to accept input data and calculate a correct or adequate response, reasonably quickly. Algorithms are recipes or instructions that direct such systems, written into computer programs. Steps for designing such systems include problem determination, heuristics, root cause analysis, de-duplication, analysis, diagnosis, and repair. Analytic techniques include linear and nonlinear programming, queuing systems, and simulation. A large, perennial obstacle is to find and fix errors in computer programs: debugging. Logic Formal logic concerns issues like validity, truth, inference, argumentation, and proof. In a problem-solving context, it can be used to formally represent a problem as a theorem to be proved, and to represent the knowledge needed to solve the problem as the premises to be used in a proof that the problem has a solution. 
The use of computers to prove mathematical theorems using formal logic emerged as the field of automated theorem proving in the 1950s. It included the use of heuristic methods designed to simulate human problem solving, as in the Logic Theory Machine, developed by Allen Newell, Herbert A. Simon and J. C. Shaw, as well as algorithmic methods such as the resolution principle developed by John Alan Robinson. In addition to its use for finding proofs of mathematical theorems, automated theorem-proving has also been used for program verification in computer science. In 1958, John McCarthy proposed the advice taker, to represent information in formal logic and to derive answers to questions using automated theorem-proving. An important step in this direction was made by Cordell Green in 1969, who used a resolution theorem prover for question-answering and for such other applications in artificial intelligence as robot planning. The resolution theorem-prover used by Cordell Green bore little resemblance to human problem solving methods. In response to criticism of that approach from researchers at MIT, Robert Kowalski developed logic programming and SLD resolution, which solves problems by problem decomposition. He has advocated logic for both computer and human problem solving and computational logic to improve human thinking. Engineering When products or processes fail, problem solving techniques can be used to develop corrective actions that can be taken to prevent further failures. Such techniques can also be applied to a product or process prior to an actual failure event—to predict, analyze, and mitigate a potential problem in advance. Techniques such as failure mode and effects analysis can proactively reduce the likelihood of problems. In either the reactive or the proactive case, it is necessary to build a causal explanation through a process of diagnosis. In deriving an explanation of effects in terms of causes, abduction generates new ideas or hypotheses (asking "how?"); deduction evaluates and refines hypotheses based on other plausible premises (asking "why?"); and induction justifies a hypothesis with empirical data (asking "how much?"). The objective of abduction is to determine which hypothesis or proposition to test, not which one to adopt or assert. In the Peircean logical system, the logic of abduction and deduction contribute to our conceptual understanding of a phenomenon, while the logic of induction adds quantitative details (empirical substantiation) to our conceptual knowledge. Forensic engineering is an important technique of failure analysis that involves tracing product defects and flaws. Corrective action can then be taken to prevent further failures. Reverse engineering attempts to discover the original problem-solving logic used in developing a product by disassembling the product and developing a plausible pathway to creating and assembling its parts. Military science In military science, problem solving is linked to the concept of "end-states", the conditions or situations which are the aims of the strategy. Ability to solve problems is important at any military rank, but is essential at the command and control level. It results from deep qualitative and quantitative understanding of possible scenarios. Effectiveness in this context is an evaluation of results: to what extent the end states were accomplished. Planning is the process of determining how to effect those end states. 
Processes Some models of problem solving involve identifying a goal and then a sequence of subgoals towards achieving this goal. Anderson, who introduced the ACT-R model of cognition, modelled this collection of goals and subgoals as a goal stack in which the mind contains a stack of goals and subgoals to be completed, and a single task being carried out at any time. Knowledge of how to solve one problem can be applied to another problem, in a process known as transfer. Problem-solving strategies Problem-solving strategies are steps to overcoming the obstacles to achieving a goal. The iteration of such strategies over the course of solving a problem is the "problem-solving cycle". Common steps in this cycle include recognizing the problem, defining it, developing a strategy to fix it, organizing knowledge and resources available, monitoring progress, and evaluating the effectiveness of the solution. Once a solution is achieved, another problem usually arises, and the cycle starts again. Insight is the sudden aha! solution to a problem, the birth of a new idea to simplify a complex situation. Solutions found through insight are often more incisive than those from step-by-step analysis. A quick solution process requires insight to select productive moves at different stages of the problem-solving cycle. Unlike Newell and Simon's formal definition of a move problem, there is no consensus definition of an insight problem. Some problem-solving strategies include: Abstraction solving the problem in a tractable model system to gain insight into the real system Analogy adapting the solution to a previous problem which has similar features or mechanisms Brainstorming (especially among groups of people) suggesting a large number of solutions or ideas and combining and developing them until an optimum solution is found Bypasses transform the problem into another problem that is easier to solve, bypassing the barrier, then transform that solution back to a solution to the original problem. Critical thinking analysis of available evidence and arguments to form a judgement via rational, skeptical, and unbiased evaluation Divide and conquer breaking down a large, complex problem into smaller, solvable problems (a short recursive sketch of this decomposition follows below) Help-seeking obtaining external assistance to deal with obstacles Hypothesis testing assuming a possible explanation to the problem and trying to prove (or, in some contexts, disprove) the assumption Lateral thinking approaching solutions indirectly and creatively Means-ends analysis choosing an action at each step to move closer to the goal Morphological analysis assessing the output and interactions of an entire system Observation / Question in the natural sciences an observation is an act or instance of noticing or perceiving and the acquisition of information from a primary source. A question is an utterance which serves as a request for information. Proof of impossibility try to prove that the problem cannot be solved. The point where the proof fails will be the starting point for solving it Reduction transforming the problem into another problem for which solutions exist Research employing existing ideas or adapting existing solutions to similar problems Root cause analysis identifying the cause of a problem Trial-and-error testing possible solutions until the right one is found Problem-solving methods Scientific method – an empirical method for acquiring knowledge that has characterized the development of science. 
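The goal-and-subgoal decomposition described above can be made concrete with a short sketch of the Tower of Hanoi mentioned earlier. This is only a minimal illustration of the recursive subgoal (divide-and-conquer) structure in Python, not a model of any particular cognitive theory.

    # Recursive decomposition of the Tower of Hanoi: the goal "move n disks from
    # source to target" is reduced to the subgoals "move n-1 disks to the spare
    # peg", "move the largest disk", "move n-1 disks onto it" - the same
    # divide-and-conquer / goal-stack structure described above.
    def hanoi(n, source, target, spare, moves=None):
        if moves is None:
            moves = []
        if n == 1:
            moves.append((source, target))
        else:
            hanoi(n - 1, source, spare, target, moves)
            moves.append((source, target))
            hanoi(n - 1, spare, target, source, moves)
        return moves

    print(hanoi(3, "A", "C", "B"))
    # 7 moves: [('A', 'C'), ('A', 'B'), ('C', 'B'), ('A', 'C'),
    #           ('B', 'A'), ('B', 'C'), ('A', 'C')]

Each recursive call pushes a smaller subgoal onto the call stack, which plays the role of the goal stack described above; the full solution emerges once every subgoal has been completed.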
Common barriers Common barriers to problem solving include mental constructs that impede an efficient search for solutions. Five of the most common identified by researchers are: confirmation bias, mental set, functional fixedness, unnecessary constraints, and irrelevant information. Confirmation bias Confirmation bias is an unintentional tendency to collect and use data which favors preconceived notions. Such notions may be incidental rather than motivated by important personal beliefs: the desire to be right may be sufficient motivation. Scientific and technical professionals also experience confirmation bias. One online experiment, for example, suggested that professionals within the field of psychological research are likely to view scientific studies that agree with their preconceived notions more favorably than clashing studies. According to Raymond Nickerson, one can see the consequences of confirmation bias in real-life situations, which range in severity from inefficient government policies to genocide. Nickerson argued that those who killed people accused of witchcraft demonstrated confirmation bias with motivation. Researcher Michael Allen found evidence for confirmation bias with motivation in school children who worked to manipulate their science experiments to produce favorable results. However, confirmation bias does not necessarily require motivation. In 1960, Peter Cathcart Wason conducted an experiment in which participants first viewed three numbers and then created a hypothesis in the form of a rule that could have been used to create that triplet of numbers. When testing their hypotheses, participants tended to only create additional triplets of numbers that would confirm their hypotheses, and tended not to create triplets that would negate or disprove their hypotheses. Mental set Mental set is the inclination to re-use a previously successful solution, rather than search for new and better solutions. It is a reliance on habit. It was first articulated by Abraham S. Luchins in the 1940s with his well-known water jug experiments. Participants were asked to fill one jug with a specific amount of water by using other jugs with different maximum capacities. After Luchins gave a set of jug problems that could all be solved by a single technique, he then introduced a problem that could be solved by the same technique, but also by a novel and simpler method. His participants tended to use the accustomed technique, oblivious of the simpler alternative. This was again demonstrated in Norman Maier's 1931 experiment, which challenged participants to solve a problem by using a familiar tool (pliers) in an unconventional manner. Participants were often unable to view the object in a way that strayed from its typical use, a type of mental set known as functional fixedness (see the following section). Rigidly clinging to a mental set is called fixation, which can deepen to an obsession or preoccupation with attempted strategies that are repeatedly unsuccessful. In the late 1990s, researcher Jennifer Wiley found that professional expertise in a field can create a mental set, perhaps leading to fixation. Groupthink, in which each individual takes on the mindset of the rest of the group, can produce and exacerbate mental set. Social pressure leads to everybody thinking the same thing and reaching the same conclusions. 
Functional fixedness Functional fixedness is the tendency to view an object as having only one function, and to be unable to conceive of any novel use, as in the Maier pliers experiment described above. Functional fixedness is a specific form of mental set, and is one of the most common forms of cognitive bias in daily life. As an example, imagine a man wants to kill a bug in his house, but the only thing at hand is a can of air freshener. He may start searching for something to kill the bug instead of squashing it with the can, thinking only of its main function of deodorizing. Tim German and Clark Barrett describe this barrier: "subjects become 'fixed' on the design function of the objects, and problem solving suffers relative to control conditions in which the object's function is not demonstrated." Their research found that young children's limited knowledge of an object's intended function reduces this barrier. Research has also discovered functional fixedness in educational contexts, as an obstacle to understanding: "functional fixedness may be found in learning concepts as well as in solving chemistry problems." There are several hypotheses in regard to how functional fixedness relates to problem solving. It may waste time, delaying or entirely preventing the correct use of a tool. Unnecessary constraints Unnecessary constraints are arbitrary boundaries imposed unconsciously on the task at hand, which foreclose a productive avenue of solution. The solver may become fixated on only one type of solution, as if it were an inevitable requirement of the problem. Typically, this combines with mental set—clinging to a previously successful method. Visual problems can also produce mentally invented constraints. A famous example is the dot problem: nine dots arranged in a three-by-three grid pattern must be connected by drawing four straight line segments, without lifting pen from paper or backtracking along a line. The subject typically assumes the pen must stay within the outer square of dots, but the solution requires lines continuing beyond this frame, and researchers have found a 0% solution rate within a brief allotted time. This problem has produced the expression "think outside the box". Such problems are typically solved via a sudden insight which leaps over the mental barriers, often after long toil against them. This can be difficult depending on how the subject has structured the problem in their mind, how they draw on past experiences, and how well they juggle this information in their working memory. In the example, envisioning the dots connected outside the framing square requires visualizing an unconventional arrangement, which is a strain on working memory. Irrelevant information Irrelevant information is a specification or data presented in a problem that is unrelated to the solution. If the solver assumes that all information presented needs to be used, this often derails the problem solving process, making relatively simple problems much harder. For example: "Fifteen percent of the people in Topeka have unlisted telephone numbers. You select 200 names at random from the Topeka phone book. How many of these people have unlisted phone numbers?" The "obvious" answer is 15%, but in fact none of the unlisted people would be listed among the 200. This kind of "trick question" is often used in aptitude tests or cognitive evaluations. Though not inherently difficult, such questions require independent thinking that is not necessarily common. 
Mathematical word problems often include irrelevant qualitative or numerical information as an extra challenge. Avoiding barriers by changing problem representation The disruption caused by the above cognitive biases can depend on how the information is represented: visually, verbally, or mathematically. A classic example is the Buddhist monk problem: a monk walks up a mountain path one day, stays overnight, and walks back down the same path the next day, and the task is to show that there is a spot on the path that the monk occupies at exactly the same time of day on both journeys. The problem cannot easily be addressed in a verbal context, by trying to describe the monk's progress on each day. It becomes much easier when the problem is represented mathematically by a function: one visualizes a graph whose horizontal axis is time of day, and whose vertical axis shows the monk's position (or altitude) on the path at each time. Superimposing the two journey curves, which traverse opposite diagonals of a rectangle, one sees they must cross each other somewhere. The visual representation by graphing has resolved the difficulty. Similar strategies can often improve problem solving on tests. Other barriers for individuals People who are engaged in problem solving tend to overlook subtractive changes, even those that are critical elements of efficient solutions. This tendency to solve by first, only, or mostly creating or adding elements, rather than by subtracting elements or processes, is shown to intensify with higher cognitive loads such as information overload. Dreaming: problem solving without waking consciousness People can also solve problems while they are asleep. There are many reports of scientists and engineers who solved problems in their dreams. For example, Elias Howe, inventor of the sewing machine, figured out the structure of the bobbin from a dream. The chemist August Kekulé was considering how benzene arranged its six carbon and hydrogen atoms. Thinking about the problem, he dozed off, and dreamt of dancing atoms that fell into a snakelike pattern, which led him to discover the benzene ring. Kekulé described the dream in his diary. There also are empirical studies of how people can think consciously about a problem before going to sleep, and then solve the problem with a dream image. Dream researcher William C. Dement told his undergraduate class of 500 students that he wanted them to think about an infinite series, whose first elements were OTTFF, to see if they could deduce the principle behind it and to say what the next elements of the series would be. He asked them to think about this problem every night for 15 minutes before going to sleep and to write down any dreams that they then had. They were instructed to think about the problem again for 15 minutes when they awakened in the morning. The sequence OTTFF is the first letters of the numbers: one, two, three, four, five. The next five elements of the series are SSENT (six, seven, eight, nine, ten). Some of the students solved the puzzle by reflecting on their dreams. One example was a student who reported a dream in which the solution appeared. With more than 500 undergraduate students, 87 dreams were judged to be related to the problems students were assigned (53 directly related and 34 indirectly related). Yet of the people who had dreams that apparently solved the problem, only seven were actually able to consciously know the solution. The rest (46 out of 53) thought they did not know the solution. Mark Blechner conducted this experiment and obtained results similar to Dement's. 
He found that while trying to solve the problem, people had dreams in which the solution appeared to be obvious from the dream, but it was rare for the dreamers to realize how their dreams had solved the puzzle. Coaxing or hints did not get them to realize it, although once they heard the solution, they recognized how their dream had solved it. For example, in the dream of one person in that OTTFF experiment, the person counted out the next elements of the series—six, seven, eight, nine, ten, eleven, twelve—yet he did not realize that this was the solution of the problem. His sleeping mindbrain solved the problem, but his waking mindbrain was not aware how. Albert Einstein believed that much problem solving goes on unconsciously, and the person must then figure out and formulate consciously what the mindbrain has already solved. He believed this was his process in formulating the theory of relativity: "The creator of the problem possesses the solution." Einstein said that he did his problem solving without words, mostly in images. "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined." Cognitive sciences: two schools Problem-solving processes differ across knowledge domains and across levels of expertise. For this reason, cognitive science findings obtained in the laboratory cannot necessarily generalize to problem-solving situations outside the laboratory. This has led to a research emphasis on real-world problem solving since the 1990s. This emphasis has been expressed quite differently in North America and Europe, however. Whereas North American research has typically concentrated on studying problem solving in separate, natural knowledge domains, much of the European research has focused on novel, complex problems, and has been performed with computerized scenarios. Europe In Europe, two main approaches have surfaced, one initiated by Donald Broadbent in the United Kingdom and the other one by Dietrich Dörner in Germany. The two approaches share an emphasis on relatively complex, semantically rich, computerized laboratory tasks, constructed to resemble real-life problems. The approaches differ somewhat in their theoretical goals and methodology. The tradition initiated by Broadbent emphasizes the distinction between cognitive problem-solving processes that operate under awareness versus outside of awareness, and typically employs mathematically well-defined computerized systems. The tradition initiated by Dörner, on the other hand, has an interest in the interplay of the cognitive, motivational, and social components of problem solving, and utilizes very complex computerized scenarios that contain up to 2,000 highly interconnected variables. North America In North America, initiated by the work of Herbert A. Simon on "learning by doing" in semantically rich domains, researchers began to investigate problem solving separately in different natural knowledge domains—such as physics, writing, or chess playing—rather than attempt to extract a global theory of problem solving. These researchers have focused on the development of problem solving within certain domains, that is, on the development of expertise. 
Areas that have attracted rather intensive attention in North America include: calculation; computer skills; game playing; lawyers' reasoning; managerial problem solving; mathematical problem solving; mechanical problem solving; personal problem solving; political decision making; problem solving in electronics; problem solving for innovations and inventions (TRIZ); reading; social problem solving; and writing. Characteristics of complex problems Complex problem solving (CPS) is distinguishable from simple problem solving (SPS). In SPS there is a singular and simple obstacle. In CPS there may be multiple simultaneous obstacles. For example, a surgeon at work has far more complex problems than an individual deciding what shoes to wear. As elucidated by Dietrich Dörner, and later expanded upon by Joachim Funke, complex problems have some typical characteristics, which include: complexity (large numbers of items, interrelations, and decisions), encompassing enumerability, heterogeneity, and connectivity (hierarchy relation, communication relation, allocation relation); dynamics (time considerations), encompassing temporal constraints, temporal sensitivity, phase effects, and dynamic unpredictability; intransparency (lack of clarity of the situation), encompassing commencement opacity and continuation opacity; and polytely (multiple goals), encompassing inexpressiveness, opposition, and transience. Collective problem solving People solve problems on many different levels—from the individual to the civilizational. Collective problem solving refers to problem solving performed collectively. Social issues and global issues can typically only be solved collectively. The complexity of contemporary problems exceeds the cognitive capacity of any individual and requires different but complementary varieties of expertise and collective problem solving ability. Collective intelligence is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals. In collaborative problem solving people work together to solve real-world problems. Members of problem-solving groups share a common concern, a similar passion, and/or a commitment to their work. Members can ask questions, wonder, and try to understand common issues. They share expertise, experiences, tools, and methods. Groups may be fluid based on need, may only occur temporarily to finish an assigned task, or may be more permanent depending on the nature of the problems. For example, in the educational context, members of a group may all have input into the decision-making process and a role in the learning process. Members may be responsible for the thinking, teaching, and monitoring of all members in the group. Group work may be coordinated among members so that each member makes an equal contribution to the whole work. Members can identify and build on their individual strengths so that everyone can make a significant contribution to the task. Collaborative group work has the ability to promote critical thinking skills, problem solving skills, social skills, and self-esteem. By using collaboration and communication, members often learn from one another and construct meaningful knowledge that often leads to better learning outcomes than individual work. Collaborative groups require joint intellectual efforts between the members and involve social interactions to solve problems together. The knowledge shared during these interactions is acquired during communication, negotiation, and production of materials. Members actively seek information from others by asking questions.
The capacity to use questions to acquire new information increases understanding and the ability to solve problems. In a 1962 research report, Douglas Engelbart linked collective intelligence to organizational effectiveness, and predicted that proactively "augmenting human intellect" would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone". Henry Jenkins, a theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture. He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contributes to the development of such skills. Collective impact is the commitment of a group of actors from different sectors to a common agenda for solving a specific social problem, using a structured form of collaboration. After World War II, the UN, the Bretton Woods organization, and the WTO were created. Collective problem solving on the international level crystallized around these three types of organization from the 1980s onward. As these global institutions remain state-like or state-centric, it is unsurprising that they perpetuate state-like or state-centric approaches to collective problem solving rather than alternative ones. Crowdsourcing is a process of accumulating ideas, thoughts, or information from many independent participants, with the aim of finding the best solution for a given challenge. Modern information technologies allow for many people to be involved and facilitate managing their suggestions in ways that provide good results. The Internet allows for a new capacity of collective (including planetary-scale) problem solving.
Technology
General
null
1467960
https://en.wikipedia.org/wiki/Cobaltite
Cobaltite
Cobaltite is an arsenide and sulfide mineral with the mineral formula CoAsS. It is the naming mineral of the cobaltite group of minerals, whose members structurally resemble pyrite (FeS2). History Cobaltite was first described in 1797 by Klaproth. Its name stems from the contained element cobalt, whose name is attributed to the German term Kobold, referring to an "underground spirit" or "goblin". The notion of "bewitched" minerals stems from cobaltite and other cobalt ores withstanding the smelting methods of the medieval period, often producing foul-smelling, poisonous fumes in the process. Properties Cobaltite naturally appears in the form of a tetartoid, a form of dodecahedron with chiral tetrahedral symmetry. It may contain up to 10% iron and variable amounts of nickel as impurities. Cobaltite can be separated from other minerals by selective, pH-controlled flotation methods, and cobalt recovery usually involves hydrometallurgy. It can also be processed with pyrometallurgical methods, such as flash smelting. Occurrence Although rare, it is mined as a significant source of the strategically important metal cobalt. It occurs in high-temperature hydrothermal deposits and contact metamorphic rocks. It occurs in association with magnetite, sphalerite, chalcopyrite, skutterudite, allanite, zoisite, scapolite, titanite, and calcite along with numerous other Co–Ni sulfides and arsenides. It is found chiefly in Sweden, Norway, Germany, Cornwall, England, Canada, La Cobaltera, Chile, Australia, the Democratic Republic of the Congo, and Morocco. Crystals have also been found at Khetri in Rajasthan, and under the name sehta the mineral was used by Indian jewellers for producing a blue enamel on gold and silver ornaments. Secondary weathering incrustations of erythrite, hydrated cobalt arsenate, are common. A variety containing much iron replacing cobalt, known as ferrocobaltite, was found at Siegen in Westphalia.
Physical sciences
Minerals
Earth science
1468845
https://en.wikipedia.org/wiki/Mud%20dauber
Mud dauber
Mud dauber (or "mud wasp") is a name commonly applied to a number of wasps from either the family Sphecidae or Crabronidae which build their nests from mud; this excludes members of the family Vespidae (especially the subfamily Eumeninae), which are instead referred to as "potter wasps". Mud daubers are variable in appearance. Most are long, slender wasps about in length. The name refers to the nests that are made by the female wasps, which consist of mud molded into place by the wasp's mandibles. Mud daubers are not normally aggressive, but can become belligerent when threatened. Stings are uncommon. Nests The organ pipe mud dauber, one of many mud daubers in the family Crabronidae, builds nests in the shape of a cylindrical tube resembling an organ pipe or pan flute. Common sites include vertical or horizontal faces of walls, cliffs, bridges, overhangs and shelter caves or other structures. The nest of a black and yellow mud dauber species Sceliphron caementarium is a simple, one, two or sometimes three celled, cigar-shaped mass that is attached to crevices, cracks and corners. Each cell contains one egg. Usually several cells are clumped together and covered in mud. The blue mud dauber species Chalybion californicum, another sphecid, builds mud nests, but occasionally refurbishes the abandoned nests of other species; it preys primarily on spiders. The two species commonly occupy the same barns, porches, or other nest sites. All mud daubers may occupy the same sites year after year, creating large numbers of nests in protected locations; such sites are often used as nest sites by other kinds of wasps and bees, as well as other types of insects. One disadvantage to making nests is that most, if not all, of the nest-maker’s offspring are concentrated in one place, making them highly vulnerable to predation. Once a predator finds a nest, it can plunder it cell by cell. A variety of parasitic wasps, ranging from extremely tiny chalcidoid wasps to larger, bright green chrysidid wasps, attack mud dauber nests. They pirate provisions and offspring as food for their own offspring. Food Like most other solitary wasps, mud daubers are parasitoids, but unlike the majority of parasitoids, they actively capture and paralyze the prey upon which they lay their eggs. The females build the nests, and hunt to provision them. Males of pipe-organ mud daubers have been observed bringing spiders to the nest, and nest guarding, an extremely rare appearance of male parental care, otherwise virtually unknown among Hymenoptera. Black and yellow mud daubers primarily prey on relatively small, colorful spiders, such as crab spiders (and related groups), orb weavers and some jumping spiders. They usually find them in and around vegetation. Blue mud daubers are the main predator of the black and brown widow spiders. Adults of both sexes frequently drink flower nectar, but they stock their nests with spiders, which serve as food for their offspring. Mud daubers prefer particular kinds and sizes of spiders for their larders. Instead of stocking a nest cell with one or two large spiders, mud daubers cram as many as two dozen small spiders into a nest cell. To capture a spider, the wasp grabs it and stings it. The venom from the sting does not kill the spider, but paralyzes and preserves it so it can be transported and stored in the nest cell until consumed by the larva. A mud dauber usually lays its egg on the prey item and then seals it into the nest cell with a mud cap. It then builds another cell or nest. 
The young survive the winter inside the nest. Airplane incidents Florida Commuter Airlines Flight 65 On September 12, 1980, Florida Commuter Airlines Flight 65 crashed on a flight from Palm Beach International Airport to Grand Bahama International Airport, killing all 34 people on board. Before the flight, a mud dauber's nest was discovered in a pitot tube of the airplane which was cleaned by maintenance using an unapproved method. Although the NTSB could not determine the cause of the accident, one of the possible factors was the improper cleaning of the mud dauber nest from the pitot tube. Birgenair Flight 301 On February 6, 1996, Birgenair Flight 301, a 757 jet flying from Puerto Plata in the Dominican Republic, crashed into the Atlantic Ocean. All 13 crew members and 176 passengers were killed. A key part of the accident was a blocked pitot tube. Although the tubes were never recovered from the ocean floor, the plane had been sitting on the tarmac for 25 days with uncovered pitot tubes. Investigators believe a black and yellow mud dauber, Sceliphron caementarium, got into the tube and built its cylindrical nest inside, causing faulty air-speed readings that were a large part of the crash. Gulfstream N450KK On April 10, 2015, about 18:45 Eastern Daylight Time, a Gulfstream Aerospace G-IV, N450KK, was substantially damaged during a cabin over-pressurization event over the Caribbean Sea while en route to Fort Lauderdale, Florida. An initial examination of the fuselage revealed that the outflow valve safety port, located on the outer fuselage, was completely plugged with a foreign material resembling dried soil from a mud dauber.
Biology and health sciences
Hymenoptera
Animals
1470034
https://en.wikipedia.org/wiki/Snap%20pea
Snap pea
The snap pea, also known as the sugar snap pea, is an edible-pod pea with rounded pods and thick pod walls, in contrast to snow pea pods, which are flat with thin walls. The name mangetout (French for "eat all") can apply to snap peas and snow peas. A snap pea named "butter pea" was described in French literature in the 19th century, but the old snap pea was lost in cultivation by the mid-20th century. The present snap pea originated from Calvin Lamborn's cross between a shelling pea mutant found in 1952 by Dr. M.C. Parker and a snow pea cultivar. Researchers at Twin Falls, Idaho hoped that the cross might counteract twisting and buckling seen in varieties at the time. With this cross, snap pea was recreated and the first new snap pea was released in 1979 under the name 'Sugar Snap'. Snap peas, like all other peas, are pod fruits. An edible-podded pea is similar to a garden, or English, pea, but the pod is less fibrous, and is edible when young. Pods of the edible-podded pea, including snap peas, do not have a membrane and do not open when ripe. At maturity, the pods grow to around in length. Pods contain three to nine peas. The plants are climbing, and pea sticks or a trellis or other support system is required for optimal growth. Some cultivars are capable of climbing to high but plants are more commonly around high, for ease of harvest and cultivation. Cultivation The snap pea is a cool season legume. It may be planted in spring as early as the soil can be worked. Seeds should be planted apart and deep in a band. It tolerates light frost when young; it also has a wider adaptation and tolerance of higher temperatures than some other pea cultivars. Snap peas may grow to or more, but more typically are about . They have a vining habit and require a trellis or similar support structure. They should get 4–6 hours of sunlight each day. Plant pea seeds in soil with a pH of between 5.8 and 7.0 for best results. Cultivars Below is a list of several snap pea cultivars currently available, ordered by days to maturity. Days to maturity is from germination to edible pod stage; add about 7 days to estimate shell pea stage. Amish Snap is the only true heirloom snap pea. PMR indicates some degree of powdery mildew resistance; afila types, also called semi-leafless, maintain an erect, interlocked, plant habit that allows good air movement through the canopy and reduces risk from lodging and mold. Production Commercial snap peas for export are produced in Peru, Guatemala, Colombia, Zimbabwe, Kenya and China. Uses Culinary Snap peas are often served in salads or eaten whole. They may also be stir-fried or steamed. Before being eaten, mature snap pea pods may need to be "stringed," which means the membranous string running along the top of the pod from base to tip is removed. Over-cooking the pods will make them come apart.
Biology and health sciences
Pulses
Plants
1470503
https://en.wikipedia.org/wiki/Paleoarchean
Paleoarchean
The Paleoarchean, also spelled Palaeoarchaean (formerly known as the early Archean), is a geologic era within the Archean Eon. The name derives from the Greek "palaios", meaning "ancient". It spans the period from 3,600 to 3,200 million years ago. The era is defined chronometrically and is not referenced to a specific level of a rock section on Earth. The earliest confirmed evidence of life comes from this era, and Vaalbara, one of Earth's earliest supercontinents, may have formed during this era. Early life The geological record from the Paleoarchean era is very limited. Due to deformation and metamorphism, most rocks from the Paleoarchean era cannot provide any useful information. There are only two locations in the world containing rock formations that are intact enough to preserve evidence of early life: the Kaapvaal Craton in Southern Africa and the Pilbara Craton in Western Australia. The Dresser Formation is located in the Pilbara Craton, and contains sedimentary rock from the Paleoarchean Era. It is estimated to be 3.48 billion years old. The Dresser Formation includes a great variety of structures caused by ancient life, including stromatolites and MISS once formed by microbial mats. Such microbial mats belong to the oldest ascertained life form and may include fossilized bacteria. The Strelley Pool Chert, also located in the Pilbara Craton, contains stromatolites that may have been created by bacteria 3.4 billion years ago. However, it is possible that these stromatolites are abiogenic and were actually formed through evaporitic precipitation and then deposited on the sea floor. The Barberton Greenstone Belt, located in the Kaapvaal Craton, also contains evidence of life. It was created around 3.26 Ga when a large asteroid, about wide, collided with the Earth. The Buck Reef chert and the Josefsdal chert, two rock formations in the Barberton Greenstone Belt, both contain microbial mats with fossilized bacteria from the Paleoarchean era. The Kromberg Formation, near the top of the Onverwacht Group, which itself is a part of the Barberton Greenstone Belt, dates back to approximately 3.416–3.334 Ga and contains evidence of microbial life reproducing via multiple fission and binary fission. Continental development Similarities between the Barberton Greenstone Belt in the Kaapvaal Craton and the eastern part of the Pilbara Craton indicate that the two formations were once joined as part of the supercontinent Vaalbara, one of Earth's earliest supercontinents. Both cratons formed at the beginning of the Paleoarchean era. While some paleomagnetic data suggests that they were connected during the Paleoarchean era, it is possible that Vaalbara did not form until the Mesoarchean or Neoarchean eras. It is also unclear whether there was any exposed land during the Paleoarchean era. Although several Paleoarchean formations such as the Dresser Formation, the Josefsdal Chert, and the Mendon Formation show some evidence of being above the surface, over 90 percent of Archean continental crust has been destroyed, making the existence of exposed land practically impossible to confirm or deny. It is likely that during the Paleoarchean era, there was a large amount of continental crust, but it was still underwater and would not emerge until later in the Archean era. Hotspot islands may have been the only exposed land at the time. Due to a much hotter mantle and an elevated oceanic geothermal gradient compared to the present day, plate tectonics in its modern form did not exist during the Paleoarchean.
Instead, a model of "flake tectonics" has been proposed for this era of geologic time. According to this model, instead of normal subduction of oceanic plates, extensively silicified upper oceanic crust delaminated from lower oceanic crust and was deposited in a manner similar to ophiolites from the later Proterozoic and Phanerozoic eons. Meteoric impact Researchers from Harvard, Stanford, and ETH Zürich estimate that the S2 meteorite impact that occurred in this era was from 50 to 200 times the size of the meteorite impact that largely caused the Cretaceous–Paleogene extinction event. It occurred approximately 3.26 billion years ago. The impact immediately redistributed iron(II) (Fe2+) from the lower oceanic chemocline through tsunamis that probably continued for days. In the next years and decades, several things occurred. Dust from the bolide containing phosphorus and iron fell on land and into the sea. Weathering and erosion brought new material ("fallback") into the sea, including new crystallite (also called grains) pseudomorphs. The heat generated through the collision continuously boiled the upper layers of water, which concentrated the Fe2+, organic carbon, and various nutrients. Over thousands of years, these processes created iron(III) hydroxide (Fe(OH)3) in both sea and sediment that would benefit iron-favoring bacteria and archaea. This meant that these Paleoarchean life forms would have recovered rapidly.
Physical sciences
Geological timescale
Earth science
2127679
https://en.wikipedia.org/wiki/Number%20density
Number density
The number density (symbol: n or ρN) is an intensive quantity used to describe the degree of concentration of countable objects (particles, molecules, phonons, cells, galaxies, etc.) in physical space: three-dimensional volumetric number density, two-dimensional areal number density, or one-dimensional linear number density. Population density is an example of areal number density. The term number concentration (symbol: lowercase n, or C, to avoid confusion with amount of substance indicated by uppercase N) is sometimes used in chemistry for the same quantity, particularly when comparing with other concentrations. Definition Volume number density is the number of specified objects per unit volume: n = N/V, where N is the total number of objects in a volume V. Here it is assumed that N is large enough that rounding of the count to the nearest integer does not introduce much of an error, however V is chosen to be small enough that the resulting n does not depend much on the size or shape of the volume V because of large-scale features. Area number density is the number of specified objects per unit area, A: nA = N/A. Similarly, linear number density is the number of specified objects per unit length, L: nL = N/L. Column number density is a kind of areal density, the number or count of a substance per unit area, obtained by integrating the volumetric number density along a vertical path: ncol = ∫ n dz. It is related to column mass density, with the volumetric number density replaced by the volume mass density. Units In SI units, number density is measured in m−3, although cm−3 is often used. However, these units are not quite practical when dealing with atoms or molecules of gases, liquids or solids at room temperature and atmospheric pressure, because the resulting numbers are extremely large (on the order of 10^20). Using the number density of an ideal gas at and as a yardstick: is often introduced as a unit of number density, for any substances at any conditions (not necessarily limited to an ideal gas at and ). Usage Using the number density as a function of spatial coordinates, the total number of objects N in the entire volume V can be calculated as N = ∫V n(x, y, z) dV, where dV = dx dy dz is a volume element. If each object possesses the same mass m0, the total mass m of all the objects in the volume V can be expressed as m = m0 ∫V n(x, y, z) dV. Similar expressions are valid for electric charge or any other extensive quantity associated with countable objects. For example, replacing m with q (total charge) and m0 with q0 (charge of each object) in the above equation will lead to a correct expression for charge. The number density of solute molecules in a solvent is sometimes called concentration, although usually concentration is expressed as a number of moles per unit volume (and thus called molar concentration). Relation to other quantities Molar concentration For any substance, the number density can be expressed in terms of its amount concentration c (in mol/m3) as n = NA c, where NA is the Avogadro constant. This is still true if the spatial dimension unit, metre, in both n and c is consistently replaced by any other spatial dimension unit, e.g. if n is in cm−3 and c is in mol/cm3, or if n is in L−1 and c is in mol/L, etc. Mass density For atoms or molecules of a well-defined molar mass M (in kg/mol), the number density can sometimes be expressed in terms of their mass density ρm (in kg/m3) as n = NA ρm / M. Note that the ratio M/NA is the mass of a single atom or molecule in kg. Examples The following table lists common examples of number densities at and , unless otherwise noted.
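As a rough illustration of the molar concentration and mass density relations above, the short Python sketch below converts an assumed mass density for liquid water into a molar concentration and a number density; the density and molar mass figures are approximate assumptions for illustration, not values taken from the article.

```python
# Illustrative sketch (assumed approximate values): number density of water
# molecules in liquid water, using n = N_A * c = N_A * rho_m / M.
N_A = 6.02214076e23      # Avogadro constant, 1/mol
rho_m = 997.0            # mass density of liquid water, kg/m^3 (assumed, ~25 C)
M = 0.018015             # molar mass of water, kg/mol (assumed)

c = rho_m / M            # amount (molar) concentration, mol/m^3
n = N_A * c              # number density, m^-3

print(f"molar concentration c ~ {c:.3e} mol/m^3")  # ~5.5e+04 mol/m^3
print(f"number density n ~ {n:.3e} m^-3")          # ~3.3e+28 molecules per m^3
```

The result, a few times 10^28 molecules per cubic metre, illustrates why plain SI units give such large numbers for condensed matter.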
Physical sciences
Mixture
Chemistry
2128113
https://en.wikipedia.org/wiki/Australopithecus%20garhi
Australopithecus garhi
Australopithecus garhi is a species of australopithecine from the Bouri Formation in the Afar Region of Ethiopia 2.6–2.5 million years ago (mya) during the Early Pleistocene. The first remains were described in 1999 based on several skeletal elements uncovered in the three years preceding. A. garhi was originally considered to have been a direct ancestor to Homo and the human line, but is now thought to have been an offshoot. Like other australopithecines, A. garhi had a brain volume of ; a jaw which jutted out (prognathism); relatively large molars and premolars; adaptations for both walking on two legs (bipedalism) and grasping while climbing (arboreality); and it is possible that, though unclear if, males were larger than females (exhibited sexual dimorphism). One individual, presumed female based on size, may have been tall. A. garhi is the first pre-Homo hominin postulated to have manufactured tools—using them in butchering—and may be counted among a growing body of evidence for pre-Homo stone tool industries (the ability to manufacture tools was previously believed to have separated Homo from predecessors.) A. garhi possibly produced the Oldowan industry which was previously considered to have been invented by the later H. habilis, though this may have instead been produced by contemporary Homo. Anatomy Like other australopithecines, A. garhi had a brain volume of about , a sagittal crest running along the midline of the skull, and a prognathic jaw (the jaw jutted out). Relatively, the postcanine teeth, the molars and premolars, are massive (post-canine megadontia), similar to or greater than those of other australopithecines and of the large-toothed Paranthropus robustus. Like the earlier A. afarensis from the same region, A. garhi had a humanlike humerus to femur ratio, and an apelike brachial index (lower to upper arm ratio) as well as curved phalanges of the hand. This is generally interpreted as adaptations for both walking on two legs (habitual bipedalism) as well as for grasping while climbing in trees (arboreality). The BOU-VP-35/1 humerus specimen is notably larger than the humerus of the BOU-VP-12/1 specimen, which could potentially indicate size-specific sexual dimorphism with males larger than females to a similar degree to what is postulated in A. afarensis, but it is unclear if this does not represent normal size variation of the same sex as this is based on only two specimens. Nonetheless, on the basis of size, BOU-VP-12/130 is considered male and BOU-VP-17/1 female. Contemporary hominins from Kenya are about the same size as A. garhi. BOU-VP-17/1 may have been about tall. Australopithecus are thought to have had fast, apelike growth rates, lacking an extended childhood typical of modern humans. However, the legs of A. garhi are elongated, unlike those of other Australopithecus, and, in humans, elongated limbs develop during the delayed adolescent growth spurt. This could mean that A. garhi, compared to other Australopithecus, either had a slower overall growth rate, or a more rapid leg growth rate. Taxonomy The Ethiopian Australopithecus garhi was first described in 1999 by palaeoanthropologists Berhane Asfaw, Tim D. White, Owen Lovejoy, Bruce Latimer, Scott Simpson, and Gen Suwa based on fossils discovered in the Hatayae Beds of the Bouri Formation in Middle Awash, Afar Region, Ethiopia. 
The first hominin remains were discovered here in 1990—a partial parietal bone (GAM-VP-1/2), left jawbone (GAM-VP-1/1), and left humerus (MAT-VP-1/1)—which are unassignable to a specific genus. The first identifiable Australopithecus fossils–an adult ulna (BOU-VP-11/1)–were found on 17 November 1996 by T. Assebework. A partial skeleton (BOU-VP-12/1) was discovered 13 days later by White, comprising a mostly complete left femur, right humerus, radius, and ulna, and a partial fibula, foot, and jawbone. The holotype specimen, a partial skull (BOU-VP-12/130), was discovered on 20 November 1997 by Ethiopian palaeoanthropologist Yohannes Haile-Selassie. More skull fragments (BOU-VP-12/87) were recovered south of BOU-VP-12/1. On 17 November 1997, French palaeoanthropologist Alban Defleur discovered a complete mandible (BOU-VP-17/1) about north in the Esa Dibo locality of the formation, and American palaeoanthropologist David DeGusta discovered a humerus (BOU-VP-35/1) north of BOU-VP-17/1. However, BOU-VP-11, -12, and -35 cannot conclusively be attributed to A. garhi. The remains are dated to about 2.5 million years ago (mya) based on argon–argon dating. When they were discovered, human evolution was obscured due to a paucity of remains from 3 to 2 mya, with the only hominins from this timespan being identified from South Africa (A. africanus) and Lake Turkana, Kenya (Paranthropus aethiopicus). Likewise, the classification of australopithecines and pre-Homo erectus hominins has been the subject of much debate. The original describers considered A. garhi to be a descendant of the earlier A. afarensis which inhabited the same region, based mainly on dental similarities. Though they assigned the species to Australopithecus, the original describers believed it could represent an ancestor to Homo, which, if the case, would possibly lead to reclassification as H. garhi. Because the characteristics of A. garhi are unexpected for a human ancestor at this stage, the specific name, garhi, means "surprise" in the local Afar language. In 1999, American palaeoanthropologists David Strait and Frederick E. Grine concluded that A. garhi was instead an offshoot of the human line instead of an ancestor because A. garhi and Homo share no synapomorphies (traits unique to only them). In 2015, Homo was recorded from 2.8 mya, much earlier than A. garhi. Palaeoecology The large teeth of Australopithecus species have historically been interpreted as having been adaptations for a diet of hard foods, but the durable teeth may instead have only served an important function during leaner times for harder fallback foods. That is, dental anatomy may not accurately portray normal Australopithecus diet, rather abnormal diet during times of famine. Though it was not found with any tools, mammalian bones associated with the A. garhi remains exhibit cut and percussion marks made from stone tools: the left mandible of an alcelaphine bovid with three successive, unambiguous cut marks presumably made while removing the tongue; a bovid tibia with cut marks, chop marks, and impact scars from a hammerstone, possibly inflicted to harvest the bone marrow; and a Hipparion (a horse) femur with cut marks consistent with dismemberment and filleting. 
As to why stone tools were not present, because the Hatayae locality was likely a featureless, grassy lake margin with so few raw materials for making stone tools, it is possible these hominins were creating and carrying tools some ways with them to butchering sites, intending to use them many times before discarding. It was previously believed that only Homo could manufacture tools; but it is also possible that the butcherers were not manufacturing tools and simply used naturally sharp rocks. At the nearby Gona site, where there is an abundance of raw materials, several Oldowan tools (an industry previously believed to have been invented by H. habilis) were recovered from 1992 to 1994. The tools date to around 2.6–2.5 mya, the oldest evidence of manufacturing at the time, and since A. garhi was the only species identified in the vicinity at the time, this species was the best candidate for authorship. However, in 2015, the earliest remains of Homo, LD 350-1, were discovered in Ledi-Geraru, also in the Afar Region, dating to 2.8–2.75 mya. More stone tools were found in 2019 dating to about 2.6 mya in Ledi-Geraru, predating the Gona artifacts, and these may be attributed to Homo; the invention of sharp-edged Oldowan tools could actually be due to specific adaptations characteristic of Homo. Nonetheless, other australopithecines have been associated with stone tool manufacturing, such as the 2010 discovery of cut marks dating to 3.4 mya attributed to A. afarensis, and the 2015 discovery of the Lomekwi culture from Lake Turkana dating to 3.3 mya possibly attributed to Kenyanthropus.
Biology and health sciences
Australopithecines
Biology
29247528
https://en.wikipedia.org/wiki/Earth%27s%20shadow
Earth's shadow
Earth's shadow (or Earth shadow) is the shadow that Earth itself casts through its atmosphere and into outer space, toward the antisolar point. During the twilight period (both early dusk and late dawn), the shadow's visible fringe – sometimes called the dark segment or twilight wedge – appears as a dark and diffuse band just above the horizon, most distinct when the sky is clear. Since the angular sizes of the Sun and the Moon, visible from the surface of the Earth, are almost the same, the ratio of the length of the Earth's shadow to the distance between the Earth and the Moon will be almost equal to the ratio of the sizes of the Earth and the Moon. Since Earth's diameter is 3.7 times the Moon's, the length of the planet's umbra is correspondingly 3.7 times the average distance from the Moon to the Earth: roughly . The width of the Earth's shadow at the distance of the lunar orbit is approximately 9000 km (~ 2.6 lunar diameters), which allows people of the Earth to observe total lunar eclipses. Appearance Earth's shadow cast onto the atmosphere can be viewed during the "civil" stage of twilight, assuming the sky is clear and the horizon is relatively unobstructed. The shadow's fringe appears as a dark bluish to purplish band that stretches over 180° of the horizon opposite the Sun, i.e. in the eastern sky at dusk and in the western sky at dawn. Before sunrise, Earth's shadow appears to recede as the Sun rises; after sunset, the shadow appears to rise as the Sun sets. Earth's shadow is best seen when the horizon is low, such as over the sea, and when the sky conditions are clear. In addition, the higher the observer's elevation is to view the horizon, the sharper the shadow appears. Belt of Venus A related phenomenon in the same part of the sky is the Belt of Venus, or anti-twilight arch, a pinkish band visible above the bluish shade of Earth's shadow, named after the planet Venus which, when visible, is typically located in this region of the sky. No defined line divides the Earth's shadow and the Belt of Venus; one colored band blends into the other in the sky. The Belt of Venus is quite a different phenomenon from the afterglow, which appears in the geometrically opposite part of the sky. Color When the Sun is near the horizon around sunset or sunrise, the sunlight appears reddish. This is because the light rays are penetrating an especially thick layer of the atmosphere, which works as a filter, scattering all but the longer (redder) wavelengths. From the observer's perspective, the red sunlight directly illuminates small particles in the lower atmosphere in the sky opposite of the Sun. The red light is backscattered to the observer, which is the reason why the Belt of Venus appears pink. The lower the setting Sun descends, the less defined the boundary between Earth's shadow and the Belt of Venus appears. This is because the setting Sun now illuminates a thinner part of the upper atmosphere. There the red light is not scattered because fewer particles are present, and the eye only sees the "normal" (usual) blue sky, which is due to Rayleigh scattering from air molecules. Eventually, both Earth's shadow and the Belt of Venus dissolve into the darkness of the night sky. Color of lunar eclipses Earth's shadow is as curved as the planet is, and its umbra extends into outer space. (The antumbra, however, extends indefinitely.) 
When the Sun, Earth, and the Moon are aligned perfectly (or nearly so), with Earth between the Sun and the Moon, Earth's shadow falls onto the lunar surface facing the night side of the planet, such that the shadow gradually darkens the full Moon, causing a lunar eclipse. Even during a total lunar eclipse, a small amount of sunlight however still reaches the Moon. This indirect sunlight has been refracted as it passed through Earth's atmosphere. The air molecules and particulates in Earth's atmosphere scatter the shorter wavelengths of this sunlight; thus, the longer wavelengths of reddish light reaches the Moon, in the same way that light at sunset or sunrise appears reddish. This weak red illumination gives the eclipsed Moon a dimly reddish or copper color.
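The shadow geometry quoted above can be checked with a short back-of-the-envelope calculation. The sketch below uses assumed round-number values for the Earth and Moon diameters and the mean Earth–Moon distance, and reproduces the approximate umbra length and width figures given earlier.

```python
# Back-of-the-envelope check of the umbra geometry (assumed round-number values).
earth_diameter = 12742.0        # km
moon_diameter = 3474.0          # km
earth_moon_distance = 384400.0  # km, mean value

# The Sun and Moon have nearly equal angular sizes, so the Moon's umbra just
# reaches Earth; shadow length therefore scales with the occluder's diameter.
umbra_length = earth_moon_distance * earth_diameter / moon_diameter
print(f"umbra length ~ {umbra_length:,.0f} km")  # ~1.4 million km

# The umbra is a cone tapering linearly to its tip, so its width at the Moon's
# distance is the Earth's diameter scaled by the remaining fraction of the cone.
width_at_moon = earth_diameter * (1 - earth_moon_distance / umbra_length)
print(f"width at lunar distance ~ {width_at_moon:,.0f} km "
      f"(~{width_at_moon / moon_diameter:.1f} lunar diameters)")
```

With these assumed inputs the umbra comes out at roughly 1.4 million km long and a little over 9,000 km wide at the Moon's distance, consistent with the figures stated above.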
Physical sciences
Solar System
Astronomy
3966982
https://en.wikipedia.org/wiki/Noise%20%28electronics%29
Noise (electronics)
In electronics, noise is an unwanted disturbance in an electrical signal. Noise generated by electronic devices varies greatly as it is produced by several different effects. In particular, noise is inherent in physics and central to thermodynamics. Any conductor with electrical resistance will generate thermal noise inherently. The final elimination of thermal noise in electronics can only be achieved cryogenically, and even then quantum noise would remain inherent. Electronic noise is a common component of noise in signal processing. In communication systems, noise is an error or undesired random disturbance of a useful information signal in a communication channel. The noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. Noise is, however, typically distinguished from interference, for example in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and signal-to-noise plus interference ratio (SNIR) measures. Noise is also typically distinguished from distortion, which is an unwanted systematic alteration of the signal waveform by the communication equipment, for example in signal-to-noise and distortion ratio (SINAD) and total harmonic distortion plus noise (THD+N) measures. While noise is generally unwanted, it can serve a useful purpose in some applications, such as random number generation or dither. Uncorrelated noise sources add according to the sum of their powers. Noise types Different types of noise are generated by different devices and different processes. Thermal noise is unavoidable at non-zero temperature (see fluctuation-dissipation theorem), while other types depend mostly on device type (such as shot noise, which needs a steep potential barrier) or manufacturing quality and semiconductor defects, such as conductance fluctuations, including 1/f noise. Thermal noise Johnson–Nyquist noise (more often thermal noise) is unavoidable, and generated by the random thermal motion of charge carriers (usually electrons), inside an electrical conductor, which happens regardless of any applied voltage. Thermal noise is approximately white, meaning that its power spectral density is nearly equal throughout the frequency spectrum. The amplitude of the signal has very nearly a Gaussian probability density function. A communication system affected by thermal noise is often modelled as an additive white Gaussian noise (AWGN) channel. Shot noise Shot noise in electronic devices results from unavoidable random statistical fluctuations of the electric current when the charge carriers (such as electrons) traverse a gap. If electrons flow across a barrier, then they have discrete arrival times. Those discrete arrivals exhibit shot noise. Typically, the barrier in a diode is used. Shot noise is similar to the noise created by rain falling on a tin roof. The flow of rain may be relatively constant, but the individual raindrops arrive discretely. The root-mean-square value of the shot noise current i_n is given by the Schottky formula i_n = √(2 I q ΔB), where I is the DC current, q is the charge of an electron, and ΔB is the bandwidth in hertz. The Schottky formula assumes independent arrivals. Vacuum tubes exhibit shot noise because the electrons randomly leave the cathode and arrive at the anode (plate). A tube may not exhibit the full shot noise effect: the presence of a space charge tends to smooth out the arrival times (and thus reduce the randomness of the current).
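To give a sense of the magnitudes involved, here is a minimal numerical sketch of the Schottky formula above; the 1 mA DC current and 10 kHz bandwidth are assumed example values, not figures from the article.

```python
from math import sqrt

# Shot noise RMS current from the Schottky formula i_n = sqrt(2 * I * q * dB).
q = 1.602176634e-19  # elementary charge, C
I = 1e-3             # DC current, A (assumed 1 mA, for illustration only)
dB = 10e3            # bandwidth, Hz (assumed 10 kHz)

i_n = sqrt(2 * I * q * dB)
print(f"shot noise current ~ {i_n:.2e} A rms")  # ~1.8e-09 A, i.e. a few nA
```

With these assumed values the shot-noise current is on the order of a few nanoamperes.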
Pentodes and screen-grid tetrodes exhibit more noise than triodes because the cathode current splits randomly between the screen grid and the anode. Conductors and resistors typically do not exhibit shot noise because the electrons thermalize and move diffusively within the material; the electrons do not have discrete arrival times. Shot noise has been demonstrated in mesoscopic resistors when the size of the resistive element becomes shorter than the electron–phonon scattering length. Partition noise Where current divides between two (or more) paths, noise occurs as a result of random fluctuations that occur during this division. For this reason, a transistor will have more noise than the combined shot noise from its two PN junctions. Flicker noise Flicker noise, also known as 1/f noise, is a signal or process with a frequency spectrum that falls off steadily into the higher frequencies, with a pink spectrum. It occurs in almost all electronic devices and results from a variety of effects. Burst noise Burst noise consists of sudden step-like transitions between two or more discrete voltage or current levels, as high as several hundred microvolts, at random and unpredictable times. Each shift in offset voltage or current lasts for several milliseconds to seconds. It is also known as popcorn noise for the popping or crackling sounds it produces in audio circuits. Transit-time noise If the time taken by the electrons to travel from emitter to collector in a transistor becomes comparable to the period of the signal being amplified, that is, at frequencies above VHF and beyond, the transit-time effect takes place and the noise input impedance of the transistor decreases. From the frequency at which this effect becomes significant, it increases with frequency and quickly dominates other sources of noise. Coupled noise While noise may be generated in the electronic circuit itself, additional noise energy can be coupled into a circuit from the external environment, by inductive coupling or capacitive coupling, or through the antenna of a radio receiver. Sources Intermodulation noise Caused when signals of different frequencies share the same non-linear medium. Crosstalk Phenomenon in which a signal transmitted in one circuit or channel of a transmission system creates undesired interference with a signal in another channel. Interference Modification or disruption of a signal travelling along a medium. Atmospheric noise Also called static noise, it is caused by lightning discharges in thunderstorms and other electrical disturbances occurring in nature, such as corona discharge. Industrial noise Sources such as automobiles, aircraft, ignition systems, electric motors and switching gear, high-voltage wires, and fluorescent lamps cause industrial noise. These noises are produced by the discharge present in all these operations. Solar noise Noise that originates from the Sun is called solar noise. Under normal conditions, there is approximately constant radiation from the Sun due to its high temperature, but solar storms can cause a variety of electrical disturbances. The intensity of solar noise varies over time in a solar cycle. Cosmic noise Distant stars generate noise called cosmic noise. While these stars are too far away to individually affect terrestrial communications systems, their large number leads to appreciable collective effects. Cosmic noise has been observed in a range from 8 MHz to 1.43 GHz, the latter frequency corresponding to the 21-cm hydrogen line.
Apart from man-made noise, it is the strongest component over the range of about 20 to 120 MHz. Little cosmic noise below 20MHz penetrates the ionosphere, while its eventual disappearance at frequencies in excess of 1.5 GHz is probably governed by the mechanisms generating it and its absorption by hydrogen in interstellar space. Mitigation In many cases noise found on a signal in a circuit is unwanted. There are many different noise reduction techniques that can reduce the noise picked up by a circuit. Faraday cage – A Faraday cage enclosing a circuit can be used to isolate the circuit from external noise sources. A Faraday cage cannot address noise sources that originate in the circuit itself or those carried in on its inputs, including the power supply. Capacitive coupling – Capacitive coupling allows an AC signal from one part of the circuit to be picked up in another part through the interaction of electric fields. Where coupling is unintended, the effects can be addressed through improved circuit layout and grounding. Ground loops – When grounding a circuit, it is important to avoid ground loops. Ground loops occur when there is a voltage difference between two ground connections. A good way to fix this is to bring all the ground wires to the same potential in a ground bus. Shielding cables – A shielded cable can be thought of as a Faraday cage for wiring and can protect the wires from unwanted noise in a sensitive circuit. The shield must be grounded to be effective. Grounding the shield at only one end can avoid a ground loop on the shield. Twisted pair wiring – Twisting wires in a circuit will reduce electromagnetic noise. Twisting the wires decreases the loop size in which a magnetic field can run through to produce a current between the wires. Small loops may exist between wires twisted together, but the magnetic field going through these loops induces a current flowing in opposite directions in alternate loops on each wire and so there is no net noise current. Notch filters – Notch filters or band-rejection filters are useful for eliminating a specific noise frequency. For example, power lines within a building run at 50 or 60 Hz line frequency. A sensitive circuit will pick up this frequency as noise. A notch filter tuned to the line frequency can remove the noise. Thermal noise can be reduced by cooling of circuits - this is typically only employed in high accuracy high-value applications such as radio telescopes. Quantification The noise level in an electronic system is typically measured as an electrical power N in watts or dBm, a root mean square (RMS) voltage (identical to the noise standard deviation) in volts, dBμV or a mean squared error (MSE) in volts squared. Examples of electrical noise-level measurement units are dBu, dBm0, dBrn, dBrnC, and dBrn(f1 − f2), dBrn(144-line). Noise may also be characterized by its probability distribution and noise spectral density N0(f) in watts per hertz. A noise signal is typically considered as a linear addition to a useful information signal. Typical signal quality measures involving noise are signal-to-noise ratio (SNR or S/N), signal-to-quantization noise ratio (SQNR) in analog-to-digital conversion and compression, peak signal-to-noise ratio (PSNR) in image and video coding and noise figure in cascaded amplifiers. In a carrier-modulated passband analogue communication system, a certain carrier-to-noise ratio (CNR) at the radio receiver input would result in a certain signal-to-noise ratio in the detected message signal. 
In a digital communications system, a certain Eb/N0 (normalized signal-to-noise ratio) would result in a certain bit error rate. Telecommunication systems strive to increase the ratio of signal level to noise level in order to effectively transfer data. Noise in telecommunication systems is a product of both internal and external sources to the system. Noise is a random process, characterized by stochastic properties such as its variance, distribution, and spectral density. The spectral distribution of noise can vary with frequency, so its power density is measured in watts per hertz (W/Hz). Since the power in a resistive element is proportional to the square of the voltage across it, noise voltage (density) can be described by taking the square root of the noise power density, resulting in volts per root hertz (). Integrated circuit devices, such as operational amplifiers commonly quote equivalent input noise level in these terms (at room temperature). Dither If the noise source is correlated with the signal, such as in the case of quantisation error, the intentional introduction of additional noise, called dither, can reduce overall noise in the bandwidth of interest. This technique allows retrieval of signals below the nominal detection threshold of an instrument. This is an example of stochastic resonance.
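Returning to the quantification measures above, the following minimal sketch computes a resistor's thermal (Johnson–Nyquist) noise voltage density in nV/√Hz and a simple signal-to-noise ratio in decibels; the resistance, temperature, bandwidth, and signal level are all assumed example figures, not values from the article.

```python
from math import sqrt, log10

# Thermal (Johnson-Nyquist) noise voltage density of a resistor: sqrt(4*k*T*R).
k = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0         # temperature, K (assumed room temperature)
R = 1e3           # resistance, ohm (assumed 1 kOhm)

e_n = sqrt(4 * k * T * R)                 # V per sqrt(Hz)
print(f"noise voltage density ~ {e_n * 1e9:.2f} nV/sqrt(Hz)")  # ~4.07

# RMS noise in an assumed 20 kHz bandwidth, and SNR against an assumed 1 mV signal.
bandwidth = 20e3                # Hz (assumed)
v_noise = e_n * sqrt(bandwidth) # ~0.58 uV rms
v_signal = 1e-3                 # V rms (assumed)

snr_db = 20 * log10(v_signal / v_noise)
print(f"noise ~ {v_noise * 1e6:.2f} uV rms, SNR ~ {snr_db:.1f} dB")  # ~64.8 dB
```

The roughly 4 nV/√Hz figure for a 1 kΩ resistor at room temperature matches the order of magnitude commonly quoted on operational amplifier datasheets for equivalent input noise.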
Technology
Signal processing
null
24644691
https://en.wikipedia.org/wiki/Freshwater%20snail
Freshwater snail
Freshwater snails are gastropod mollusks that live in fresh water. There are many different families. They are found throughout the world in various habitats, ranging from ephemeral pools to the largest lakes, and from small seeps and springs to major rivers. The great majority of freshwater gastropods have a shell, with very few exceptions. Some groups of snails that live in freshwater respire using gills, whereas other groups need to reach the surface to breathe air. In addition, some are amphibious and have both gills and a lung (e.g. Ampullariidae). Most feed on algae, but many are detritivores and some are filter feeders. Freshwater snails are indirectly among the deadliest animals to humans, as they carry parasitic worms that cause schistosomiasis, a disease estimated to kill between 10,000 and 200,000 people annually. There are thousands of known species, and at least 33–38 independent lineages of gastropods have successfully colonized freshwater environments. It is not possible to quantify the exact number of these lineages yet, because they have yet to be clarified within the Cerithioidea. From six to eight of these independent lineages occur in North America. Taxonomy According to a 2008 review of the taxonomy, there are about 4,000 species of freshwater gastropods (3,795–3,972). As of 2023, there are 5182 known species of fossil freshwater gastropods. The following cladogram is an overview of the main clades of gastropods based on the taxonomy of Bouchet & Rocroi (2005), modified after Jörger et al. (2010) and simplified with families that contain freshwater species marked in boldface: (Marine gastropods (Siphonarioidea, Sacoglossa, Amphiboloidea, Pyramidelloidea) are not depicted within Panpulmonata for simplification. Some of these highlighted families consist entirely of freshwater species, but some of them also contain, or even mainly consist of, marine species.) Neritimorpha The Neritimorpha are a group of primitive "prosobranch" gilled snails which have a shelly operculum. Neritiliidae - 5 extant freshwater species Neritidae - largely confined to the tropics, also the rivers of Europe, family includes the marine "nerites". There are about 110 extant freshwater species. Caenogastropoda The Caenogastropoda are a large group of gilled operculate snails, which are largely marine. In freshwater habitats there are ten major families of caenogastropods, as well as several other families of lesser importance: Architaenioglossa Ampullariidae - an exclusively freshwater family that is largely tropical and includes the large "apple snails" kept in aquaria. About 105–170 species. Viviparidae - medium to large snails, live-bearing, commonly referred to as "mystery snails". Worldwide except South America, and everywhere confined to fresh waters. About 125–150 species. Sorbeoconcha Melanopsidae - family native to rivers draining to the Mediterranean, also Middle East, and some South Pacific islands. About 25–50 species. Pachychilidae - 165–225 species. native to South and Central America. Formerly included with the Pleuroceridae by many authors. Paludomidae - about 100 species in south Asia, diverse in African Lakes, and Sri Lanka. Formerly classified with the Pleuroceridae by some authors. Pleuroceridae - abundant and diverse in eastern North America, largely high-spired snails of small to large size. About 150 species. Semisulcospiridae - primarily eastern Asia, Japan, also the Juga snails of northwestern North America. Formerly included with the Pleuroceridae. About 50 species. 
Thiaridae - high-spired parthenogenetic snails of the tropics, includes those referred to as "trumpet snails" in aquaria. About 110 species. Littorinimorpha Littorinidae - 9 species in the genus Cremnoconchus are freshwater, living in streams and waterfalls. Other species are marine. Amnicolidae - about 200 species. Assimineidae - about 20 freshwater species, others are marine. Bithyniidae - small snails, native to the Eastern Hemisphere. About 130 species. Cochliopidae - about 246 species. Helicostoidae - the only species, Helicostoa sinensis, lives in China. Hydrobiidae - small to very small snails found worldwide. About 1250 freshwater species; others are marine. Lithoglyphidae - about 100 species. Moitessieriidae - about 55 species. Pomatiopsidae - small amphibious snails scattered worldwide, most diverse in eastern and Southeast Asia. About 170 species. Stenothyridae - about 60 freshwater species, others are marine. Neogastropoda Nassariidae - 8–10 freshwater species in the genera Anentome and Clea, native to Southeast Asia. Other Nassariidae are marine. Marginellidae - 2 freshwater species in the genus Rivomarginella, native to Southeast Asia. Other Marginellidae are marine. Heterobranchia Lower Heterobranchia Glacidorbidae - 20 species. Valvatidae - small low-spired snails referred to as "valve snails". 71 species. Acochlidiacea Acochlidiidae (including synonym Strubelliidae) - 5 shell-less species: Acochlidium amboinense, Acochlidium bayerfehlmanni, Acochlidium fijiiensis, Palliohedyle sutteri and Strubellia paradoxa. Tantulidae - there is only one species, the shell-less Tantulum elegans. Pulmonata, Basommatophora Basommatophorans are pulmonate or air-breathing aquatic snails, characterized by having their eyes located at the base of their tentacles, rather than at the tips, as in the true land snails Stylommatophora. The majority of basommatophorans have shells that are thin, translucent, and relatively colorless, and all five freshwater basommatophoran families lack an operculum. Chilinidae - small to medium-sized snails confined to temperate and cold South America. About 15 species. Latiidae - small limpet-like snails confined to New Zealand. One or three species. Acroloxidae - about 40 species. Lymnaeidae - found worldwide, but are most numerous in temperate and northern regions. These are the dextral (right-handed) pond snails. About 100 species. Planorbidae - "rams horn" snails, with a worldwide distribution. About 250 species. Physidae - left-handed (sinistral) "pouch snails", native to Europe, Asia, North America. About 80 species. Sexual reproduction and self-fertilization The freshwater snail Physa acuta is in the subclass Heterobranchia and the family Physidae. P. acuta is a self-fertile snail that can undergo either sexual reproduction or self-fertilization. Noel et al. experimentally tested whether accumulation of deleterious mutations is avoided either in inbreeding populations of the snail (undergoing self-fertilization) or in outbreeding populations undergoing sexual reproduction. Inbreeding promotes the homozygous expression of deleterious recessive mutations in progeny, which then exposes these mutations to selective elimination because of their deleterious effects on progeny. Outbreeding sexual reproduction allows females to choose male mating partners with smaller mutation loads, which also leads to a reduction of deleterious mutations in progeny. On the basis of their findings, Noel et al. concluded that both outbred and inbred populations of P.
acuta can efficiently eliminate deleterious mutations. As human food Several different freshwater snail species are eaten in Asian cuisine. Archaeological investigations in Guatemala have revealed that the diet of the Maya of the Classic Period (AD 250–900) included freshwater snails. Aquarium snails Freshwater snails are commonly found in aquaria along with tropical fish. Species available vary in different parts of the world. In the United States, commonly available species include ramshorn snails such as Planorbella duryi, apple snails such as Pomacea bridgesii, the high-spired thiarid Malaysian trumpet snail, Melanoides tuberculata, and several Neritina species. Parasitology Freshwater snails are widely known to be hosts in the lifecycles of a variety of human and animal parasites, particularly trematodes (or "flukes"). Some of these relations for prosobranch snails include Oncomelania in the family Pomatiopsidae as hosts of Schistosoma, and Bithynia, Parafossarulus and Amnicola as hosts of Opisthorchis. Thiara and Semisulcospira may host Paragonimus. Juga plicifera may host Nanophyetus salmincola. Basommatophoran snails are even more widely infected, with many Biomphalaria (Planorbidae) serving as hosts for Schistosoma mansoni, Fasciolopsis and other parasitic groups. The tiny Bulinus snails are hosts for Schistosoma haematobium. Lymnaeid snails (Lymnaeidae) serve as hosts for Fasciola and the cerceriae causing swimmer's itch. The term "neglected tropical diseases" applies to all snail-borne infections, including schistosomiasis, fascioliasis, fasciolopsiasis, paragonimiasis, opisthorchiasis, clonorchiasis, and angiostrongyliasis.
Biology and health sciences
Gastropods
Animals
26051975
https://en.wikipedia.org/wiki/Cattle
Cattle
Cattle (Bos taurus) are large, domesticated, bovid ungulates widely kept as livestock. They are prominent modern members of the subfamily Bovinae and the most widespread species of the genus Bos. Mature female cattle are called cows and mature male cattle are bulls. Young female cattle are called heifers, young male cattle are oxen or bullocks, and castrated male cattle are known as steers. Cattle are commonly raised for meat, for dairy products, and for leather. As draft animals, they pull carts and farm implements. In India, cattle are sacred animals within Hinduism, and may not be killed. Small breeds such as the miniature Zebu are kept as pets. Taurine cattle are widely distributed across Europe and temperate areas of Asia, the Americas, and Australia. Zebus are found mainly in India and tropical areas of Asia, America, and Australia. Sanga cattle are found primarily in sub-Saharan Africa. These types, sometimes classified as separate species or subspecies, are further divided into over 1,000 recognized breeds. Around 10,500 years ago, taurine cattle were domesticated from wild aurochs progenitors in central Anatolia, the Levant and Western Iran. A separate domestication event occurred in the Indian subcontinent, which gave rise to zebu. There were over 940 million cattle in the world by 2022. Cattle are responsible for around 7% of global greenhouse gas emissions. They were one of the first domesticated animals to have a fully-mapped genome. Etymology The term cattle was borrowed from Anglo-Norman (replacing native Old English terms like , now considered archaic, poetic, or dialectal), itself from Medieval Latin 'principal sum of money, capital', itself derived in turn from Latin 'head'. Cattle originally meant movable personal property, especially livestock of any kind, as opposed to real property (the land, which also included wild or small free-roaming animals such as chickens—they were sold as part of the land). The word is a variant of chattel (a unit of personal property) and closely related to capital in the economic sense. The word cow came via Anglo-Saxon (plural ), from Common Indo-European (genitive ) 'a bovine animal', cf. , . In older English sources such as the King James Version of the Bible, cattle often means livestock, as opposed to deer, which are wild. Characteristics Description Cattle are large artiodactyls, mammals with cloven hooves, meaning that they walk on two toes, the third and fourth digits. Like all bovid species, they can have horns, which are unbranched and are not shed annually. Coloration varies with breed; common colors are black, white, and red/brown, and some breeds are spotted or have mixed colors. Bulls are larger than cows of the same breed by up to a few hundred kilograms. British Hereford cows, for example, weigh , while the bulls weigh . Before 1790, beef cattle averaged only net. Thereafter, weights climbed steadily. Cattle breeds vary widely in size; the tallest and heaviest is the Chianina, where a mature bull may be up to at the shoulder, and may reach in weight. The natural life of domestic cattle is some 25–30 years. Beef cattle go to slaughter at around 18 months, and dairy cows at about five years. Digestive system Cattle are ruminants, meaning their digestive system is highly specialized for processing plant material such as grass rich in cellulose, a tough carbohydrate polymer which many animals cannot digest. 
They do this in symbiosis with micro-organisms – bacteria, fungi, and protozoa – that possess cellulases, enzymes that split cellulose into its constituent sugars. Among the many bacteria that contribute are Fibrobacter succinogenes, Ruminococcus flavefaciens, and Ruminococcus albus. Cellulolytic fungi include several species of Neocallimastix, while the protozoa include the ciliates Eudiplodinium maggie and Ostracodinium album. If the animal's feed changes over time, the composition of this microbiome changes in response. Cattle have one large stomach with four compartments; the rumen, reticulum, omasum, and abomasum. The rumen is the largest compartment and it harbours the most important parts of the microbiome. The reticulum, the smallest compartment, is known as the "honeycomb". The omasum's main function is to absorb water and nutrients from the digestible feed. The abomasum has a similar function to the human stomach. Cattle regurgitate and re-chew their food in the process of chewing the cud, like most ruminants. While feeding, cows swallow their food without chewing; it goes into the rumen for storage. Later, the food is regurgitated to the mouth, a mouthful at a time, where the cud is chewed by the molars, grinding down the coarse vegetation to small particles. The cud is then swallowed again and further digested by the micro-organisms in the cow's stomach. Reproduction The gestation period for a cow is about nine months long. The ratio of male to female offspring at birth is approximately 52:48. A cow's udder has two pairs of mammary glands or teats. Farms often use artificial insemination, the artificial deposition of semen in the female's genital tract; this allows farmers to choose from a wide range of bulls to breed their cattle. Estrus too may be artificially induced to facilitate the process. Copulation lasts several seconds and consists of a single pelvic thrust. Cows seek secluded areas for calving. Semi-wild Highland cattle heifers first give birth at 2 or 3 years of age, and the timing of birth is synchronized with increases in natural food quality. Average calving interval is 391 days, and calving mortality within the first year of life is 5%. Beef calves suckle an average of 5 times per day, spending some 46 minutes suckling. There is a diurnal rhythm in suckling, peaking at roughly 6am, 11:30am, and 7pm. Under natural conditions, calves stay with their mother until weaning at 8 to 11 months. Heifer and bull calves are equally attached to their mothers in the first few months of life. Cognition Cattle have a variety of cognitive abilities. They can memorize the locations of multiple food sources, and can retain memories for at least 48 days. Young cattle learn more quickly than adults, and calves are capable of discrimination learning, distinguishing familiar and unfamiliar animals, and between humans, using faces and other cues. Calves prefer their own mother's vocalizations to those of an unfamiliar cow. Vocalizations provide information on the age, sex, dominance status and reproductive status of the caller, and may indicate estrus in cows and competitive display in bulls. Cows can categorize images as familiar and unfamiliar individuals. Cloned calves from the same donor form subgroups, suggesting that kin discrimination may be a basis of grouping behaviour. Cattle use visual/brain lateralisation when scanning novel and familiar stimuli. They prefer to view novel stimuli with the left eye (using the right brain hemisphere), but the right eye for familiar stimuli. 
Individual cattle have also been observed to display different personality traits, such as fearfulness and sociability. Senses Vision is the dominant sense; cattle obtain almost half of their information visually. Being prey animals, cattle evolved to look out for predators almost all around, with eyes that are on the sides of their head rather than the front. This gives them a field of view of 330°, but limits binocular vision (and therefore stereopsis) to some 30° to 50°, compared to 140° in humans. They are dichromatic, like most mammals. Cattle avoid bitter-tasting foods, selecting sweet foods for energy. Their sensitivity to sour-tasting foods helps them to maintain optimal ruminal pH. They seek out salty foods by taste and smell to maintain their electrolyte balance. Their hearing is better than that of horses, but worse at localising sounds than goats, and much worse than dogs or humans. They can distinguish between live and recorded human speech. Olfaction probably plays a large role in their social life, indicating social and reproductive status. Cattle can tell when other animals are stressed by smelling the alarm chemicals in their urine. Cattle can be trained to recognise conspecific individuals using olfaction only. Behavior Dominance hierarchy Cattle live in a dominance hierarchy. This is maintained in several ways. Cattle often engage in mock fights where they test each other's strength in a non-aggressive way. Licking is primarily performed by subordinates and received by dominant animals. Mounting is a playful behavior shown by calves of both sexes and by bulls and sometimes by cows in estrus, however, this is not a dominance related behavior as has been found in other species. Dominance-associated aggressiveness does not correlate with rank position, but is closely related to rank distance between individuals. The horns of cattle are honest signals used in mate selection. Horned cattle attempt to keep greater distances between themselves and have fewer physical interactions than hornless cattle, resulting in more stable social relationships. In calves, agonistic behavior becomes less frequent as space allowance increases, but not as group size changes, whereas in adults, the number of agonistic encounters increases with group size. Dominance relationships in semi-wild highland cattle are very firm, with few overt aggressive conflicts: most disputes are settled by agonistic (non-aggressive, competitive) behaviors with no physical contact between opponents, reducing the risk of injury. Dominance status depends on age and sex, with older animals usually dominant to young ones and males dominant to females. Young bulls gain superior dominance status over adult cows when they reach about 2 years of age. Grazing behavior Cattle eat mixed diets, but prefer to eat approximately 70% clover and 30% grass. This preference has a diurnal pattern, with a stronger preference for clover in the morning, and the proportion of grass increasing towards the evening. When grazing, cattle vary several aspects of their bite, i.e. tongue and jaw movements, depending on characteristics of the plant they are eating. Bite area decreases with the density of the plants but increases with their height. Bite area is determined by the sweep of the tongue; in one study observing steers, bite area reached a maximum of approximately . Bite depth increases with the height of the plants. 
By adjusting their behavior, cattle obtain heavier bites in swards that are tall and sparse compared with short, dense swards of equal mass/area. Cattle adjust other aspects of their grazing behavior in relation to the available food; foraging velocity decreases and intake rate increases in areas of abundant palatable forage. Cattle avoid grazing areas contaminated by the faeces of other cattle more strongly than they avoid areas contaminated by sheep, but they do not avoid pasture contaminated by rabbits. Temperament and emotions In cattle, temperament or behavioral disposition can affect productivity, overall health, and reproduction. Five underlying categories of temperament traits have been proposed: shyness–boldness, exploration–avoidance, activity, aggressiveness, and sociability. There are many indicators of emotion in cattle. Holstein–Friesian heifers that had made clear improvements in a learning experiment had higher heart rates, indicating an emotional reaction to their own learning. After separation from their mothers, Holstein calves react, indicating low mood. Similarly, after hot-iron dehorning, calves react to the post-operative pain. The position of the ears has been used as an indicator of emotional state. Cattle can tell when other cattle are stressed by the chemicals in their urine. Cattle are gregarious, and even short-term isolation causes psychological stress. When heifers are isolated, vocalizations, heart rate and plasma cortisol all increase. When visual contact is re-instated, vocalizations rapidly decline; heart rate decreases more rapidly if the returning cattle are familiar to the previously isolated individual. Mirrors have been used to reduce stress in isolated cattle. Sleep The average sleep time of a domestic cow is about 4 hours a day. Cattle do have a stay apparatus, but do not sleep standing up; they lie down to sleep deeply. Genetics In 2009, the National Institutes of Health and the US Department of Agriculture reported having mapped the bovine genome. Cattle have some 22,000 genes, of which 80% are shared with humans; they have about 1000 genes that they share with dogs and rodents, but not with humans. Using this bovine "HapMap", researchers can track the differences between breeds that affect meat and milk yields. Early research focused on Hereford genetic sequences; a wider study mapped a further 4.2% of the cattle genome. Behavioral traits of cattle can be as heritable as some production traits, and often, the two can be related. The heritability of temperament (response to isolation during handling) has been calculated as 0.36 and 0.46 for habituation to handling. Rangeland assessments show that the heritability of aggressiveness in cattle is around 0.36. Quantitative trait loci have been found for a range of production and behavioral characteristics for both dairy and beef cattle. Evolution Phylogeny Cattle have played a key role in human history, having been domesticated since at least the early neolithic age. Archaeozoological and genetic data indicate that cattle were first domesticated from wild aurochs (Bos primigenius) approximately 10,500 years ago. There were two major areas of domestication: one in central Anatolia, the Levant and Western Iran, giving rise to the taurine line, and a second in the area that is now Pakistan, resulting in the indicine line. 
Modern mitochondrial DNA variation indicates the taurine line may have arisen from as few as 80 aurochs tamed in the upper reaches of Mesopotamia near the villages of Çayönü Tepesi in what is now southeastern Turkey, and Dja'de el-Mughara in what is now northern Syria. Although European cattle are largely descended from the taurine lineage, gene flow from African cattle (partially of indicine origin) contributed substantial genomic components to both southern European cattle breeds and their New World descendants. A study on 134 breeds showed that modern taurine cattle originated from Africa, Asia, North and South America, Australia, and Europe. Some researchers have suggested that African taurine cattle are derived from a third independent domestication from the North African aurochs. Whether there have been two or three domestications, European, African, and Asian cattle share much of their genomes both through their species ancestry and through repeated migrations of livestock and genetic material between species. Taxonomy Cattle were originally identified as three separate species: Bos taurus, the European or "taurine" cattle (including similar types from Africa and Asia); Bos indicus, the Indicine or "zebu"; and the extinct Bos primigenius, the aurochs. The aurochs is ancestral to both zebu and taurine cattle. They were later reclassified as one species, Bos taurus, with the aurochs (B. t. primigenius), zebu (B. t. indicus), and taurine (B. t. taurus) cattle as subspecies. However, this taxonomy is contentious, and authorities such as the American Society of Mammalogists treat these taxa as separate species. Complicating the matter is the ability of cattle to interbreed with other closely related species. Hybrid individuals and even breeds exist, not only between taurine cattle and zebu (such as the sanga cattle, Bos taurus africanus x Bos indicus), but also between one or both of these and some other members of the genus Bos – yaks (the dzo or yattle), banteng, and gaur. Hybrids such as the beefalo breed can even occur between taurine cattle and either species of bison, leading some authors to consider them part of the genus Bos, as well. The hybrid origin of some types may not be obvious – for example, genetic testing of the Dwarf Lulu breed, the only taurine-type cattle in Nepal, found them to be a mix of taurine cattle, zebu, and yak. The aurochs originally ranged throughout Europe, North Africa, and much of Asia. In historical times, its range became restricted to Europe, and the last known individual died in Mazovia, Poland, around 1627. Breeders have attempted to recreate a similar appearance to the aurochs by crossing traditional types of domesticated cattle, producing the Heck breed. A group of taurine-type cattle exists in Africa; they either represent an independent domestication event or were the result of crossing taurines domesticated elsewhere with local aurochs, but they are genetically distinct; some authors name them as a separate subspecies, Bos taurus africanus. The only pure African taurine breeds remaining are the N'Dama, Kuri and some varieties of the West African Shorthorn. Feral cattle are those that have been allowed to go wild. Populations exist in many parts of the world, sometimes on small islands. Some, such as Amsterdam Island cattle, Chillingham cattle, and Aleutian wild cattle, have become sufficiently distinct to be described as breeds.
Husbandry Practices Cattle are often raised by allowing herds to graze on the grasses of large tracts of rangeland. Raising cattle extensively in this manner allows the use of land that might be unsuitable for growing crops. The most common interactions with cattle involve daily feeding, cleaning and milking. Many routine husbandry practices involve ear tagging, dehorning, loading, medical operations, artificial insemination, vaccinations and hoof care, as well as training for agricultural shows and preparations. Around the world, Fulani husbandry rests on behavioural techniques, whereas in Europe, cattle are controlled primarily by physical means, such as fences. Breeders use cattle husbandry to reduce tuberculosis susceptibility by selective breeding and maintaining herd health to avoid concurrent disease. In the United States, many cattle are raised intensively, kept in concentrated animal feeding operations, meaning there are at least 700 mature dairy cows or at least 1000 other cattle stabled or confined in a feedlot for "45 days or more in a 12-month period". Population Historically, the cattle population of Britain rose from 9.8 million in 1878 to 11.7 million in 1908, but beef consumption rose much faster. Britain became the "stud farm of the world" exporting livestock to countries where there were no indigenous cattle. In 1929 80% of the meat trade of the world was products of what were originally English breeds. There were nearly 70 million cattle in the US by the early 1930s. Cattle have the largest biomass of any animal species on Earth, at roughly 400 million tonnes, followed closely by Antarctic krill at 379 million tonnes and humans at 373 million tonnes. In 2023, the countries with the most cattle were India with 307.5 million (32.6% of the total), Brazil with 194.4 million, and China with 101.5 million, out of a total of 942.6 million in the world. Economy Cattle are kept on farms to produce meat, milk, and leather, and sometimes to pull carts or farm implements. Meat The meat of adult cattle is known as beef, and that of calves as veal. Other body parts are used as food products, including blood, liver, kidney, heart and oxtail. Approximately 300 million cattle, including dairy animals, are slaughtered each year for food. About a quarter of the world's meat comes from cattle. World cattle meat production in 2021 was 72.3 million tons. Dairy Certain breeds of cattle, such as the Holstein-Friesian, are used to produce milk, much of which is processed into dairy products such as butter, cheese, and yogurt. Dairy cattle are usually kept on specialized dairy farms designed for milk production. Most cows are milked twice per day, with milk processed at a dairy, which may be onsite at the farm or the milk may be shipped to a dairy plant for eventual sale of a dairy product. Lactation is induced in heifers and spayed cows by a combination of physical and psychological stimulation, by drugs, or by a combination of those methods. For mother cows to continue producing milk, they give birth to one calf per year. If the calf is male, it is generally slaughtered at a young age to produce veal. Cows produce milk until three weeks before birth. Over the last fifty years, dairy farming has become more intensive to increase the yield of milk produced by each cow. The Holstein-Friesian is the breed of dairy cow most common in the UK, Europe and the United States. It has been bred selectively to produce the highest yields of milk of any cow. The average in the UK is around 22 litres per day. 
Dairy is a large industry worldwide. In 2023, the 27 European Union countries produced 143 million tons of cow's milk; the United States 104.1 million tons; and India 99.5 million tons. India further produces 94.4 million tons of buffalo milk, making it (in 2023) the world's largest milk producer; its dairy industry employs some 80 million people. Draft animals Oxen are cattle trained as draft animals. Oxen can pull heavier loads and for a longer period of time than horses. Oxen are used worldwide, especially in developing countries. There are some 11 million draft oxen in sub-Saharan Africa, while in 1998 India had over 65 million oxen. At the start of the 21st century, about half the world's crop production depended on land preparation by draft animals. Hides Cattle are not often kept solely for hides, and they are usually a by-product of beef production. Hides are used mainly for leather products such as shoes. In 2012, India was the world's largest producer of cattle hides. Cattle hides account for around 65% of the world's leather production. Health Pests and diseases Cattle are subject to pests including arthropod parasites such as ticks (which can in turn transmit diseases caused by bacteria and protozoa), and diseases caused by pathogens including bacteria and viruses. Some viral diseases are spread by insects - i.e. bluetongue disease is spread by midges. Psoroptic mange is a disabling skin condition caused by mites. Bovine tuberculosis is caused by a bacterium; it causes disease in humans and in wild animals such as deer and badgers. Foot-and-mouth disease is caused by a virus, affects a range of hoofed livestock and is highly contagious. Bovine spongiform encephalopathy is a neurodegenerative disease spread by a prion, a misfolded brain protein, in contaminated meat. Among the intestinal parasites of cattle are Paramphistomum flukes, affecting the rumen, and hookworms in the small intestine. Role of climate change Climate change is expected to exacerbate heat stress in cattle, and for longer periods. Heat-stressed cattle may experience accelerated breakdown of adipose tissue by the liver, causing lipidosis. Cattle eat less when heat stressed, resulting in ruminal acidosis, which can lead to laminitis. Cattle can attempt to deal with higher temperatures by panting more often; this rapidly decreases carbon dioxide concentrations at the price of increasing pH, respiratory alkalosis. To deal with this, cattle are forced to shed bicarbonate through urination, at the expense of rumen buffering. These two pathologies can both cause lameness. Another specific risk is mastitis. This worsens as Calliphora blowflies increase in number with continued warming, spreading mastitis-causing bacteria. Ticks too are likely to increase in temperate zones as the climate warms, increasing the risk of tick-borne diseases. Both beef and milk production are likely to experience declines due to climate change. Impact of cattle husbandry On public health Cattle health is at once a veterinary issue (for animal welfare and productivity), a public health issue (to limit the spread of disease), and a food safety issue (to ensure meat and dairy products are safe to eat). These concerns are reflected in farming regulations. These rules can become political matters, as when it was proposed in the UK in 2011 that milk from tuberculosis-infected cattle should be allowed to enter the food chain. 
Cattle disease attracted attention in the 1980s and 1990s when bovine spongiform encephalopathy (mad cow disease) broke out in the United Kingdom. BSE can cross into humans as the deadly variant Creutzfeldt–Jakob disease; 178 people in the UK had died from it by 2010. On the environment The gut flora of cattle produce methane, a powerful greenhouse gas, as a byproduct of enteric fermentation, with each cow belching out 100kg a year. Additional methane is produced by anaerobic fermentation of stored manure. The FAO estimates that in 2015 around 7% of global greenhouse gas emissions were due to cattle, but this is uncertain. Reducing methane emissions quickly helps limit climate change. Concentrated animal feeding operations in particular produce substantial amounts of wastewater and manure, which can cause environmental harms such as soil erosion, human and animal exposure to toxic chemicals, development of antibiotic resistant bacteria and an increase in E. coli contamination. In many world regions, overgrazing by cattle has reduced biodiversity of the grazed plants and of animals at different trophic levels in the ecosystem. A well documented consequence of overgrazing is woody plant encroachment in rangelands, which significantly reduces the carrying capacity of the land over time. On animal welfare Cattle husbandry practices including branding, castration, dehorning, ear tagging, nose ringing, restraint, tail docking, the use of veal crates, and cattle prods have raised welfare concerns. Stocking density is the number of animals within a specified area. High stocking density can affect cattle health, welfare, productivity, and feeding behaviour. Densely-stocked cattle feed more rapidly and lie down sooner, increasing the risk of teat infection, mastitis, and embryo loss. The stress and negative health impacts induced by high stocking density such as in concentrated animal feeding operations or feedlots, auctions, and transport may be detrimental to cattle welfare. To produce milk from dairy cattle, most calves are separated from their mothers soon after birth and fed milk replacement in order to retain the cows' milk for human consumption. Animal welfare advocates are critical of this practice, stating that this breaks the natural bond between the mother and her calf. The welfare of veal calves is also a concern. Two sports involving cattle are thought to be cruel by animal welfare groups: rodeos and bullfighting. Such groups oppose rodeo activities including bull riding, calf roping and steer roping, stating that rodeos are unnecessary and cause stress, injury, and death to the animals. In Spain, the Running of the bulls faces opposition due to the stress and injuries incurred by the bulls during the event. In culture From early in civilisation, cattle have been used in barter. Cattle play a part in several religions. Veneration of the cow is a symbol of Hindu community identity. Slaughter of cows is forbidden by law in several states of the Indian Union. The ox is one of the 12-year cycle of animals which appear in the Chinese zodiac. The astrological sign Taurus is represented as a bull in the Western zodiac.
Biology and health sciences
Biology
null
2129702
https://en.wikipedia.org/wiki/Phototube
Phototube
A phototube or photoelectric cell is a type of gas-filled or vacuum tube that is sensitive to light. Such a tube is more correctly called a 'photoemissive cell' to distinguish it from photovoltaic or photoconductive cells. Phototubes were previously more widely used but are now replaced in many applications by solid state photodetectors. The photomultiplier tube is one of the most sensitive light detectors, and is still widely used in physics research. Operating principles Phototubes operate according to the photoelectric effect: Incoming photons strike a photocathode, knocking electrons out of its surface, which are attracted to an anode. Thus current is dependent on the frequency and intensity of incoming photons. Unlike photomultiplier tubes, no amplification takes place, so the current through the device is typically of the order of a few microamperes. The light wavelength range over which the device is sensitive depends on the material used for the photoemissive cathode. A caesium-antimony cathode gives a device that is very sensitive in the violet to ultra-violet region with sensitivity falling off to blindness to red light. Caesium on oxidised silver gives a cathode that is most sensitive to infra-red to red light, falling off towards blue, where the sensitivity is low but not zero. Vacuum devices have a near constant anode current for a given level of illumination relative to anode voltage. Gas-filled devices are more sensitive, but the frequency response to modulated illumination falls off at lower frequencies compared to the vacuum devices. The frequency response of vacuum devices is generally limited by the transit time of the electrons from cathode to anode. Applications One major application of the phototube was the reading of optical sound tracks for projected films. Phototubes were used in a variety of light-sensing applications until some were superseded by photoresistors and photodiodes.
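The wavelength cutoff described above follows directly from the photoelectric effect: a photon can only free an electron if its energy exceeds the cathode material's work function. The short sketch below illustrates this relationship; the work-function value used is an assumed, typical figure for a caesium-antimony cathode rather than a number taken from this article.

```python
# Sketch: longest wavelength a photoemissive cathode can respond to.
# A photon frees an electron only if h*f >= work function W, so the cutoff
# wavelength is lambda_max = h*c / W.
PLANCK_H = 6.626e-34          # Planck constant, J*s
LIGHT_SPEED = 2.998e8         # speed of light, m/s
JOULES_PER_EV = 1.602e-19     # conversion factor, J per eV

def cutoff_wavelength_nm(work_function_ev: float) -> float:
    """Wavelength (nm) above which the cathode produces no photocurrent."""
    return PLANCK_H * LIGHT_SPEED / (work_function_ev * JOULES_PER_EV) * 1e9

# Assumed work function of roughly 2 eV for a caesium-antimony cathode:
print(f"cutoff ~ {cutoff_wavelength_nm(2.0):.0f} nm")  # about 620 nm, i.e. blind to red light
```

A lower work function (as with caesium on oxidised silver) pushes this cutoff toward longer, infra-red wavelengths, which is why the choice of cathode material sets the device's spectral range.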
Technology
Components
null
2133664
https://en.wikipedia.org/wiki/Rogun%20Dam
Rogun Dam
The Roghun Dam is an embankment dam under construction on the Vakhsh River in southern Tajikistan. The dam is situated 110 km from Dushanbe. It is one of the planned hydroelectric power plants of the Vakhsh Cascade. Construction of the dam began in the Soviet era, in 1976, but was abandoned in 1993 after the collapse of the Soviet Union. Over three decades only preliminary construction has been carried out on the dam. Due to its controversial state, construction was suspended in August 2012 pending World Bank reports. The project was restarted by the Tajik government in 2016. The power plant's first unit was commissioned in November 2018 and the second in September 2019, both on a lower hydraulic head. The dam has drawn complaints from neighboring Uzbekistan, which fears it will negatively impact its lucrative cotton crops. The dispute over the project has contributed significantly to bitter relations between the two former Soviet republics. History The Roghun Dam was first proposed in 1959 and a technical scheme was developed by 1965. Construction began in 1976; however, the project stalled after the collapse of the Soviet Union. An agreement on finishing the construction was signed between Tajikistan and Russia in 1994. Since the agreement was not implemented, it was denounced by the Tajik parliament. In October 2004, an agreement was signed with RUSAL in which RUSAL agreed to complete the Rogun facility, to build a new aluminum plant and to rebuild the Tursunzade Aluminum Smelter. In February 2007, a new partnership between Russia and Tajikistan to complete the dam was announced, but it was later refused by Russia because of disagreement concerning the controlling stake in the project. In May 2008, Tajikistan announced that construction of the dam had resumed. By December 2010, one of the river diversion tunnels had been renovated and work on the second was expected to commence in June or July 2011. Construction of the dam was suspended in August 2012 pending the World Bank assessment. In 2010, Tajikistan launched an IPO to raise US$1.4 billion to finish construction of the dam. By April 26 of that year the Tajik government had raised just US$184 million, enough for two years of construction. On July 1, 2016, the state commission in charge of the project chose the Italian company Salini Impregilo to carry out the construction for $3.9 billion. The project is broken down into four components, with the most expensive one involving the building of a 335-meter-high rockfill dam, which will entail costs of around $1.95 billion. On October 29, 2016, Tajik president Emomali Rahmon officially launched the construction of the dam. At the ceremony, the river's flow was ceremonially diverted through the reconstructed diversion tunnels. The power plant's first unit was commissioned in November 2018 and the second in September 2019. In mid-July 2022 concrete pouring commenced on the main dam core. Technical description Rogun has been listed as the highest dam in the world, but this is a projected height. In reality the dam had reached only a fraction of that height by 1993, when it was destroyed in a flood. Three projects are under consideration: the original design and two alternatives, all having their advantages and drawbacks. The hydroelectric power plant is expected to have six turbines with a combined capacity of 3,600 MW. When complete, it is expected to produce 17.1 TWh of electricity per year.
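As a rough sanity check on those two figures (a sketch only; the 8,760 hours in a year and the definition of capacity factor are standard assumptions, not values from the project documentation), the expected annual production can be compared with what the installed capacity could deliver if it ran continuously:

```python
# Rough check of the Rogun figures quoted above (illustrative only).
installed_capacity_mw = 3600    # six turbines, combined capacity from the text
expected_output_twh = 17.1      # expected annual production from the text

hours_per_year = 8760           # 365 days * 24 hours (standard assumption)
max_output_twh = installed_capacity_mw * hours_per_year / 1e6  # MWh -> TWh

capacity_factor = expected_output_twh / max_output_twh
print(f"continuous-operation maximum: {max_output_twh:.1f} TWh/year")  # ~31.5 TWh
print(f"implied capacity factor: {capacity_factor:.0%}")               # ~54%
```

An implied factor of roughly half the theoretical maximum is consistent with a large storage hydropower plant whose output follows the seasonal flow of the Vakhsh River rather than running flat out year-round.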
Impact assessment In response to requests from the bordering countries, especially Uzbekistan, the World Bank has financed the Techno-Economic Assessment Study (TEAS), conducted by a consortium of Coyne et Bellier, Electroconsult and IPA Energy + Water Economics, and the Environmental and Social Impact Assessment (ESIA), conducted by Pöyry. The reports, originally slated to be released in February 2012, were delayed until mid-2014. The ESIA was published on 16 June 2014 and the TEAS in July 2014. Overall, the ESIA stated that "Most impacts are rather small and easily mitigated, if mitigation is required at all" and that "There is no impact of the category 'strong negative, mitigation not possible', which would have to be considered as a no-go for the project." All parties, including the Central Asian states, met in Almaty in July 2014 for the 5th Riparian Meeting to discuss the findings of the TEAS and ESIA. International tensions The project has raised tensions with Uzbekistan over a decrease in the downstream water flow the country needs for its irrigated agriculture (particularly cotton). In February 2010, Uzbek Prime Minister Shavkat Mirziyoyev sent a letter to his Tajik counterpart demanding an independent examination of the possible consequences of the dam. During October 2010, Uzbek President Islam Karimov called the Rogun hydropower plants a "stupid project." However, in 2018 Uzbekistan dropped its opposition to the Rogun Dam. "Go ahead and build it, but we hold to certain guarantees in accordance with these conventions that have been signed by you," Uzbek Foreign Minister Abdulaziz Komilov said in a televised appearance on July 5, 2018.
Technology
Dams
null
23170058
https://en.wikipedia.org/wiki/Samsung%20Galaxy
Samsung Galaxy
Samsung Galaxy (stylized as SΛMSUNG Galaxy since 2015, except in Japan, where the Samsung branding was omitted up until 2023; previously stylized as Samsung GALAXY; abbreviated as SG) is a series of computing, Android-based mobile, and wearable devices that have been designed, manufactured and marketed by Samsung Electronics since 29 June 2009. The product line includes the Samsung Galaxy S series of high-end phones, the Galaxy Z series of high-end foldables, the Galaxy A series, Samsung Galaxy Ace, Galaxy F series and Galaxy M series of mid-range phones, the Galaxy Book series of laptops, the Samsung Galaxy Tab series, the Samsung Galaxy Watch series, the Galaxy Buds series and the Galaxy Fit, and the now historical Galaxy Note series of pioneering phablets. Samsung Galaxy devices ship with a user interface called One UI (previous versions were known as Samsung Experience and TouchWiz). However, the Galaxy TabPro S, announced at CES 2016, was the first Galaxy-branded Windows 10 device. The Samsung Galaxy series is noteworthy for its pioneering role in bringing Android into mainstream popularity beginning in the early 2010s. The Galaxy Watch is the first Galaxy-branded smartwatch since the release of later iterations of the Gear smartwatch from 2014 to 2017. In 2020, Samsung added the Galaxy Chromebook 2-in-1 laptop running ChromeOS to the Galaxy branding lineup. The follow-on Galaxy Chromebook 2 was released in 2021. Definitions Categories Current series Galaxy Z series: high-end foldable phones/devices. Galaxy S series: flagship phones. Galaxy A series: mid-range phones; also includes entry-level phones, budget phones and formerly premium mid-range phones. Galaxy M series: a slightly more budget-friendly, online-exclusive alternative to the Galaxy A series. Galaxy F series: also a Galaxy A series alternative, but sold in developing countries. Galaxy C series: premium mid-range versions of the Galaxy A series sold in Asian markets. They succeed the premium mid-range Galaxy A series devices. Galaxy Xcover series: rugged business phones with low specifications. Samsung Galaxy Tab series: divided into three series, Tab S: high-end to mid-range tablets; Tab A: mid-range to low-end tablets; and Tab Active: mid-range rugged tablets. Samsung Galaxy Watch series: divided into three series, Galaxy Watch: base models; Galaxy Watch Classic: premium smartwatches; Galaxy Watch FE: affordable smartwatches based on older Galaxy Watch models. Galaxy Fit: fitness bands and activity trackers. Galaxy Buds: wireless headphones. Galaxy Book: lineup of laptops and 2-in-1 PCs running Windows. Galaxy Chromebook: lineup of Chromebooks made by Samsung. Discontinued series Galaxy Note series: devices with a large screen and a built-in stylus, discontinued in 2022 and merged with the Z series and S Ultra models starting from the S22 Ultra (while the S21 Ultra and Z series support S Pen input, the S Pen is not included in the packaging since they do not have a dedicated slot for the pen, making the S Pen available as an accessory for those devices). Galaxy Quantum series: premium mid-range devices based on Galaxy A8x models sold primarily in South Korea, discontinued in 2022. Galaxy Watch Active: variants of base-model Galaxy Watches that focused more on fitness and wellness, merged with the base models in 2021 starting with the Galaxy Watch 4. Galaxy On series: online-exclusive mid-range phones, discontinued in 2019 in favor of the Galaxy M series.
Galaxy Gear series: lineup of wearable devices that included headsets, earbuds, smartwatches and activity trackers; the headsets were discontinued in 2017 and the rest of the product lineup was discontinued in 2019 in favor of the Galaxy Watch, Galaxy Buds and Galaxy Fit. Galaxy J series: entry-level devices, discontinued in 2019 and merged with the Galaxy A series. Galaxy S Active: rugged versions of base-model S series, discontinued in 2018. Galaxy Neo models: refreshed versions of older Galaxy models with newer hardware but lower specs compared to the original models, discontinued in 2017 and succeeded by the Galaxy S10 Lite, Galaxy Note 10 Lite and Fan Edition models beginning with the Galaxy Note 7 and Galaxy S20 FE. Previous Galaxy Tab devices: Galaxy Tab series sold before 2017 that included the Tab 7.0, Tab 8.9, Tab 10.1, Tab Education, Tab Pro, Tab Note and Tab E. These devices were replaced by the Galaxy Book, Galaxy Tab A and Galaxy Tab S. Galaxy Grand series: mid-range devices, discontinued in 2016 as original models but continued in production as rebranded models of the Galaxy J series until 2019. Galaxy E series: low-end to mid-range devices, discontinued in 2016 in favor of the Galaxy J series and Galaxy A series. Galaxy Core series: low-end devices, discontinued in 2016 in favor of the Galaxy J series. Galaxy Trend series: low-end devices, discontinued in 2016 in favor of the Galaxy J series. Galaxy Mega series: large phablets that lacked an S Pen, replaced by Plus models of the Galaxy S series starting with the Galaxy S6 edge+. Galaxy Ace series: low-end to mid-range devices, discontinued in 2015 in favor of the Galaxy Alpha and later the Galaxy A series. Galaxy Y series: small devices, discontinued in 2015. Galaxy Pocket: also small devices, discontinued in 2015. Galaxy Camera: camera phones, discontinued in 2014 and replaced by the Galaxy S4 Zoom, Galaxy S5 Zoom and Galaxy K Zoom. Galaxy Beam: phones with built-in projectors, discontinued in 2014. Galaxy Mini: small devices, discontinued in 2013 and replaced by the Galaxy Pocket series and Galaxy Y series. Galaxy R: mid-range devices, discontinued in 2012. Hierarchy The main 2024 lineup of Galaxy smartphone models looks like this: Galaxy S24 and Galaxy S24+ Galaxy S24 FE and Galaxy S24 Ultra Galaxy C55 and Galaxy F55 Galaxy A55 and Galaxy M55 Galaxy A35 and Galaxy M35 Galaxy A16 and Galaxy A06 Galaxy A15 and Galaxy M15 Galaxy A05 and Galaxy A05s Galaxy F05 and Galaxy M05 Galaxy Z Fold 6 and Galaxy Z Flip 6 Galaxy Tab S10 Ultra and Galaxy Tab S10+ Model numbers Since September 2013, model numbers of devices in the Samsung Galaxy series are in the "SM-ABCDE" format (excluding the Galaxy J SC-02F, Galaxy Centura SCH-S738C, and SGH-N075T), where A is the model series, B is the device class, C is the generation, D is the device type, and E is the country/region it is made for (if applicable). Previously, from 2009 until September 2013, the model numbers were in the "GT-XXXXX" format.
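As a rough illustration of how such a model number can be split into the fields just described (the fixed field positions and the example value below are simplifying assumptions for illustration, not official Samsung documentation):

```python
# Illustrative sketch: split a post-2013 "SM-ABCDE" model number into the
# fields described above. Real model numbers vary in length and suffix
# conventions, so this is a simplification.
def parse_galaxy_model(model: str) -> dict:
    prefix, _, code = model.partition("-")
    if prefix != "SM" or len(code) < 4:
        raise ValueError(f"not a post-2013 Galaxy model number: {model}")
    return {
        "series": code[0],           # A: model series letter
        "device_class": code[1],     # B: device class
        "generation": code[2],       # C: generation
        "device_type": code[3],      # D: device type
        "region": code[4:] or None,  # E: country/region suffix, if present
    }

# Example with a code of the right shape:
print(parse_galaxy_model("SM-G991B"))
# {'series': 'G', 'device_class': '9', 'generation': '9', 'device_type': '1', 'region': 'B'}
```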
Phones SM-Sxxx – S series model from S22 and later SM-Fxxx – Z series model, and some older F series model SM-Gxxx – S series (S5 - S21), XCover model, and some J series Prime model SM-Nxxx – Note model SM-Jxxx – J series model SM-Axxx – A series model SM-Mxxx – M series model SM-Exxx – F series model GT-Sxxx2/ SM-Gxxx/DS / SM-Gxxx/DD SM-Gxxx2 Dual-SIM "Galaxy Duos" model GT-Nxxx0/GT-Nxxx5 – Galaxy Note 1 and 2 (International 3G/4G, respectively) GT-Nxxx3 – Unlocked Galaxy Note 1 and 2 (US/Canada) GT-Ixxx0/GT-Ixxx5 – Galaxy S4 and earlier models (International 3G/4G LTE, respectively) GT-Ixxx3 – Unlocked Galaxy S4 and earlier models (US/Canada) SGH – GSM handset SPH – Sprint handset SCH – Verizon/US Cellular handset SHV/SHW – Korean handset Tablets SM-Xxxx – Tab A and S models from A8, Active 5, S8 and later SM-Txx0/1/5/6 – mainstream Tab model (Tab 3 to Tab A7 Lite/Active4/S7) SM-Pxx0/5 – mainstream Tab with built-in S Pen stylus model (Note 10.1 2014, Tab A 10.1, etc.) SM-Wxxx – Microsoft Windows model (i.e., Galaxy Book) GT-Nxx00/GT-Pxx20 – older mainstream Tab with built-in S Pen stylus model (Note 8.0 and 10.1, 3G/4G LTE respectively) GT-Nxx13 – older mainstream Tab with built-in S Pen stylus model (Note 8.0 and 10.1, US/Canada Wi-Fi) GT-Nxx10 – older mainstream Tab with built-in S Pen stylus model (Note 8.0 and 10.1, International Wi-Fi) GT-Pxx00/GT-Pxx20 – older mainstream Tab model (Tab 1 to Tab 3, 3G/4G LTE respectively) GT-Pxx13 – older mainstream Tab model (Tab 1 to Tab 3, US/Canada Wi-Fi) GT-Pxx10 – older mainstream Tab model (Tab 1 to Tab 3, International Wi-Fi) GT-Snnn5/GT-Nnnn5/GT-Pnnn5/GT-Innn5/SM-NnnnF/SM-Tnn5/SM-GnnnF – 4G/LTE model Regions A: AT&T P: Sprint R4: US Cellular T: T-Mobile V: Verizon U: USA carrier locked U1: USA factory unlocked N: Korea W: Canada 0: China mainland (phones) C: China mainland (tablets) B: International 5G F: International 4G/LTE H: International 3G Duos or Dual SIM models end with the /DS suffix. Firmware numbering The following is a list of known firmware regions. Korea KS: Korea (phones) KO: Korea (cellular tablets) XX: All Wi-Fi tablets India IN: India (all phones) Americas SQ: USA (carrier locked phones) UE: USA (carrier unlocked phones and Wi-Fi tablets), Canada (Wi-Fi tablets) VL: Canada (all variants except Wi-Fi tablets) UB: Latin America & Caribbean XX: All Wi-Fi tablets China ZC: China mainland (all devices) ZH: Hong Kong/Taiwan (all phones) XX: Hong Kong/Taiwan (all tablets) Background The original Samsung Galaxy was launched in June 2009 as Samsung's first Android powered device. At the time, the brand's flagship smartphone was the Samsung Omnia and its successor, powered by Windows Mobile. Omnia had been the second full-touch Samsung device running the TouchWiz user interface (after the Tocco), but the Galaxy had an unmodified Google Android interface; the TouchWiz UI made its way to the Galaxy series with the Galaxy S. The Galaxy S and its successor Galaxy S II became very successful, eclipsing the company's other lines and operating systems. During the decade, the Galaxy phones "became the company's most-praised products [and] also were among the best-selling smartphones in the world." Devices Phones Samsung Galaxy S series The Galaxy S series is Samsung's flagship line of high-end smartphones. The latest models, the Galaxy S24, S24+, and S24 Ultra, were released in January 2024. Samsung Galaxy Z series The Galaxy Z series is Samsung's line of high-end foldable smartphones, which debuted in 2019 with the Galaxy Z Fold. 
The latest models, the Galaxy Z Flip6 and the Galaxy Z Fold6, were released in July 2024. Samsung Galaxy A series The Galaxy A series is Samsung's line of low- to mid-range smartphones, providing a more affordable alternative to the high-end Galaxy S series with reduced specifications and features. The latest models, the Galaxy A06 and A16, were released in October 2024. Samsung Galaxy C series The Galaxy C series is a line of mid-range devices for specific markets. The latest device released under this line is the Samsung Galaxy C55. Samsung Galaxy M series The Galaxy M series is a line of online-exclusive, low- to mid-range smartphones, considered the successor to the Galaxy J and Galaxy On series. Samsung Galaxy F series The Galaxy F series is a line of online-exclusive, low- to mid-range smartphones sold alongside the M series. Galaxy XCover series The Galaxy XCover series is a line of rugged "business" phones, which have low-end specifications but stronger build quality and durability. The latest model is the Galaxy Xcover 7. Discontinued lines Samsung released multiple series of smartphones, often overlapping with each other. Most of these series were dropped. The Galaxy Note series was a line of high-end devices primarily oriented towards pen computing using Samsung's S Pen. The line was replaced by the Galaxy S Ultra series and Z series starting in 2021 (with the Galaxy S21 Ultra, with the S Pen being an accessory, and the Galaxy S22 Ultra, which brought back the integrated pen and pen slot). The Galaxy Core/Grand series was a line of mid-range devices released between 2013 and 2015. The line was replaced by the J and A series. The Galaxy J series was a line of entry-range phones, replaced by the Galaxy A series in 2019. The Galaxy Mega series was last updated in 2014 with the Galaxy Mega 2. The Galaxy On series was a line of online-exclusive phones. The series was replaced by the Galaxy M series. The Galaxy Pocket series was last updated in 2014 with the Galaxy Pocket 2. The Galaxy Mini series was last updated in 2012 with the Galaxy Mini 2. The Galaxy Trend series was last updated in 2015 with the Galaxy Trend 2 Lite. The Galaxy Ace series was last updated in 2014 with the Galaxy Ace 4. The Galaxy R series was last updated in 2012 with the Galaxy R Style. The Galaxy Young series was a low-end line. It was last updated in 2014 with the Galaxy Young 2. The Galaxy E series was a more affordable alternative to the 2015 A series; it was last updated in 2015. Other phones Tablets Samsung Galaxy Tab series The Galaxy Tab series is a line of Android-powered tablets that debuted in 2010. There are three sub-categories currently under this series: The Galaxy Tab S is a line of mid-range to high-end tablets, with a focus on productivity and pen computing. The Galaxy Tab S10+ and S10 Ultra are the latest devices, released in September 2024. The Galaxy Tab Active is a line of mid-range rugged tablets, with a focus on durability and use in extreme environments. The Galaxy Tab Active 5 is the latest 8" model, released in January 2024. The Galaxy Tab A is a line of low-end tablets. The Galaxy Tab A9 and A9+ are the latest models, released in 2023. Wearables Smartwatches Samsung announced the Galaxy Gear, a smartwatch running Android 4.3, on 4 September 2013. The Galaxy Gear was at the time Samsung's only smartwatch to feature Galaxy branding; subsequent Samsung smartwatches used the Samsung Gear branding until the Gear series was succeeded by the Galaxy Watch series.
In a software update in May 2014, Samsung replaced the operating system of the Galaxy Gear, switching it from Android to Tizen. Samsung's One UI, which runs on newer Galaxy devices released after 2019, became available for the Galaxy Watch on 20 May 2019. After the release of the Galaxy Watch series, Samsung transitioned from Tizen to Google's Wear OS with the Galaxy Watch4. The latest releases in the Galaxy Watch line are the Galaxy Watch Ultra and Galaxy Watch7. Activity trackers Samsung announced the Galaxy Fit, an activity tracker positioned below the Galaxy Watch line. The first iteration was released in 2019. Samsung later announced the Galaxy Fit2, a follow-up to the first tracker from 2019. In 2024, Samsung once again announced a new device in the Fit line, the Galaxy Fit3. Wireless earbuds Samsung announced the Galaxy Buds as the replacement for the Gear IconX. The first iteration was released on 20 February 2019. Subsequent Galaxy Buds iterations are revealed at the annual Galaxy Unpacked event. Laptops and convertibles The Galaxy brand also extends to laptops, notebooks, and 2-in-1 convertibles. The Galaxy Book line consists of products based on Microsoft Windows, while the Galaxy Chromebook line is based on ChromeOS. Other Media player Samsung Galaxy Player Cameras Samsung Galaxy Camera Samsung Galaxy Camera 2 Samsung Galaxy NX Projectors Samsung Galaxy Beam i8520 Samsung Galaxy Beam i8530 Software Samsung Galaxy smartphones run the Android operating system under the Google Mobile Services platform; however, Samsung and third parties have bundled various other software in them too. The TouchWiz interface was used until 2017, replaced by Samsung Experience. This was then replaced by One UI in 2019. The company has created many apps and services under the Galaxy brand specifically for these devices, many of which come preloaded, including the Galaxy Store, which provides apps and customizations. Since late 2019, several Microsoft apps like Outlook also come preloaded on Galaxy devices as a result of a Samsung-Microsoft partnership. Interoperability Samsung has made several tools for making various Galaxy devices, such as phones, tablets and watches, work more closely together. Samsung Flow is a feature allowing content to be synced with a PC, such as notifications, replying to messages and authenticating from a PC, and sharing content. It was announced in November 2014, released in preview form in May 2015 and finally released in May 2016. Microsoft's Phone Link has also come preloaded on Galaxy smartphones since 2019. Another feature named Multi Control allows a Galaxy smartphone to be controlled with a Galaxy Book's keyboard and mouse, and files to be dragged and dropped between them. Device Control is another feature in the quick panel that can control SmartThings and other devices. Release history The following is a table showing the full initial release history of every Galaxy device since 2009. Region locking and CSC codes Starting from the Galaxy Note 3, Samsung phones and tablets contained a warning label stating that the device would only operate with SIM cards from the region the phone was sold in. A spokesperson clarified the policy, stating that it was intended to prevent grey-market reselling, and that it only applied to the first SIM card inserted.
For devices to use a SIM card from other regions, one of the following actions totaling five minutes or longer in length must first be performed with the SIM card from the local region: Make calls on the phone or watch from the Samsung Phone app Use the Call and Text on Other Devices feature to make calls With the launch of the Galaxy S8 series in 2017, that process has changed. Due to the fact that many variants use a Multi-CSC, it will only work with SIM cards from the same CSC group. For example, an AT&T SIM card will not work on cellular-based Galaxy devices sold in Europe and other countries. Over the Horizon "Over the Horizon" is the trademark sound for Samsung smartphone devices, first introduced in 2011 on the Galaxy S II. It was composed by Joong-sam Yun and appears as music in the music library of most Samsung phones released since 2011. Prior to 2011, "Beyond Samsung" served as Samsung's trademark music track, while "Samsung Tune" was used as the default ringtone. The sound appears as the default ringtone, as well as the sound when the phone turns on or off (a snippet is used), and as a notification sound. While the basic composition of the six-note tune has not changed since its inception, various versions of different genres have been introduced as the product line evolved. While the first two versions were created in-house at Samsung, later versions were outsourced to external musicians. The sound has been covered by various popular artists who have released their own arrangements and remixes of the song, such as Quincy Jones, Icona Pop, Suga of BTS, and various K-pop artists. In Samsung's U.S. registration of the trademark for the sound, it is described as "the sound of a bell playing a B4 dotted eighth note, a B4 sixteenth note, an F#5 sixteenth note, a B5 sixteenth note, an A#5 eighth note, and an F#5 half note".
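As a small illustration of the registered note sequence quoted above, the sketch below converts the six pitches to frequencies using standard equal temperament with A4 = 440 Hz; the beat values follow the usual note-length conventions and are not specified by Samsung.

```python
# Sketch: pitches and relative durations of the six-note "Over the Horizon"
# motif as described in the trademark registration (equal temperament, A4 = 440 Hz).
SEMITONES_FROM_A = {"C": -9, "C#": -8, "D": -7, "D#": -6, "E": -5, "F": -4,
                    "F#": -3, "G": -2, "G#": -1, "A": 0, "A#": 1, "B": 2}

def frequency_hz(name: str, octave: int) -> float:
    semitones = SEMITONES_FROM_A[name] + 12 * (octave - 4)
    return 440.0 * 2 ** (semitones / 12)

# (pitch, octave, length in beats): dotted eighth = 0.75, sixteenth = 0.25,
# eighth = 0.5, half = 2.0 (assuming a quarter note equals one beat).
motif = [("B", 4, 0.75), ("B", 4, 0.25), ("F#", 5, 0.25),
         ("B", 5, 0.25), ("A#", 5, 0.5), ("F#", 5, 2.0)]

for name, octave, beats in motif:
    print(f"{name}{octave}: {frequency_hz(name, octave):7.2f} Hz for {beats} beats")
```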
Technology
Specific hardware
null
23173149
https://en.wikipedia.org/wiki/Staphylococcus
Staphylococcus
Staphylococcus is a genus of Gram-positive bacteria in the family Staphylococcaceae from the order Bacillales. Under the microscope, they appear spherical (cocci) and form grape-like clusters. Staphylococcus species are facultative anaerobic organisms (capable of growth both aerobically and anaerobically). The name was coined in 1880 by Scottish surgeon and bacteriologist Alexander Ogston (1844–1929), following the pattern established five years earlier with the naming of Streptococcus. It combines the prefix "staphylo-" (from the Greek for "bunch of grapes") with the suffix "-coccus" (from the Greek for "grain" or "berry"). Staphylococcus has been one of the leading causes of infection in hospitals, and many strains of this bacterium have become antibiotic resistant. Despite strong attempts to get rid of them, Staphylococcus bacteria stay present in hospitals, where they can infect people who are most at risk of infection. Staphylococcus includes at least 44 species. Of these, nine have two subspecies, one has three subspecies, and one has four subspecies. Many species cannot cause disease and reside normally on the skin and mucous membranes of humans and other animals. Staphylococcus species have been found to be nectar-inhabiting microbes. They are also a small component of the soil microbiome. Taxonomy The taxonomy is based on 16S rRNA sequences, and most of the staphylococcal species fall into 11 clusters: S. aureus group – S. argenteus, S. aureus, S. schweitzeri, S. simiae S. auricularis group – S. auricularis S. carnosus group – S. carnosus, S. condimenti, S. debuckii, S. massiliensis, S. piscifermentans, S. simulans S. epidermidis group – S. capitis, S. caprae, S. epidermidis, S. saccharolyticus S. haemolyticus group – S. borealis, S. devriesei, S. haemolyticus, S. hominis S. hyicus-intermedius group – S. agnetis, S. chromogenes, S. cornubiensis, S. felis, S. delphini, S. hyicus, S. intermedius, S. lutrae, S. microti, S. muscae, S. pseudintermedius, S. rostri, S. schleiferi S. lugdunensis group – S. lugdunensis S. saprophyticus group – S. arlettae, S. caeli, S. cohnii, S. equorum, S. gallinarum, S. kloosii, S. leei, S. nepalensis, S. saprophyticus, S. succinus, S. xylosus S. sciuri group – S. fleurettii, S. lentus, S. sciuri, S. stepanovicii, S. vitulinus S. simulans group – S. simulans S. warneri group – S. pasteuri, S. warneri A twelfth group – that of S. caseolyticus – has now been removed to a new genus, Macrococcus, the species of which are currently the closest known relatives of Staphylococcus. Two species were described in 2015 – Staphylococcus argenteus and Staphylococcus schweitzeri – both of which were previously considered variants of S. aureus. A new coagulase-negative species – Staphylococcus edaphicus – has been isolated from Antarctica. This species is probably a member of the S. saprophyticus group. Groups Based on an analysis of orthologous gene content, three groups (A, B and C) have been proposed. Group A includes S. aureus, S. borealis, S. capitis, S. epidermidis, S. haemolyticus, S. hominis, S. lugdunensis, S. pettenkoferi, S. simiae and S. warneri. Group B includes S. arlettae, S. cohnii, S. equorum, S. saprophyticus and S. xylosus. Group C includes S. delphini, S. intermedius and S. pseudintermedius.
Biology and health sciences
Gram-positive bacteria
Plants
23174230
https://en.wikipedia.org/wiki/Calf%20%28leg%29
Calf (leg)
The calf (plural: calves; Latin: sura) is the back portion of the lower leg in human anatomy. The muscles within the calf correspond to the posterior compartment of the leg. The two largest muscles within this compartment are known together as the calf muscle and attach to the heel via the Achilles tendon. Several other, smaller muscles attach to the knee, the ankle, and via long tendons to the toes. Etymology From Middle English calf, kalf, from Old Norse kalfi, possibly derived from the same Germanic root as English calf ("young cow"). Cognate with Icelandic kálfi ("calf of the leg"). Calf and calf of the leg are documented in use in Middle English circa AD 1350 and AD 1425 respectively. Historically, the absence of a calf, meaning a lower leg without a prominent calf muscle, was regarded by some authors as a sign of inferiority, on the grounds that monkeys have no calves and that calves are even less developed among the lower orders of mammals. Structure The calf is composed of the muscles of the posterior compartment of the leg: the gastrocnemius and soleus (composing the triceps surae muscle) and the tibialis posterior. The sural nerve provides innervation. Clinical significance Medical conditions that result in calf swelling, among other symptoms, include deep vein thrombosis, compartment syndrome, Achilles tendon rupture, and varicose veins. Idiopathic leg cramps are common and typically affect the calf muscles at night. Edema is also common and in many cases idiopathic. In a small study of factory workers in good health, wearing compression garments helped to reduce edema and the pain associated with edema. A small study of runners found that wearing knee-high compression stockings while running significantly improved performance. The circumference of the calf has been used to estimate selected health risks. In Spain, a study of 22,000 persons 65 or older found that a smaller calf circumference was associated with a higher risk of undernutrition. In France, a study of 6,265 persons 65 or older found an inverse correlation between calf circumference and carotid plaques. Calf augmentation and restoration is available, using a range of prosthesis devices and surgical techniques. Training and Exercise The calves can be isolated by performing movements involving plantarflexion (pointing the toes down). The two major categories of calf exercises are those that maintain an extended knee and those that maintain a flexed knee. The first category includes movements such as standing calf raises, donkey calf raises, and calf raises performed on a stair. The second category includes movements that maintain a bent knee, such as seated calf raises. Movements with a straight knee target the gastrocnemius muscle more, and movements with a bent knee target the soleus muscle more; however, both variations work both muscles to a large degree. Training the calves relatively close to failure, generally within 0–4 repetitions of technical failure, is commonly recommended. The calves recover quickly, often requiring rest times of as little as 10 seconds and usually no more than 60 seconds. A 1–2 second pause at the top and bottom of the movement puts more emphasis on the muscle and less on the Achilles tendon.
Biology and health sciences
Human anatomy
Health
1471115
https://en.wikipedia.org/wiki/Sandfly
Sandfly
Sandfly or sand fly is a colloquial name for any species or genus of flying, biting, blood-sucking dipteran (fly) encountered in sandy areas. In the United States, sandfly may refer to certain horse flies that are also known as "greenheads" (family Tabanidae), or to members of the family Ceratopogonidae. The bites usually result in a small, intensely itchy bump or welt, which intensifies over a period of 5–7 days before dissipating. Sandfly bites can be distinguished from mosquito bites because sandfly bites are usually found in clusters, as the flies attack animals in groups. Moderate relief is achieved with varying success through the application of over-the-counter products such as Benadryl (ingested) or an analgesic cream such as After Bite (applied topically). Outside the United States, sandfly may refer to members of the subfamily Phlebotominae within the Psychodidae. Biting midges (Ceratopogonidae) are sometimes called sandflies or no-see-ums (no-see-em, noseeum). New Zealand sandflies belong to the genus Austrosimulium, a type of black fly. Among the various sorts of sandfly, only the female bites and sucks the blood of mammals, reptiles and birds; the protein in the blood is necessary for the production of eggs, making the sandfly an anautogenous reproducer. Some sandfly genera of the subfamily Phlebotominae are the primary vectors of leishmaniasis and pappataci fever; both diseases are confusingly referred to as sandfly fever. In Asia, Africa, and Europe, leishmaniasis is spread by sand flies of the genus Phlebotomus; in the Americas, the disease is spread by sandflies of the genus Lutzomyia. Belize and Honduras are notorious in the Caribbean for their sandfly populations, and travel pages frequently warn tourists to bring bug spray containing high concentrations of DEET. Viruses Among the viruses that sandflies can carry is the Chandipura virus, a deadly relative of the rabies virus. An outbreak occurred in India in 2010, followed by a further outbreak recorded in Gujarat in 2024. Protozoa Leishmaniasis, a disease caused by several species of the genus Leishmania, is transmitted by various sandflies. Leishmania donovani causes spiking fevers, hepatosplenomegaly, and pancytopenia. It can be diagnosed through microscopic review by visualizing amastigotes within macrophages, and is treatable with sodium stibogluconate. Bacteria Bartonella bacilliformis, the causal agent of Carrion's disease, is transmitted by different members of the genus Lutzomyia. This disease is restricted to Andean areas of Peru and Ecuador, with historical reports in southern Colombia. Prevention Over-the-counter repellents with high concentrations of DEET or picaridin are proven to work; however, effectiveness seems to differ among individuals, with some people reporting better results with one product than another and others finding neither product effective. This may be partially due to different species living in different areas. A particular extract of lemon eucalyptus oil (not the essential oil) has been shown to be as effective as DEET in various studies. Most information on repellents focuses on mosquitoes, but mosquito repellents are effective for sandflies and midges as well. Cultural views New Zealand sandflies (which are taxonomically blackflies, Simuliidae) have a native Māori legend wherein "the god Tu-te-raki-whanoa had just finished creating the landscape of Fiordland, it was absolutely stunning... 
so stunning that it stopped people from working. They just stood around gazing at the beauty instead. The goddess Hine-nui-te-pō became angry at these unproductive people, so she created the sandfly to bite them and get them moving". These sand flies were able, according to another Māori legend, to revive the dead hero Ha-tupatu.
Biology and health sciences
Flies (Diptera)
Animals
1471182
https://en.wikipedia.org/wiki/Crutch
Crutch
A crutch is a mobility aid that transfers weight from the legs to the upper body. It is often used by people who cannot use their legs to support their weight, for reasons ranging from short-term injuries to lifelong disabilities. History Crutches were used in ancient Egypt. In 1917, Emile Schlick patented the first commercially produced crutch; the design consisted of a walking stick with an upper arm support. Later, A.R. Lofstrand Jr. developed the first crutches with a height-adjustable feature. Over time, the design of crutches has not changed much, and the classic design continues to be the most commonly used. Types There are several types of crutches: Underarm or axillary Axillary crutches are used by placing the pad against the ribcage beneath the armpit and holding the grip, which is below and parallel to the pad. They are usually used to provide support for patients who have temporary restriction on ambulation. With underarm crutches, sometimes a towel or some kind of soft cover is needed to prevent or reduce armpit injury. A condition known as crutch paralysis, or crutch palsy can arise from pressure on nerves in the armpit, or axilla. Specifically, "the brachial plexus in the axilla is often damaged from the pressure of a crutch...In these cases the radial is the nerve most frequently implicated; the ulnar nerve suffers next in frequency." An uncommon type of axillary crutches is the spring-loaded crutch. The underarm pad is a curved design that is open in the front with the grips for the hands shaped for maximum comfort and to reduce the prevalence of overuse injuries. These crutches also contain a spring mechanism at the bottom. The idea behind this design is to allow the user to propel themselves further, resulting in quicker movement from place to place, though research has shown that the difference in speed is very small when comparing standard axillary crutches to spring-loaded crutches. Forearm A forearm crutch (also commonly known as an elbow crutch, Canadian crutch or "Lofstrand" crutch due to a brand by this name) has a cuff at the top that goes around the forearm. It is used by inserting the arm into the cuff and holding the grip. The hinged cuff, most frequently made of plastic or metal, can be a half-circle or a full circle with a V-type opening in the front allowing the forearm to slip out in case of a fall. Forearm crutches are the dominant type used in Europe, whether for short or long term use. Outside of Europe forearm crutches are more likely to be used by users with long-term disabilities, with axillary crutches more common for short-term use. Platform These are less common and used by those with poor hand or grip strength due to arthritis, cerebral palsy, or other conditions. The forearm rests on a horizontal platform and is usually strapped in place with velcro-type straps that allow the platform or trough to release in case of a fall. The hand holds an angled grip which, in addition, should allow adjustment of length from trough to grip and side-to-side sway depending on the user's disability. Leg support These non-traditional crutches are useful for users with an injury or disability affecting one lower leg only. They function by strapping the affected leg into a support frame that simultaneously holds the lower leg clear of the ground while transferring the load from the ground to the user's knee or thigh. This style of crutch has the advantage of not using the hands or arms while walking. 
A claimed benefit is that upper thigh atrophy is also reduced because the affected leg remains in use. Unlike other crutch designs these designs are unusable for pelvic, hip or thigh injuries and in some cases for knee injuries also. Walking sticks or canes serve an identical purpose to crutches, but are held only in the hand and have a limited load bearing capability because of this. Types of gaits One crutch When using one crutch, the crutch may be placed on the side of the unaffected leg or used to bear the load of the affected leg. Four-point gait Those who can tolerate partial weight bearing on both legs usually use the four point gait. The sequence is right crutch, left leg, left crutch, right leg. This is the slowest of all gaits but also the safest in that three of the four points are in contact with the ground at any given time. Two-point gait Those who can tolerate partial weight bearing on both legs but require less support than a four-point gait usually use the two-point gait. The sequence is right crutch with left leg and then left crutch with right leg. Three-point gait The three point gait is usually used by those who cannot bear weight on one leg. Both crutches are advanced while bearing weight on the unaffected leg. Then the unaffected leg is advanced while bearing weight on the crutches. Swing-to gait A person with a non-weight bearing injury generally performs a "swing-to" gait: lifting the affected leg, the user places both crutches in front of himself, and then swings his uninjured leg to meet the crutches. A similar "swing-through" gait is when both legs are advanced in front of the crutches rather than beside them. Stairs When climbing up stairs, the unaffected leg is advanced first, then the affected leg and the crutches are advanced. When descending stairs, the crutches are advanced first and then the affected leg and the unaffected leg. Alternative devices The knee scooter and the wheelchair are possible alternatives for patients who cannot use or do not like crutches. These wheeled devices introduce an additional limitation, however, since most cannot negotiate stairs. Materials Wood Metal alloys (most often steel, aluminium alloys, titanium alloys) Carbon or glass fiber reinforced composites Thermoplastic Carbon-fiber reinforced polymer
Technology
Mobility aids
null
1471836
https://en.wikipedia.org/wiki/Blood%20orange
Blood orange
The blood orange is a variety of orange with crimson, near blood-colored flesh. It is one of the sweet orange varieties (Citrus × sinensis). It is also known as the raspberry orange. The dark flesh color is due to the presence of anthocyanins, a family of polyphenol pigments common to many flowers and fruit, but uncommon in citrus fruits. Chrysanthemin (cyanidin 3-O-glucoside) is the main compound found in red oranges. The flesh develops its characteristic red color when the fruit develops with low temperatures during the night. Sometimes, dark coloring is seen on the exterior of the rind as well. This depends on the variety of blood orange. The skin can be tougher and harder to peel than that of other oranges. Blood oranges have a unique flavor compared to other oranges, being distinctly raspberry-like in addition to the usual citrus notes. The anthocyanin pigments of blood oranges begin accumulating in the vesicles at the edges of the segments, and at the blossom end of the fruit, and continue accumulating in cold storage after harvest. The blood orange is a natural mutation of the orange, which is itself a hybrid, probably between the pomelo and the tangerine. Within Europe, the arancia rossa di Sicilia (red orange of Sicily) has Protected Geographical Status. In the Valencian Community, it was introduced in the second half of the 19th century. Cultivars The three most common types of blood oranges are the Tarocco (native to Italy), the Sanguinello (native to Spain), and the very dark Moro (native to Italy), the newest variety of the three. Other less-common types include Maltaise demi sanguine, Washington Sanguine, Ruby, Doblafina, Delfino, Burris Valencia, Vaccaro, Grosse Ronde, Entrefina, and Sanguinello a Pignu. While also pigmented, Cara cara navels and Vainiglia sanguignos have pigmentation based on lycopene, not anthocyanins as blood oranges do. Moro The Moro is the most colorful of the blood oranges, with a deep red flesh and a rind with a bright red blush. The flavor is stronger and the aroma is more intense than a normal orange. This fruit has a distinct, sweet flavor with a hint of raspberry. This orange possesses a more bitter taste than the 'Tarocco' or the 'Sanguinello'. The 'Moro' variety is believed to have originated at the beginning of the 19th century in the citrus-growing area around Lentini (in the Province of Syracuse in Sicily, Italy) as a bud mutation of the "Sanguinello Moscato". The 'Moro' is a "deep blood orange", meaning that the flesh ranges from orange-veined with ruby coloration, to vermilion, to vivid crimson, to nearly black. Tarocco The name Tarocco is thought to be derived from an exclamation of wonder expressed by the farmer who was shown this fruit by its discoverer. It is a medium-sized fruit and is perhaps the sweetest and most flavorful of the three types. The most popular table orange in Italy, it is thought to have derived from a mutation of the 'Sanguinello'. It is referred to as "half-blood", because the flesh is not accentuated in red pigmentation as much as with the 'Moro' and 'Sanguinello' varieties. It has thin orange skin, slightly blushed in red tones. The Tarocco is one of the world's most popular oranges because of its sweetness (Brix to acid ratio is generally above 12.0) and juiciness. It has the highest vitamin C content of any orange variety grown in the world, mainly on account of the fertile soil surrounding Mount Etna, and it is easy to peel. The 'Tarocco' orange is seedless. 
The University of California, Riverside Citrus Variety Collection has delineated three subcultivars of 'Tarocco'. The 'Bream Tarocco', which was originally donated by Robert Bream of Lindsay, California, produces medium to large fruit with few to no seeds. 'Tarocco #7', or 'CRC 3596 Tarocco', is known for its flavor, but has a rind with little to no coloration. The 'Thermal Tarocco' was donated by A. Newcomb of Thermal Plaza Nursery in Thermal, California. Sanguinello The Sanguinello, also called Sanguinelli in the US (the plural form of its name in Italian), discovered in Spain in 1929, has reddish skin, few seeds, and sweet, tender flesh. 'Sanguinello', the Sicilian late "full-blood" orange, is close in characteristics to the 'Moro'. Where grown in the Northern Hemisphere, it matures in February, but can remain on trees unharvested until April. Fruit can last until the end of May. The peel is compact, and clear yellow with a red tinge. The flesh is orange with multiple blood-colored streaks. History and background Blood oranges may have originated in the southern Mediterranean, where they have been grown since the 18th century. They are a common orange grown in Italy. The anthocyanins – which give the orange its distinct maroon color – will only develop when temperatures are low at night, as during the Mediterranean fall and winter. Blood oranges cultivated in the United States are in season from December to March (Texas), and from November to May (California). As food Some blood orange juice may be somewhat tart; other kinds are sweet while retaining the characteristic blood orange taste. The oranges can also be used to create marmalade, and the zest can be used for baking. A popular Sicilian winter salad is made with sliced blood oranges, sliced bulb fennel, and olive oil. The oranges have also been used to create gelato, sorbet, and Italian soda. Nutrition Raw blood oranges are a rich source (20% or greater of the Daily Value, DV) of vitamin C and dietary fiber, and a moderate source of folate (15% DV), with no other micronutrients in significant content.
Biology and health sciences
Citrus fruits
Plants
1473803
https://en.wikipedia.org/wiki/Pleurotus%20ostreatus
Pleurotus ostreatus
Pleurotus ostreatus, the oyster mushroom, oyster fungus, hiratake, or pearl oyster mushroom is a common edible mushroom. It is one of the more commonly sought wild mushrooms, though it can also be cultivated on straw and other media. Description The mushroom has a broad, fan- or oyster-shaped cap; natural specimens range from white to gray or tan to dark brown; the margin is inrolled when young, and is smooth and often somewhat lobed or wavy. The flesh is white, firm, and varies in thickness due to stipe arrangement. The gills of the mushroom are white to cream, and descend on the stalk if present. If so, the stipe is off-center with a lateral attachment to wood. The spore print of the mushroom is white to lilac-gray, and best viewed on a dark background. The mushroom's stipe is often absent. When present, it is short and thick. It has the bittersweet aroma of benzaldehyde (which is also characteristic of bitter almonds). Similar species It is related to the similarly cultivated Pleurotus eryngii (king oyster mushroom). Other similar species include Pleurocybella porrigens, Hohenbuehelia petaloides, and the hairy-capped Phyllotopsis nidulans. Omphalotus nidiformis is a toxic lookalike found in Australia. In North America, the toxic muscarine-containing Omphalotus olivascens (the western jack-o'-lantern mushroom) and Clitocybe dealbata (the ivory funnel mushroom) both bear a resemblance to P. ostreatus. Some toxic Lentinellus species are similar in appearance, but have gills with jagged edges and finely haired caps. Name Both the Latin and common names refer to the shape of the fruiting body. The Latin pleurotus (side-ear) refers to the sideways growth of the stem with respect to the cap, while the Latin ostreatus (and the English common name, oyster) refers to the shape of the cap, which resembles the bivalve of the same name. The reference to oyster may also derive from the slippery texture of the mushroom. The name grey oyster mushroom may be used for P. ostreatus. Distribution and habitat The oyster mushroom is widespread in many temperate and subtropical forests throughout the world, although it is absent from the Pacific Northwest of North America, being replaced by P. pulmonarius and P. populinus. It is a saprotroph that acts as a primary decomposer of wood, especially deciduous trees, and beech trees in particular. It is a white-rot wood-decay fungus. The standard oyster mushroom can grow in many places, but some other related species, such as the branched oyster mushroom, grow only on trees. They may be found all year round in the UK. While this mushroom is often seen growing on dying hardwood trees, it only appears to be acting saprophytically, rather than parasitically. As the tree dies of other causes, P. ostreatus grows on the rapidly increasing mass of dead and dying wood. They actually benefit the forest by decomposing the dead wood, returning vital elements and minerals to the ecosystem in a form usable to other plants and organisms. Oyster mushrooms bioaccumulate lithium. Ecology Predatory behavior on nematodes has evolved independently in all major fungal lineages. P. ostreatus is one of at least 700 known nematophagous mushrooms. Its mycelia can kill and digest nematodes, which is believed to be a way in which the mushroom obtains nitrogen. Uses Commercial cultivation of this mushroom first began in Germany as a subsistence measure during World War I, and it is now grown commercially around the world for food. 
Culinary Oyster mushrooms are used in Czech, Polish, and Slovak contemporary cuisine in soups and stews in a similar fashion to meat, as well as breaded to become a vegetarian alternative to the kotlet in Polish dishes. The oyster mushroom is a choice edible, and is a delicacy in Japanese, Korean and Chinese cuisine. It is frequently served on its own, in soups, stuffed, or in stir-fry recipes with soy sauce. Oyster mushrooms may be used in sauces, such as vegetarian oyster sauce. The mushroom's taste has been described as mild with a slight odor similar to anise. The oyster mushroom is best when picked young; as the mushroom ages, the flesh becomes tough and the flavor becomes acrid and unpleasant. Other uses The pearl oyster mushroom is also used to create mycelium bricks, mycelium furniture, and leather-like products. Oyster mushrooms can also be used industrially for mycoremediation purposes. Oyster mushrooms were used to treat soil that had been polluted with diesel oil. The mushroom was able to convert 95% of the oil into non-toxic compounds. P. ostreatus is also capable of growing upon and degrading oxo-biodegradable plastic bags; it can also contribute to the degradation of green polyethylene.
Biology and health sciences
Edible fungi
null
1474467
https://en.wikipedia.org/wiki/Compton%20wavelength
Compton wavelength
The Compton wavelength is a quantum mechanical property of a particle, defined as the wavelength of a photon whose energy is the same as the rest energy of that particle (see mass–energy equivalence). It was introduced by Arthur Compton in 1923 in his explanation of the scattering of photons by electrons (a process known as Compton scattering). The standard Compton wavelength λ of a particle of mass m is given by λ = h/(mc), where h is the Planck constant and c is the speed of light. The corresponding frequency f is given by f = mc²/h, and the angular frequency ω is given by ω = mc²/ħ. The CODATA 2022 value for the Compton wavelength of the electron is approximately 2.426 × 10⁻¹² m. Other particles have different Compton wavelengths. Reduced Compton wavelength The reduced Compton wavelength ƛ (barred lambda) is defined as the Compton wavelength divided by 2π: ƛ = λ/(2π) = ħ/(mc), where ħ is the reduced Planck constant. Role in equations for massive particles The inverse reduced Compton wavelength is a natural representation for mass on the quantum scale, and as such, it appears in many of the fundamental equations of quantum mechanics. The reduced Compton wavelength appears in the relativistic Klein–Gordon equation for a free particle: ∇²ψ − (1/c²) ∂²ψ/∂t² = (mc/ħ)² ψ. It appears in the Dirac equation (the following is an explicitly covariant form employing the Einstein summation convention): −iγ^μ ∂_μ ψ + (mc/ħ) ψ = 0. The reduced Compton wavelength is also present in Schrödinger's equation, although this is not readily apparent in traditional representations of the equation. The following is the traditional representation of Schrödinger's equation for an electron in a hydrogen-like atom: iħ ∂ψ/∂t = −(ħ²/2m) ∇²ψ − (Ze²/(4πε₀r)) ψ. Dividing through by ħc and rewriting in terms of the fine-structure constant α, one obtains: (i/c) ∂ψ/∂t = −(ƛ/2) ∇²ψ − (αZ/r) ψ. Distinction between reduced and non-reduced The reduced Compton wavelength is a natural representation of mass on the quantum scale and is used in equations that pertain to inertial mass, such as the Klein–Gordon and Schrödinger equations. Equations that pertain to the wavelengths of photons interacting with mass use the non-reduced Compton wavelength. A particle of mass m has a rest energy of E = mc². The Compton wavelength for this particle is the wavelength of a photon of the same energy. For photons of frequency f, energy is given by E = hf = hc/λ, which yields the Compton wavelength formula λ = h/(mc) if solved for λ. Limitation on measurement The Compton wavelength expresses a fundamental limitation on measuring the position of a particle, taking into account quantum mechanics and special relativity. This limitation depends on the mass of the particle. To see how, note that we can measure the position of a particle by bouncing light off it – but measuring the position accurately requires light of short wavelength. Light with a short wavelength consists of photons of high energy. If the energy of these photons exceeds mc², when one hits the particle whose position is being measured the collision may yield enough energy to create a new particle of the same type. This renders moot the question of the original particle's location. This argument also shows that the reduced Compton wavelength is the cutoff below which quantum field theory – which can describe particle creation and annihilation – becomes important. The above argument can be made a bit more precise as follows. Suppose we wish to measure the position of a particle to within an accuracy Δx. 
Then the uncertainty relation for position and momentum says that Δx Δp ≥ ħ/2, so the uncertainty in the particle's momentum satisfies Δp ≥ ħ/(2Δx). Using the relativistic relation between momentum and energy, E² = (pc)² + (mc²)², when Δp exceeds mc the uncertainty in energy is greater than mc², which is enough energy to create another particle of the same type. But we must exclude this greater energy uncertainty. Physically, this is excluded by the creation of one or more additional particles to keep the momentum uncertainty of each particle at or below mc. In particular the minimum uncertainty is when the scattered photon has limit energy equal to the incident observing energy. It follows that there is a fundamental minimum for Δx: Δx ≥ (1/2) (ħ/(mc)). Thus the uncertainty in position must be greater than half of the reduced Compton wavelength ħ/(mc). Relationship to other constants Typical atomic lengths, wave numbers, and areas in physics can be related to the reduced Compton wavelength for the electron, ƛₑ = ħ/(mₑc) ≈ 3.86 × 10⁻¹³ m, and the electromagnetic fine-structure constant α ≈ 1/137. The Bohr radius is related to the Compton wavelength by: a₀ = ƛₑ/α ≈ 5.29 × 10⁻¹¹ m. The classical electron radius is about 3 times larger than the proton radius, and is written: rₑ = αƛₑ ≈ 2.82 × 10⁻¹⁵ m. The Rydberg constant, having dimensions of linear wavenumber, is written: R∞ = α²/(4πƛₑ) ≈ 1.097 × 10⁷ m⁻¹. This yields the sequence: rₑ = αƛₑ = α²a₀ = α³/(4πR∞). For fermions, the reduced Compton wavelength sets the cross-section of interactions. For example, the cross-section for Thomson scattering of a photon from an electron is equal to σT = (8π/3) α² ƛₑ² ≈ 6.65 × 10⁻²⁹ m², which is roughly the same as the cross-sectional area of an iron-56 nucleus. For gauge bosons, the Compton wavelength sets the effective range of the Yukawa interaction: since the photon has no mass, electromagnetism has infinite range. The Planck mass is the order of mass for which the Compton wavelength and the Schwarzschild radius are the same, when their value is close to the Planck length (ℓP). The Schwarzschild radius is proportional to the mass, whereas the Compton wavelength is proportional to the inverse of the mass. The Planck mass and length are defined by: mP = √(ħc/G) and ℓP = √(ħG/c³). Geometrical interpretation A geometrical origin of the Compton wavelength has been demonstrated using semiclassical equations describing the motion of a wavepacket. In this case, the Compton wavelength is equal to the square root of the quantum metric, a metric describing the quantum space.
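As a numerical illustration of the relations above, the following short Python sketch computes the electron's Compton and reduced Compton wavelengths and then derives the Bohr radius, classical electron radius, Rydberg constant, and Thomson cross-section through the fine-structure constant. The hard-coded constants are CODATA 2018 values quoted here only for illustration.

```python
import math

# CODATA 2018 values, quoted for illustration.
h = 6.62607015e-34        # Planck constant, J*s (exact by SI definition)
hbar = h / (2 * math.pi)  # reduced Planck constant, J*s
c = 299792458.0           # speed of light, m/s (exact)
m_e = 9.1093837015e-31    # electron mass, kg
alpha = 7.2973525693e-3   # fine-structure constant (dimensionless)

# Compton wavelength and reduced Compton wavelength of the electron
lambda_C = h / (m_e * c)          # ~2.426e-12 m
lambda_bar = hbar / (m_e * c)     # ~3.862e-13 m

# Lengths and wavenumbers related through the fine-structure constant
bohr_radius = lambda_bar / alpha          # a0 ~ 5.29e-11 m
classical_e_radius = alpha * lambda_bar   # r_e ~ 2.82e-15 m
rydberg = alpha**2 / (2 * lambda_C)       # R_inf ~ 1.097e7 1/m

# Thomson scattering cross-section, (8*pi/3) * (alpha * lambda_bar)^2
sigma_T = (8 * math.pi / 3) * (alpha * lambda_bar) ** 2   # ~6.65e-29 m^2

print(f"Compton wavelength:         {lambda_C:.4e} m")
print(f"Reduced Compton wavelength: {lambda_bar:.4e} m")
print(f"Bohr radius:                {bohr_radius:.4e} m")
print(f"Classical electron radius:  {classical_e_radius:.4e} m")
print(f"Rydberg constant:           {rydberg:.4e} 1/m")
print(f"Thomson cross-section:      {sigma_T:.4e} m^2")
```

Running the sketch reproduces the approximate values quoted in the text: 2.426 × 10⁻¹² m, 3.86 × 10⁻¹³ m, 5.29 × 10⁻¹¹ m, 2.82 × 10⁻¹⁵ m, 1.097 × 10⁷ m⁻¹, and 6.65 × 10⁻²⁹ m².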
Physical sciences
Quantum mechanics
Physics
1474542
https://en.wikipedia.org/wiki/Animal%20embryonic%20development
Animal embryonic development
In developmental biology, animal embryonic development, also known as animal embryogenesis, is the developmental stage of an animal embryo. Embryonic development starts with the fertilization of an egg cell (ovum) by a sperm cell (spermatozoon). Once fertilized, the ovum becomes a single diploid cell known as a zygote. The zygote undergoes mitotic divisions with no significant growth (a process known as cleavage) and cellular differentiation, leading to development of a multicellular embryo after passing through an organizational checkpoint during mid-embryogenesis. In mammals, the term refers chiefly to the early stages of prenatal development, whereas the terms fetus and fetal development describe later stages. The main stages of animal embryonic development are as follows: The zygote undergoes a series of cell divisions (called cleavage) to form a structure called a morula. The morula develops into a structure called a blastula through a process called blastulation. The blastula develops into a structure called a gastrula through a process called gastrulation. The gastrula then undergoes further development, including the formation of organs (organogenesis). The embryo then transforms into the next stage of development, the nature of which varies among different animal species (examples of possible next stages include a fetus and a larva). Fertilization and the zygote The egg cell is generally asymmetric, having an animal pole (future ectoderm). It is covered with protective envelopes, with different layers. The first envelope – the one in contact with the membrane of the egg – is made of glycoproteins and is known as the vitelline membrane (zona pellucida in mammals). Different taxa show different cellular and acellular envelopes englobing the vitelline membrane. Fertilization is the fusion of gametes to produce a new organism. In animals, the process involves a sperm fusing with an ovum, which eventually leads to the development of an embryo. Depending on the animal species, the process can occur within the body of the female in internal fertilization, or outside in the case of external fertilization. The fertilized egg cell is known as the zygote. To prevent more than one sperm fertilizing the egg (polyspermy), fast block and slow block to polyspermy are used. Fast block, the membrane potential rapidly depolarizing and then returning to normal, happens immediately after an egg is fertilized by a single sperm. Slow block begins in the first few seconds after fertilization and is when the release of calcium causes the cortical reaction, in which various enzymes are released from cortical granules in the eggs plasma membrane, causing the expansion and hardening of the outside membrane, preventing more sperm from entering. Cleavage and morula Cell division with no significant growth, producing a cluster of cells that is the same size as the original zygote, is called cleavage. At least four initial cell divisions occur, resulting in a dense ball of at least sixteen cells called the morula. In the early mouse embryo, the sister cells of each division remain connected during interphase by microtubule bridges. The different cells derived from cleavage, up to the blastula stage, are called blastomeres. Depending mostly on the amount of yolk in the egg, the cleavage can be holoblastic (total) or meroblastic (partial). 
Holoblastic cleavage occurs in animals with little yolk in their eggs, such as humans and other mammals who receive nourishment as embryos from the mother, via the placenta or milk, such as might be secreted from a marsupium. Meroblastic cleavage occurs in animals whose eggs have more yolk (i.e. birds and reptiles). Because cleavage is impeded in the vegetal pole, there is an uneven distribution and size of cells, being more numerous and smaller at the animal pole of the zygote. In holoblastic eggs, the first cleavage always occurs along the vegetal-animal axis of the egg, and the second cleavage is perpendicular to the first. From here the spatial arrangement of blastomeres can follow various patterns, due to different planes of cleavage, in various organisms: The end of cleavage is known as midblastula transition and coincides with the onset of zygotic transcription. In amniotes, the cells of the morula are at first closely aggregated, but soon they become arranged into an outer or peripheral layer, the trophoblast, which does not contribute to the formation of the embryo proper, and an inner cell mass, from which the embryo is developed. Fluid collects between the trophoblast and the greater part of the inner cell-mass, and thus the morula is converted into a vesicle, called the blastodermic vesicle. The inner cell mass remains in contact, however, with the trophoblast at one pole of the ovum; this is named the embryonic pole, since it indicates the location where the future embryo will develop. Formation of the blastula After the seventh cleavage has produced 128 cells, the morula becomes a blastula. The blastula is usually a spherical layer of cells (the blastoderm) surrounding a fluid-filled or yolk-filled cavity the blastocoel. Mammals at this stage form a structure called the blastocyst, characterized by an inner cell mass that is distinct from the surrounding blastula. The blastocyst is similar in structure to the blastula but their cells have different fates. In the mouse, primordial germ cells arise from the inner cell mass (the epiblast) as a result of extensive genome-wide reprogramming. Reprogramming involves global DNA demethylation facilitated by the DNA base excision repair pathway as well as chromatin reorganization, and results in cellular totipotency. Before gastrulation, the cells of the trophoblast become differentiated into two layers: The outer layer forms a syncytium (i.e., a layer of protoplasm studded with nuclei, but showing no evidence of subdivision into cells), termed the syncytiotrophoblast, while the inner layer, the cytotrophoblast, consists of well-defined cells. As already stated, the cells of the trophoblast do not contribute to the formation of the embryo proper; they form the ectoderm of the chorion and play an important part in the development of the placenta. On the deep surface of the inner cell mass, a layer of flattened cells, called the endoderm, is differentiated and quickly assumes the form of a small sac, called the yolk sac. Spaces appear between the remaining cells of the mass and, by the enlargement and coalescence of these spaces, a cavity called the amniotic cavity is gradually developed. The floor of this cavity is formed by the embryonic disk, which is composed of a layer of prismatic cells – the embryonic ectoderm, derived from the inner cell mass and lying in apposition with the endoderm. Formation of the germ layers The embryonic disc becomes oval and then pear-shaped, the wider end being directed forward. 
Towards the narrow, posterior end, an opaque primitive streak, is formed and extends along the middle of the disc for about half of its length; at the anterior end of the streak there is a knob-like thickening termed the primitive node or knot, (known as Hensen's knot in birds). A shallow groove, the primitive groove, appears on the surface of the streak, and the anterior end of this groove communicates by means of an aperture, the blastopore, with the yolk sac. The primitive streak is produced by a thickening of the axial part of the ectoderm, the cells of which multiply, grow downward, and blend with those of the subjacent endoderm. From the sides of the primitive streak a third layer of cells, the mesoderm, extends laterally between the ectoderm and endoderm; the caudal end of the primitive streak forms the cloacal membrane. The blastoderm now consists of three layers, an outer ectoderm, a middle mesoderm, and an inner endoderm; each has distinctive characteristics and gives rise to certain tissues of the body. For many mammals, it is sometime during formation of the germ layers that implantation of the embryo in the uterus of the mother occurs. Formation of the gastrula During gastrulation cells migrate to the interior of the blastula, subsequently forming two (in diploblastic animals) or three (triploblastic) germ layers. The embryo during this process is called a gastrula. The germ layers are referred to as the ectoderm, mesoderm and endoderm. In diploblastic animals only the ectoderm and the endoderm are present.* Among different animals, different combinations of the following processes occur to place the cells in the interior of the embryo: Epiboly – expansion of one cell sheet over other cells Ingression – migration of individual cells into the embryo (cells move with pseudopods) Invagination – infolding of cell sheet into embryo, forming the mouth, anus, and archenteron. Delamination – splitting or migration of one sheet into two sheets Involution – inturning of cell sheet over the basal surface of an outer layer Polar proliferation – Cells at the polar ends of the blastula/gastrula proliferate, mostly at the animal pole. Other major changes during gastrulation: Heavy RNA transcription using embryonic genes; up to this point the RNAs used were maternal (stored in the unfertilized egg). Cells start major differentiation processes, losing their totipotentiality. In most animals, a blastopore is formed at the point where cells are migrating inward. Two major groups of animals can be distinguished according to the blastopore's fate. In deuterostomes the anus forms from the blastopore, while in protostomes it develops into the mouth. Formation of the early nervous system – neural groove, tube and notochord In front of the primitive streak, two longitudinal ridges, caused by a folding up of the ectoderm, make their appearance, one on either side of the middle line formed by the streak. These are named the neural folds; they commence some little distance behind the anterior end of the embryonic disk, where they are continuous with each other, and from there gradually extend backward, one on either side of the anterior end of the primitive streak. Between these folds is a shallow median groove, the neural groove. The groove gradually deepens as the neural folds become elevated, and ultimately the folds meet and coalesce in the middle line and convert the groove into a closed tube, the neural tube or canal, the ectodermal wall of which forms the rudiment of the nervous system. 
After the coalescence of the neural folds over the anterior end of the primitive streak, the blastopore no longer opens on the surface but into the closed canal of the neural tube, and thus a transitory communication, the neurenteric canal, is established between the neural tube and the primitive digestive tube. The coalescence of the neural folds occurs first in the region of the hind brain, and from there extends forward and backward; toward the end of the third week, the front opening (anterior neuropore) of the tube finally closes at the anterior end of the future brain, and forms a recess that is in contact, for a time, with the overlying ectoderm; the hinder part of the neural groove presents for a time a rhomboidal shape, and to this expanded portion the term sinus rhomboidalis has been applied. Before the neural groove is closed, a ridge of ectodermal cells appears along the prominent margin of each neural fold; this is termed the neural crest or ganglion ridge, and from it the spinal and cranial nerve ganglia and the ganglia of the sympathetic nervous system are developed. By the upward growth of the mesoderm, the neural tube is ultimately separated from the overlying ectoderm. The cephalic end of the neural groove exhibits several dilatations that, when the tube is closed, assume the form of the three primary brain vesicles, and correspond, respectively, to the future forebrain (prosencephalon), midbrain (mesencephalon), and hindbrain (rhombencephalon) (Fig. 18). The walls of the vesicles are developed into the nervous tissue and neuroglia of the brain, and their cavities are modified to form its ventricles. The remainder of the tube forms the spinal cord (medulla spinalis); from its ectodermal wall the nervous and neuroglial elements of the spinal cord are developed, while the cavity persists as the central canal. Formation of the early septum The extension of the mesoderm takes place throughout the whole of the embryonic and extra-embryonic areas of the ovum, except in certain regions. One of these is seen immediately in front of the neural tube. Here the mesoderm extends forward in the form of two crescentic masses, which meet in the middle line so as to enclose behind them an area that is devoid of mesoderm. Over this area, the ectoderm and endoderm come into direct contact with each other and constitute a thin membrane, the buccopharyngeal membrane, which forms a septum between the primitive mouth and pharynx. Early formation of the heart and other primitive structures In front of the buccopharyngeal area, where the lateral crescents of mesoderm fuse in the middle line, the pericardium is afterward developed, and this region is therefore designated the pericardial area. A second region where the mesoderm is absent, at least for a time, is that immediately in front of the pericardial area. This is termed the proamniotic area, and is the region where the proamnion is developed; in humans, however, it appears that a proamnion is never formed. A third region is at the hind end of the embryo, where the ectoderm and endoderm come into apposition and form the cloacal membrane. Somitogenesis Somitogenesis is the process by which somites (primitive segments) are produced. These segmented tissue blocks differentiate into skeletal muscle, vertebrae, and dermis of all vertebrates. Somitogenesis begins with the formation of somitomeres (whorls of concentric mesoderm) marking the future somites in the presomitic mesoderm (unsegmented paraxial). 
The presomitic mesoderm gives rise to successive pairs of somites, identical in appearance that differentiate into the same cell types but the structures formed by the cells vary depending upon the anteroposterior (e.g., the thoracic vertebrae have ribs, the lumbar vertebrae do not). Somites have unique positional values along this axis and it is thought that these are specified by the Hox homeotic genes. Toward the end of the second week after fertilization, transverse segmentation of the paraxial mesoderm begins, and it is converted into a series of well-defined, more or less cubical masses, also known as the somites, which occupy the entire length of the trunk on either side of the middle line from the occipital region of the head. Each segment contains a central cavity (known as a [myocoel), which, however, is soon filled with angular and spindle-shape cells. The somites lie immediately under the ectoderm on the lateral aspect of the neural tube and notochord, and are connected to the lateral mesoderm by the intermediate cell mass. Those of the trunk may be arranged in the following groups, viz.: cervical 8, thoracic 12, lumbar 5, sacral 5, and coccygeal from 5 to 8. Those of the occipital region of the head are usually described as being four in number. In mammals, somites of the head can be recognized only in the occipital region, but a study of the lower vertebrates leads to the belief that they are present also in the anterior part of the head and that, altogether, nine segments are represented in the cephalic region. Organogenesis At some point after the different germ layers are defined, organogenesis begins. The first stage in vertebrates is called neurulation, where the neural plate folds forming the neural tube (see above). Other common organs or structures that arise at this time include the heart and somites (also above), but from now on embryogenesis follows no common pattern among the different taxa of the animalia. In most animals organogenesis, along with morphogenesis, results in a larva. The hatching of the larva, which must then undergo metamorphosis, marks the end of embryonic development.
Biology and health sciences
Animal reproduction
Biology
1474961
https://en.wikipedia.org/wiki/Adverse%20effect
Adverse effect
An adverse effect is an undesired harmful effect resulting from a medication or other intervention, such as surgery. An adverse effect may be termed a "side effect", when judged to be secondary to a main or therapeutic effect. The term complication is similar to adverse effect, but the latter is typically used in pharmacological contexts, or when the negative effect is expected or common. If the negative effect results from an unsuitable or incorrect dosage or procedure, this is called a medical error and not an adverse effect. Adverse effects are sometimes referred to as "iatrogenic" because they are generated by a physician/treatment. Some adverse effects occur only when starting, increasing or discontinuing a treatment. Using a drug or other medical intervention which is contraindicated may increase the risk of adverse effects. Adverse effects may cause complications of a disease or procedure and negatively affect its prognosis. They may also lead to non-compliance with a treatment regimen. Adverse effects of medical treatment resulted in 142,000 deaths in 2013 up from 94,000 deaths in 1990 globally. The harmful outcome is usually indicated by some result such as morbidity, mortality, alteration in body weight, levels of enzymes, loss of function, or as a pathological change detected at the microscopic, macroscopic or physiological level. It may also be indicated by symptoms reported by a patient. Adverse effects may cause a reversible or irreversible change, including an increase or decrease in the susceptibility of the individual to other chemicals, foods, or procedures, such as drug interactions. Classification In terms of drugs, adverse events may be defined as: "Any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product and which does not necessarily have to have a causal relationship with this treatment." In clinical trials, a distinction is made between an adverse event and a serious adverse event. Generally, any event which causes death, permanent damage, birth defects, or requires hospitalization is considered a serious adverse event. The results of trials are often included in the labelling of the medication to provide information both for patients and the prescribing physicians. The term "life-threatening" in the context of a serious adverse event refers to an event in which the patient was at risk of death at the time of the event; it does not refer to an event which hypothetically might have caused death if it were more severe. Reporting systems In many countries, adverse effects are required by law to be reported, researched in clinical trials and included into the patient information accompanying medical devices and drugs for sale to the public. Investigators in human clinical trials are obligated to report these events in clinical study reports. Research suggests that these events are often inadequately reported in publicly available reports. Because of the lack of these data and uncertainty about methods for synthesising them, individuals conducting systematic reviews and meta-analyses of therapeutic interventions often unknowingly overemphasise health benefit. To balance the overemphasis on benefit, scholars have called for more complete reporting of harm from clinical trials. United Kingdom The Yellow Card Scheme is a United Kingdom initiative run by the Medicines and Healthcare products Regulatory Agency (MHRA) and the Commission on Human Medicines (CHM) to gather information on adverse effects to medicines. 
This includes all licensed medicines, from medicines issued on prescription to medicines bought over the counter from a supermarket. The scheme also includes all herbal supplements and unlicensed medicines found in cosmetic treatments. Adverse drug reactions (ADRs) can be reported by a number of health care professionals including physicians, pharmacists and nurses, as well as patients. United States In the United States several reporting systems have been built, such as the Vaccine Adverse Event Reporting System (VAERS), the Manufacturer and User Facility Device Experience Database (MAUDE) and the Special Nutritionals Adverse Event Monitoring System. MedWatch is the main reporting center, operated by the Food and Drug Administration. Australia In Australia, adverse effect reporting is administered by the Adverse Drug Reactions Advisory Committee (ADRAC), a subcommittee of the Australian Drug Evaluation Committee (ADEC). Reporting is voluntary, and ADRAC requests healthcare professionals to report all adverse reactions to its current drugs of interest, and serious adverse reactions to any drug. ADRAC publishes the Australian Adverse Drug Reactions Bulletin every two months. The Government's Quality Use of Medicines program is tasked with acting on this reporting to reduce and minimize the number of preventable adverse effects each year. New Zealand Adverse reaction reporting is an important component of New Zealand's pharmacovigilance activities. The Centre for Adverse Reactions Monitoring (CARM) in Dunedin is New Zealand's national monitoring centre for adverse reactions. It collects and evaluates spontaneous reports of adverse reactions to medicines, vaccines, herbal products and dietary supplements from health professionals in New Zealand. Currently the CARM database holds over 80,000 reports and provides New Zealand-specific information on adverse reactions to these products, and serves to support clinical decision making when unusual symptoms are thought to be therapy related Canada In Canada, adverse reaction reporting is an important component of the surveillance of marketed health products conducted by the Health Products and Food Branch (HPFB) of Health Canada. Within HPFB, the Marketed Health Products Directorate leads the coordination and implementation of consistent monitoring practices with regards to assessment of signals and safety trends, and risk communications concerning regulated marketed health products. MHPD also works closely with international organizations to facilitate the sharing of information. Adverse reaction reporting is mandatory for the industry and voluntary for consumers and health professionals. Limitations In principle, medical professionals are required to report all adverse effects related to a specific form of therapy. In practice, it is at the discretion of the professional to determine whether a medical event is at all related to the therapy. As a result, routine adverse effects reporting often may not include long-term and subtle effects that may ultimately be attributed to a therapy. Part of the difficulty is identifying the source of a complaint. A headache in a patient taking medication for influenza may be caused by the underlying disease or may be an adverse effect of the treatment. In patients with end-stage cancer, death is a very likely outcome and whether the drug is the cause or a bystander is often difficult to discern. 
By situation Medical procedures Surgery may have a number of undesirable or harmful effects, such as infection, hemorrhage, inflammation, scarring, loss of function, or changes in local blood flow. They can be reversible or irreversible, and a compromise must be found by the physician and the patient between the beneficial or life-saving consequences of surgery versus its adverse effects. For example, a limb may be lost to amputation in case of untreatable gangrene, but the patient's life is saved. Presently, one of the greatest advantages of minimally invasive surgery, such as laparoscopic surgery, is the reduction of adverse effects. Other nonsurgical physical procedures, such as high-intensity radiation therapy, may cause burns and alterations in the skin. In general, these therapies try to avoid damage to healthy tissues while maximizing the therapeutic effect. Vaccination may have adverse effects due to the nature of its biological preparation, sometimes using attenuated pathogens and toxins. Common adverse effects may be fever, malaise and local reactions in the vaccination site. Very rarely, there is a serious adverse effect, such as eczema vaccinatum, a severe, sometimes fatal complication which may result in persons who have eczema or atopic dermatitis. Diagnostic procedures may also have adverse effects, depending much on whether they are invasive, minimally invasive or noninvasive. For example, allergic reactions to radiocontrast materials often occur, and a colonoscopy may cause the perforation of the intestinal wall. Medications Adverse effects can occur as a collateral or side effect of many interventions, but they are particularly important in pharmacology, due to its wider, and sometimes uncontrollable, use by way of self-medication. Thus, responsible drug use becomes an important issue here. Adverse effects, like therapeutic effects of drugs, are a function of dosage or drug levels at the target organs, so they may be avoided or decreased by means of careful and precise pharmacokinetics, the change of drug levels in the organism in function of time after administration. Adverse effects may also be caused by drug interaction. This often occurs when patients fail to inform their physician and pharmacist of all the medications they are taking, including herbal and dietary supplements. The new medication may interact agonistically or antagonistically (potentiate or decrease the intended therapeutic effect), causing significant morbidity and mortality around the world. Drug-drug and food-drug interactions may occur, and so-called "natural drugs" used in alternative medicine can have dangerous adverse effects. For example, extracts of St John's wort (Hypericum perforatum), a phytotherapic used for treating mild depression are known to cause an increase in the cytochrome P450 enzymes responsible for the metabolism and elimination of many drugs, so patients taking it are likely to experience a reduction in blood levels of drugs they are taking for other purposes, such as cancer chemotherapeutic drugs, protease inhibitors for HIV and hormonal contraceptives. The scientific field of activity associated with drug safety is increasingly government-regulated, and is of major concern for the public, as well as to drug manufacturers. The distinction between adverse and nonadverse effects is a major undertaking when a new drug is developed and tested before marketing it. This is done in toxicity studies to determine the nonadverse effect level (NOAEL). 
These studies are used to define the dosage to be used in human testing (phase I), as well as to calculate the maximum admissible daily intake. Imperfections in clinical trials, such as insufficient number of patients or short duration, sometimes lead to public health disasters, such as those of fenfluramine (the so-called fen-phen episode), thalidomide and, more recently, of cerivastatin (Baycol, Lipobay) and rofecoxib (Vioxx), where drastic adverse effects were observed, such as teratogenesis, pulmonary hypertension, stroke, heart disease, neuropathy, and a significant number of deaths, causing the forced or voluntary withdrawal of the drug from the market. Most drugs have a large list of nonsevere or mild adverse effects which do not rule out continued usage. These effects, which have a widely variable incidence according to individual sensitivity, include nausea, dizziness, diarrhea, malaise, vomiting, headache, dermatitis, dry mouth, etc. These can be considered a form of pseudo-allergic reaction, as not all users experience these effects; many users experience none at all. The Medication Appropriateness Tool for Comorbid Health Conditions in Dementia (MATCH-D) warns that people with dementia are more likely to experience adverse effects, and that they are less likely to be able to reliably report symptoms. Examples with specific medications Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug (this is a case where the adverse effect has been used legally and illegally for performing abortions) Addiction to many sedatives and analgesics, such as diazepam, morphine, etc. Birth defects associated with thalidomide Bleeding of the intestine associated with aspirin therapy Cardiovascular disease associated with COX-2 inhibitors (i.e. Vioxx) Deafness and kidney failure associated with gentamicin (an antibiotic) Death, following sedation, in children using propofol (Diprivan) Depression or hepatic injury caused by interferon Diabetes caused by atypical antipsychotic medications (neuroleptic psychiatric drugs) Diarrhea caused by the use of orlistat (Xenical) Erectile dysfunction associated with many drugs, such as antidepressants Fever associated with vaccination Glaucoma associated with corticosteroid-based eye drops Hair loss and anemia may be caused by chemotherapy against cancer, leukemia, etc. Headache following spinal anaesthesia Hypertension in ephedrine users, which prompted FDA to remove the dietary supplement status of ephedra extracts Insomnia caused by stimulants, methylphenidate (Ritalin), Adderall, etc. Lactic acidosis associated with the use of stavudine (Zerit, for HIV therapy) or metformin (for diabetes) Mania caused by corticosteroids Liver damage from paracetamol Melasma and thrombosis associated with use of estrogen-containing hormonal contraception, such as the combined oral contraceptive pill Priapism associated with the use of sildenafil Rhabdomyolysis associated with statins (anticholesterol drugs) Seizures caused by withdrawal from benzodiazepines Drowsiness or increase in appetite due to antihistamine use. Some antihistamines are used in sleep aids explicitly because they cause drowsiness. 
Stroke or heart attack associated with sildenafil (Viagra), when used with nitroglycerin Suicide, an increased tendency associated with the use of fluoxetine and other selective serotonin reuptake inhibitor (SSRI) antidepressants Tardive dyskinesia associated with the use of metoclopramide and many antipsychotic medications Controversies Sometimes, putative medical adverse effects are regarded as controversial and generate heated discussions in society and lawsuits against drug manufacturers. One example is the controversy over whether autism is linked to the MMR vaccine (or to thiomersal, a mercury-based preservative used in some vaccines). No link has been found in several large studies, and despite the removal of thiomersal from most early childhood vaccines beginning with those manufactured in 2003, the rate of autism has not decreased as would be expected if it had been the causative agent. Another instance is the potential adverse effects of silicone breast implants, which led to class actions brought by tens of thousands of plaintiffs against manufacturers of gel-based implants, due to allegations of damage to the immune system that have not been conclusively proven. In 1998, Dow Corning settled its remaining suits for $3.2 billion and went into bankruptcy. Due to the exceedingly high impact on public health of widely used medications, such as hormonal contraception and hormone replacement therapy, which may affect millions of users, even marginal probabilities of adverse effects of a severe nature, such as breast cancer, have led to public outcry and changes in medical therapy, although their benefits largely surpass the statistical risks.
Biology and health sciences
General concepts_2
Health
1475860
https://en.wikipedia.org/wiki/Green%20bean
Green bean
Green beans are young, unripe fruits of various cultivars of the common bean (Phaseolus vulgaris), although immature or young pods of the runner bean (Phaseolus coccineus), yardlong bean (Vigna unguiculata subsp. sesquipedalis), and hyacinth bean (Lablab purpureus) are used in a similar way. Green beans are known by many common names, including French beans, string beans (although most modern varieties are "stringless"), and snap beans or simply "snaps." In the Philippines, they are also known as "Baguio beans" or "" to distinguish them from yardlong beans. They are distinguished from the many other varieties of beans in that green beans are harvested and consumed with their enclosing pods before the bean seeds inside have fully matured. An analogous practice is the harvest and consumption of unripened pea pods, as is done with snow peas or sugar snap peas. Uses As common food in many countries, green beans are sold fresh, canned, and frozen. They can be eaten raw or steamed, boiled, stir-fried, or baked. They are commonly cooked in other dishes, such as soups, stews, and casseroles. Green beans can be pickled, similarly to cucumbers. A dish with green beans common throughout the northern US, particularly at Thanksgiving, is green bean casserole, a dish of green beans, cream of mushroom soup, and French-fried onions. Nutrition Raw green beans are 90% water, 7% carbohydrates, 2% protein, and contain negligible fat (table). In a reference amount, raw green beans supply 31 calories and are a moderate source (range 10–19% of the Daily Value) of vitamin C, vitamin K, vitamin B6, and manganese, while other micronutrients are in low supply (table). Domestication The green bean (Phaseolus vulgaris) originated in Central and South America, where there is evidence that it has been cultivated in Mexico and Peru for thousands of years. Characteristics The first "stringless" bean was bred in 1894 by Calvin Keeney, called the "father of the stringless bean," while working in Le Roy, New York. Most modern green bean varieties do not have strings. Plant Green beans are classified by growth habit into two major groups, "bush" (or "dwarf") beans and "pole" (or "climbing") beans. Bush beans are short plants, growing to not more than in height, often without requiring supports. They generally reach maturity and produce all of their fruit in a relatively short period, then cease to produce. Owing to this concentrated production and ease of mechanized harvesting, bush-type beans are those most often grown on commercial farms. Bush green beans are usually cultivars of the common bean (Phaseolus vulgaris). Pole beans have a climbing habit and produce a twisting vine, which must be supported by "poles," trellises, or other means. Pole beans may be common beans (Phaseolus vulgaris), runner beans (Phaseolus coccineus) or yardlong beans (Vigna unguiculata subsp. sesquipedalis). Half-runner beans have both bush and pole characteristics, and are sometimes classified separately from bush and pole varieties. Their runners can be about 3–10 feet long. Varieties Over 130 varieties (cultivars) of edible pod beans are known. Varieties specialized for use as green beans, selected for the succulence and flavor of their green pods, are the ones usually grown in the home vegetable garden, and many varieties exist. Beans with various pod colors (green, purple, red, or streaked.) are collectively known as snap beans, while green beans are exclusively green. 
Pod shapes range from thin and circular ("fillet" types) to wide and flat ("romano" types) and more common types in between. The three most commonly known types of green beans are string or snap beans, which may be round or have a flat pod; stringless or French beans, which lack a tough, fibrous string running along the length of the pod; and runner beans, which belong to a separate species, Phaseolus coccineus. Green beans may have a purple rather than green pod, which changes to green when cooked. Yellow-podded green beans are also known as wax beans. Wax bean cultivars are commonly of the bush or dwarf form. All of the following varieties have green pods and are Phaseolus vulgaris unless otherwise specified: Bush (dwarf) types Pole (climbing) types Production In 2020, world production of green beans was 23 million tonnes, with China accounting for 77% of the total (table). Gallery
Biology and health sciences
Fabales
null
33284883
https://en.wikipedia.org/wiki/Tetraethyl%20pyrophosphate
Tetraethyl pyrophosphate
Tetraethyl pyrophosphate, abbreviated TEPP, is an organophosphate compound with the formula . It is the tetraethyl derivative of pyrophosphate (P2O7^4−). It is a colorless oil that solidifies near room temperature. It is used as an insecticide. The compound hydrolyzes rapidly. Applications TEPP is an insecticide effective against aphids, mites, spiders, mealybugs, leafhoppers, lygus bugs, thrips, leafminers, and many other pests. TEPP and other organophosphates are the most widely used pesticides in the U.S. due to their effectiveness and relatively small impact on the environment, because this organophosphate breaks down so easily. TEPP has been used as a treatment for myasthenia gravis, an autoimmune disease; the treatment produced an increase in strength. Synthesis The synthesis by De Clermont and Moschnin was based on the earlier work by Alexander Williamson (who is well known for the Williamson ether synthesis). Their synthesis made use of ethyl iodide and silver salts to form esters in combination with pyrophosphate. Commercial routes to TEPP often use methods developed by Schrader, Woodstock, and Toy. Triethyl phosphate reacts with phosphorus oxychloride (Schrader's method) or phosphorus pentoxide (Woodstock's method). Alternatively, controlled hydrolysis of diethyl phosphorochloridate delivers the compound: The related tetrabenzylpyrophosphate is prepared by dehydration of dibenzylphosphoric acid: Hydrolysis TEPP and most of the other organophosphates are susceptible to hydrolysis. The product is diethyl phosphate. Toxicity TEPP is bioactive as an acetylcholinesterase inhibitor. It reacts with the serine hydroxyl group at the active site, preventing this enzyme from acting on its normal substrate, the neurotransmitter acetylcholine. TEPP is highly toxic to all warm-blooded animals, including humans. Three types of effects on these animals have come to light in laboratory studies. DERMAL: LD50 = 2.4 mg/kg (male rat) ORAL: LD50 = 1.12 mg/kg (rat) Death is mostly due to respiratory failure or, in some cases, cardiac arrest. The route of absorption may account for the range of effects on certain systems. For cold-blooded animals the effects are slightly different. In a study with frogs, acute exposure caused a decrease in the number of erythrocytes in the blood. There was also a reduction of white blood cells, especially the neutrophil granulocytes and lymphocytes. There was no visible damage to the blood vessels to explain the loss of blood cells. Furthermore, there were no signs such as hypersalivation or lacrimation as in warm-blooded animals, though there was hypotonia leading to paralysis. History It was first synthesized by Wladimir Moschnin in 1854 while working with Adolphe Wurtz. A fellow student, Philippe de Clermont, is often incorrectly credited as the discoverer of TEPP, despite his own acknowledgment of Moschnin's primacy in two publications. Ignorance of the potential toxicity of TEPP is evidenced by De Clermont himself, who described TEPP as having a burning taste and a peculiar odor. Even though TEPP was repeatedly synthesized by other chemists in the years that followed, no adverse effects were observed until the 1930s. Furthermore, Philippe de Clermont was never reported ill by his family before his death at the age of 90. In the meantime, organophosphorus chemistry developed considerably through the work of A. W. von Hofmann, Carl Arnold August Michaelis and Aleksandr Arbuzov. 
It was not until 1932 that the first adverse effects of compounds similar to TEPP were recognized. Willy Lange and Gerda von Krueger were the first to report such effects in their article (published in German). Starting in 1935, the German government gathered information about new toxic substances, some of which were to be classified as secret by the German Ministry of Defence. Gerhard Schrader, who became famous for his studies of organophosphorus insecticides and nerve gases, was one of the chemists studying TEPP. In his studies, in particular those of the biological aspects, he noticed that this reagent could possibly be used as an insecticide. This would make the classification of the compound as secret disadvantageous for commercial firms. Around the beginning of the Second World War, TEPP was discovered to be an inhibitor of cholinesterases. Schrader referred to the studies by Eberhard Gross, who was the first to recognize the mechanism of action of TEPP in 1939. More experiments were conducted, including those of Hans Gremels, who confirmed Gross's work. Gremels was also involved in the development of nerve gases at that time. His studies involved several species of animals and human volunteers. Around that same time, atropine was discovered as a possible antidote for the anticholinesterase activity of TEPP. After the Second World War, Schrader was among many German scientists who were interrogated by English scientists, among others. During the war, the English had been developing chemical weapons of their own to surprise their enemies. In these interrogations, the existence of TEPP and other insecticides was disclosed. The existence of nerve gases, although also disclosed by Schrader, was kept secret by the military.
Technology
Pest and disease control
null
5337437
https://en.wikipedia.org/wiki/Gradient%20theorem
Gradient theorem
The gradient theorem, also known as the fundamental theorem of calculus for line integrals, says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. The theorem is a generalization of the second fundamental theorem of calculus to any curve in a plane or space (generally n-dimensional) rather than just the real line. If is a differentiable function and a differentiable curve in which starts at a point and ends at a point , then where denotes the gradient vector field of . The gradient theorem implies that line integrals through gradient fields are path-independent. In physics this theorem is one of the ways of defining a conservative force. By placing as potential, is a conservative field. Work done by conservative forces does not depend on the path followed by the object, but only the end points, as the above equation shows. The gradient theorem also has an interesting converse: any path-independent vector field can be expressed as the gradient of a scalar field. Just like the gradient theorem itself, this converse has many striking consequences and applications in both pure and applied mathematics. Proof If is a differentiable function from some open subset to and is a differentiable function from some closed interval to (Note that is differentiable at the interval endpoints and . To do this, is defined on an interval that is larger than and includes .), then by the multivariate chain rule, the composite function is differentiable on : for all in . Here the denotes the usual inner product. Now suppose the domain of contains the differentiable curve with endpoints and . (This is oriented in the direction from to ). If parametrizes for in (i.e., represents as a function of ), then where the definition of a line integral is used in the first equality, the above equation is used in the second equality, and the second fundamental theorem of calculus is used in the third equality. Even if the gradient theorem (also called fundamental theorem of calculus for line integrals) has been proved for a differentiable (so looked as smooth) curve so far, the theorem is also proved for a piecewise-smooth curve since this curve is made by joining multiple differentiable curves so the proof for this curve is made by the proof per differentiable curve component. Examples Example 1 Suppose is the circular arc oriented counterclockwise from to . Using the definition of a line integral, This result can be obtained much more simply by noticing that the function has gradient , so by the Gradient Theorem: Example 2 For a more abstract example, suppose has endpoints , , with orientation from to . For in , let denote the Euclidean norm of . If is a real number, then Here the final equality follows by the gradient theorem, since the function is differentiable on if . If then this equality will still hold in most cases, but caution must be taken if γ passes through or encloses the origin, because the integrand vector field will fail to be defined there. However, the case is somewhat different; in this case, the integrand becomes , so that the final equality becomes . Note that if , then this example is simply a slight variant of the familiar power rule from single-variable calculus. Example 3 Suppose there are point charges arranged in three-dimensional space, and the -th point charge has charge and is located at position in . We would like to calculate the work done on a particle of charge as it travels from a point to a point in . 
Using Coulomb's law, we can easily determine that the force on the particle at position will be Here denotes the Euclidean norm of the vector in , and , where is the vacuum permittivity. Let be an arbitrary differentiable curve from to . Then the work done on the particle is Now for each , direct computation shows that Thus, continuing from above and using the gradient theorem, We are finished. Of course, we could have easily completed this calculation using the powerful language of electrostatic potential or electrostatic potential energy (with the familiar formulas ). However, we have not yet defined potential or potential energy, because the converse of the gradient theorem is required to prove that these are well-defined, differentiable functions and that these formulas hold (see below). Thus, we have solved this problem using only Coulomb's law, the definition of work, and the gradient theorem. Converse of the gradient theorem The gradient theorem states that if the vector field is the gradient of some scalar-valued function (i.e., if is conservative), then is a path-independent vector field (i.e., the integral of over some piecewise-differentiable curve depends only on the end points). This theorem has a powerful converse: It is straightforward to show that a vector field is path-independent if and only if the integral of the vector field over every closed loop in its domain is zero. Thus the converse can alternatively be stated as follows: If the integral of over every closed loop in the domain of is zero, then is the gradient of some scalar-valued function. Proof of the converse Suppose is an open, path-connected subset of , and is a continuous and path-independent vector field. Fix some element of , and define by Here is any (differentiable) curve in originating at and terminating at . We know that is well-defined because is path-independent. Let be any nonzero vector in . By the definition of the directional derivative, To calculate the integral within the final limit, we must parametrize . Since is path-independent, is open, and is approaching zero, we may assume that this path is a straight line, and parametrize it as for . Now, since , the limit becomes where the first equality is from the definition of the derivative together with the fact that the integral is equal to 0 at = 0, and the second equality is from the first fundamental theorem of calculus. Thus we have a formula for , (one of the ways to represent the directional derivative) where is arbitrary; for (see its full definition above), its directional derivative with respect to is where the first two equalities just show different representations of the directional derivative. According to the definition of the gradient of a scalar function , , thus we have found a scalar-valued function whose gradient is the path-independent vector field (i.e., is a conservative vector field), as desired. Example of the converse principle To illustrate the power of this converse principle, we cite an example that has significant physical consequences. In classical electromagnetism, the electric force is a path-independent force; i.e., the work done on a particle that has returned to its original position within an electric field is zero (assuming that no changing magnetic fields are present). Therefore, the above theorem implies that the electric force field is conservative (here is some open, path-connected subset of that contains a charge distribution). 
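In symbols, this path independence amounts to the vanishing of the work integral around every closed loop; the following is a standard way of writing the condition, with the electric force written here as F_E and U denoting the domain purely for illustration:

\[ \oint_{C} \mathbf{F}_{E} \cdot d\mathbf{l} \;=\; 0 \qquad \text{for every closed loop } C \subset U , \]

so by the converse of the gradient theorem there is a scalar-valued function on U whose gradient is \(\mathbf{F}_{E}\).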
Following the ideas of the above proof, we can set some reference point in , and define a function by Using the above proof, we know is well-defined and differentiable, and (from this formula we can use the gradient theorem to easily derive the well-known formula for calculating work done by conservative forces: ). This function is often referred to as the electrostatic potential energy of the system of charges in (with reference to the zero-of-potential ). In many cases, the domain is assumed to be unbounded and the reference point is taken to be "infinity", which can be made rigorous using limiting techniques. This function is an indispensable tool used in the analysis of many physical systems. Generalizations Many of the critical theorems of vector calculus generalize elegantly to statements about the integration of differential forms on manifolds. In the language of differential forms and exterior derivatives, the gradient theorem states that for any 0-form, , defined on some differentiable curve (here the integral of over the boundary of the is understood to be the evaluation of at the endpoints of γ). Notice the striking similarity between this statement and the generalized Stokes’ theorem, which says that the integral of any compactly supported differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole of , i.e., This powerful statement is a generalization of the gradient theorem from 1-forms defined on one-dimensional manifolds to differential forms defined on manifolds of arbitrary dimension. The converse statement of the gradient theorem also has a powerful generalization in terms of differential forms on manifolds. In particular, suppose is a form defined on a contractible domain, and the integral of over any closed manifold is zero. Then there exists a form such that . Thus, on a contractible domain, every closed form is exact. This result is summarized by the Poincaré lemma.
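For reference, the two statements compared in this section can be written explicitly; these are standard formulations, with the particular symbols chosen here only for illustration:

\[ \int_{\gamma} \nabla \varphi \cdot d\mathbf{r} \;=\; \varphi(\mathbf{q}) - \varphi(\mathbf{p}) \qquad \text{(gradient theorem, for a curve } \gamma \text{ from } \mathbf{p} \text{ to } \mathbf{q}\text{),} \]

\[ \int_{\partial \Omega} \omega \;=\; \int_{\Omega} d\omega \qquad \text{(generalized Stokes' theorem).} \]

The first is the special case of the second in which \(\omega\) is the 0-form \(\varphi\) and \(\Omega\) is the curve \(\gamma\).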
Mathematics
Multivariable and vector calculus
null
40119056
https://en.wikipedia.org/wiki/Crustacean
Crustacean
Crustaceans (from Latin meaning: "those with shells" or "crusted ones") are invertebrate animals that constitute one group of arthropods that are a part of the subphylum Crustacea (), a large, diverse group of mainly aquatic arthropods including decapods (shrimps, prawns, crabs, lobsters and crayfish), seed shrimp, branchiopods, fish lice, krill, remipedes, isopods, barnacles, copepods, opossum shrimps, amphipods and mantis shrimp. The crustacean group can be treated as a subphylum under the clade Mandibulata. It is now well accepted that the hexapods (insects and entognathans) emerged deep in the Crustacean group, with the completed pan-group referred to as Pancrustacea. The three classes Cephalocarida, Branchiopoda and Remipedia are more closely related to the hexapods than they are to any of the other crustaceans (oligostracans and multicrustaceans). The 67,000 described species range in size from Stygotantulus stocki at , to the Japanese spider crab with a leg span of up to and a mass of . Like other arthropods, crustaceans have an exoskeleton, which they moult to grow. They are distinguished from other groups of arthropods, such as insects, myriapods and chelicerates, by the possession of biramous (two-parted) limbs, and by their larval forms, such as the nauplius stage of branchiopods and copepods. Most crustaceans are free-living aquatic animals, but some are terrestrial (e.g. woodlice, sandhoppers), some are parasitic (e.g. Rhizocephala, fish lice, tongue worms) and some are sessile (e.g. barnacles). The group has an extensive fossil record, reaching back to the Cambrian. More than 7.9 million tons of crustaceans per year are harvested by fishery or farming for human consumption, consisting mostly of shrimp and prawns. Krill and copepods are not as widely fished, but may be the animals with the greatest biomass on the planet, and form a vital part of the food chain. The scientific study of crustaceans is known as carcinology (alternatively, malacostracology, crustaceology or crustalogy), and a scientist who works in carcinology is a carcinologist. Anatomy The body of a crustacean is composed of segments, which are grouped into three regions: the cephalon or head, the pereon or thorax, and the pleon or abdomen. The head and thorax may be fused together to form a cephalothorax, which may be covered by a single large carapace. The crustacean body is protected by the hard exoskeleton, which must be moulted for the animal to grow. The shell around each somite can be divided into a dorsal tergum, ventral sternum and a lateral pleuron. Various parts of the exoskeleton may be fused together. Each somite, or body segment can bear a pair of appendages: on the segments of the head, these include two pairs of antennae, the mandibles and maxillae; the thoracic segments bear legs, which may be specialised as pereiopods (walking legs) and maxillipeds (feeding legs). Malacostraca and Remipedia (and the hexapods) have abdominal appendages. All other classes of crustaceans have a limbless abdomen, except from a telson and caudal rami which is present in many groups. The abdomen in malacostracans bears pleopods, and ends in a telson, which bears the anus, and is often flanked by uropods to form a tail fan. The number and variety of appendages in different crustaceans may be partly responsible for the group's success. 
Crustacean appendages are typically biramous, meaning they are divided into two parts; this includes the second pair of antennae, but not the first, which is usually uniramous, the exception being in the Class Malacostraca where the antennules may be generally biramous or even triramous. It is unclear whether the biramous condition is a derived state which evolved in crustaceans, or whether the second branch of the limb has been lost in all other groups. Trilobites, for instance, also possessed biramous appendages. The main body cavity is an open circulatory system, where blood is pumped into the haemocoel by a heart located near the dorsum. Malacostraca have haemocyanin as the oxygen-carrying pigment, while copepods, ostracods, barnacles and branchiopods have haemoglobins. The alimentary canal consists of a straight tube that often has a gizzard-like "gastric mill" for grinding food and a pair of digestive glands that absorb food; this structure goes in a spiral format. Structures that function as kidneys are located near the antennae. A brain exists in the form of ganglia close to the antennae, and a collection of major ganglia is found below the gut. In many decapods, the first (and sometimes the second) pair of pleopods are specialised in the male for sperm transfer. Many terrestrial crustaceans (such as the Christmas Island red crab) mate seasonally and return to the sea to release the eggs. Others, such as woodlice, lay their eggs on land, albeit in damp conditions. In most decapods, the females retain the eggs until they hatch into free-swimming larvae. Ecology Most crustaceans are aquatic, living in either marine or freshwater environments, but a few groups have adapted to life on land, such as terrestrial crabs, terrestrial hermit crabs, and woodlice. Marine crustaceans are as ubiquitous in the oceans as insects are on land. Most crustaceans are also motile, moving about independently, although a few taxonomic units are parasitic and live attached to their hosts (including sea lice, fish lice, whale lice, tongue worms, and Cymothoa exigua, all of which may be referred to as "crustacean lice"), and adult barnacles live a sessile life – they are attached headfirst to the substrate and cannot move independently. Some branchiurans are able to withstand rapid changes of salinity and will also switch hosts from marine to non-marine species. Krill are the bottom layer and most important part of the food chain in Antarctic animal communities. Some crustaceans are significant invasive species, such as the Chinese mitten crab, Eriocheir sinensis, and the Asian shore crab, Hemigrapsus sanguineus. Since the opening of the Suez Canal, close to 100 species of crustaceans from the Red Sea and the Indo-Pacific realm have established themselves in the eastern Mediterranean sub-basin, with often significant impact on local ecosystems. Life cycle Mating system Most crustaceans have separate sexes, and reproduce sexually. In fact, a recent study explains how the male T. californicus decide which females to mate with by dietary differences, preferring when the females are algae-fed instead of yeast-fed. A small number are hermaphrodites, including barnacles, remipedes, and Cephalocarida. Some may even change sex during the course of their life. Parthenogenesis is also widespread among crustaceans, where viable eggs are produced by a female without needing fertilisation by a male. 
This occurs in many branchiopods, some ostracods, some isopods, and certain "higher" crustaceans, such as the Marmorkrebs crayfish. Eggs In many crustaceans, the fertilised eggs are released into the water column, while others have developed a number of mechanisms for holding on to the eggs until they are ready to hatch. Most decapods carry the eggs attached to the pleopods, while peracarids, notostracans, anostracans, and many isopods form a brood pouch from the carapace and thoracic limbs. Female Branchiura do not carry eggs in external ovisacs but attach them in rows to rocks and other objects. Most leptostracans and krill carry the eggs between their thoracic limbs; some copepods carry their eggs in special thin-walled sacs, while others have them attached together in long, tangled strings. Larvae Crustaceans exhibit a number of larval forms, of which the earliest and most characteristic is the nauplius. This has three pairs of appendages, all emerging from the young animal's head, and a single naupliar eye. In most groups, there are further larval stages, including the zoea (pl. zoeæ or zoeas). This name was given to it when naturalists believed it to be a separate species. It follows the nauplius stage and precedes the post-larva. Zoea larvae swim with their thoracic appendages, as opposed to nauplii, which use cephalic appendages, and megalopa, which use abdominal appendages for swimming. It often has spikes on its carapace, which may assist these small organisms in maintaining directional swimming. In many decapods, due to their accelerated development, the zoea is the first larval stage. In some cases, the zoea stage is followed by the mysis stage, and in others, by the megalopa stage, depending on the crustacean group involved. Providing camouflage against predators, the otherwise black eyes in several forms of swimming larvae are covered by a thin layer of crystalline isoxanthopterin that gives their eyes the same color as the surrounding water, while tiny holes in the layer allow light to reach the retina. As the larvae mature into adults, the layer migrates to a new position behind the retina where it works as a backscattering mirror that increases the intensity of light passing through the eyes, as seen in many nocturnal animals. DNA repair In an effort to understand whether DNA repair processes can protect crustaceans against DNA damage, basic research was conducted to elucidate the repair mechanisms used by Penaeus monodon (black tiger shrimp). Repair of DNA double-strand breaks was found to be predominantly carried out by accurate homologous recombinational repair. Another, less accurate process, microhomology-mediated end joining, is also used to repair such breaks. The expression pattern of DNA repair related and DNA damage response genes in the intertidal copepod Tigriopus japonicus was analyzed after ultraviolet irradiation. This study revealed increased expression of proteins associated with the DNA repair processes of non-homologous end joining, homologous recombination, base excision repair and DNA mismatch repair. Classification and phylogeny The name "crustacean" dates from the earliest works to describe the animals, including those of Pierre Belon and Guillaume Rondelet, but the name was not used by some later authors, including Carl Linnaeus, who included crustaceans among the "Aptera" in his . The earliest nomenclatural valid work to use the name "Crustacea" was Morten Thrane Brünnich's in 1772, although he also included chelicerates in the group. 
The subphylum Crustacea comprises almost 67,000 described species, which is thought to be just to of the total number, as most species remain as yet undiscovered. Although most crustaceans are small, their morphology varies greatly and includes both the largest arthropod in the world – the Japanese spider crab with a leg span of – and the smallest, the 100-micrometre-long (0.004 in) Stygotantulus stocki. Despite their diversity of form, crustaceans are united by the special larval form known as the nauplius. The exact relationships of the Crustacea to other taxa are not completely settled. Studies based on morphology led to the Pancrustacea hypothesis, in which Crustacea and Hexapoda (insects and allies) are sister groups. More recent studies using DNA sequences suggest that Crustacea is paraphyletic, with the hexapods nested within a larger Pancrustacea clade. The traditional classification of Crustacea based on morphology recognised four to six classes. Bowman and Abele (1982) recognised 652 extant families and 38 orders, organised into six classes: Branchiopoda, Remipedia, Cephalocarida, Maxillopoda, Ostracoda, and Malacostraca. Martin and Davis (2001) updated this classification, retaining the six classes but including 849 extant families in 42 orders. Despite outlining the evidence that Maxillopoda was non-monophyletic, they retained it as one of the six classes, although they did suggest that Maxillopoda could be replaced by elevating its subclasses to classes. Since then, phylogenetic studies have confirmed the polyphyly of Maxillopoda and the paraphyletic nature of Crustacea with respect to Hexapoda. Recent classifications recognise ten to twelve classes in Crustacea or Pancrustacea, with several former maxillopod subclasses now recognised as classes (e.g. Thecostraca, Tantulocarida, Mystacocarida, Copepoda, Branchiura and Pentastomida). The following cladogram shows the updated relationships between the different extant groups of the paraphyletic Crustacea in relation to the class Hexapoda. According to this diagram, the Hexapoda are deep in the Crustacea tree, and any of the Hexapoda is distinctly closer to e.g. a Multicrustacean than an Oligostracan is. Fossil record Crustaceans have a rich and extensive fossil record; most of the major groups of crustaceans appear in the fossil record before the end of the Cambrian, namely the Branchiopoda, Maxillopoda (including barnacles and tongue worms) and Malacostraca; there is some debate as to whether or not Cambrian animals assigned to Ostracoda are truly ostracods, which would otherwise start in the Ordovician. The only classes to appear later are the Cephalocarida, which have no fossil record, and the Remipedia, which were first described from the fossil Tesnusocaris goldichi, but do not appear until the Carboniferous. Most of the early crustaceans are rare, but fossil crustaceans become abundant from the Carboniferous period onwards. Within the Malacostraca, no fossils are known for krill, while both Hoplocarida and Phyllopoda contain important groups that are now extinct as well as extant members (Hoplocarida: mantis shrimp are extant, while Aeschronectida are extinct; Phyllopoda: Canadaspidida are extinct, while Leptostraca are extant). Cumacea and Isopoda are both known from the Carboniferous, as are the first true mantis shrimp. In the Decapoda, prawns and polychelids appear in the Triassic, and shrimp and crabs appear in the Jurassic. 
The fossil burrow Ophiomorpha is attributed to ghost shrimps, whereas the fossil burrow Camborygma is attributed to crayfishes. The Permian–Triassic deposits of Nurra preserve the oldest (Permian: Roadian) fluvial burrows ascribed to ghost shrimps (Decapoda: Axiidea, Gebiidea) and crayfishes (Decapoda: Astacidea, Parastacidea), respectively. However, the great radiation of crustaceans occurred in the Cretaceous, particularly in crabs, and may have been driven by the adaptive radiation of their main predators, bony fish. The first true lobsters also appear in the Cretaceous. Consumption by humans Many crustaceans are consumed by humans, and nearly 10,700,000 tons were harvested in 2007; the vast majority of this output is of decapod crustaceans: crabs, lobsters, shrimp, crawfish, and prawns. Over 60% by weight of all crustaceans caught for consumption are shrimp and prawns, and nearly 80% is produced in Asia, with China alone producing nearly half the world's total. Non-decapod crustaceans are not widely consumed, with only 118,000 tons of krill being caught, despite krill having one of the greatest biomasses on the planet.
Biology and health sciences
Biology
null
26064288
https://en.wikipedia.org/wiki/Lebesgue%20integral
Lebesgue integral
In mathematics, the integral of a non-negative function of a single variable can be regarded, in the simplest case, as the area between the graph of that function and the axis. The Lebesgue integral, named after French mathematician Henri Lebesgue, is one way to make this concept rigorous and to extend it to more general functions. The Lebesgue integral is more general than the Riemann integral, which it largely replaced in mathematical analysis since the first half of the 20th century. It can accommodate functions with discontinuities arising in many applications that are pathological from the perspective of the Riemann integral. The Lebesgue integral also has generally better analytical properties. For instance, under mild conditions, it is possible to exchange limits and Lebesgue integration, while the conditions for doing this with a Riemann integral are comparatively baroque. Furthermore, the Lebesgue integral can be generalized in a straightforward way to more general spaces, measure spaces, such as those that arise in probability theory. The term Lebesgue integration can mean either the general theory of integration of a function with respect to a general measure, as introduced by Lebesgue, or the specific case of integration of a function defined on a sub-domain of the real line with respect to the Lebesgue measure. Introduction The integral of a positive real function between boundaries and can be interpreted as the area under the graph of , between and . This notion of area fits some functions, mainly piecewise continuous functions, including elementary functions, for example polynomials. However, the graphs of other functions, for example the Dirichlet function, don't fit well with the notion of area. Graphs like the one of the latter, raise the question: for which class of functions does "area under the curve" make sense? The answer to this question has great theoretical importance. As part of a general movement toward rigor in mathematics in the nineteenth century, mathematicians attempted to put integral calculus on a firm foundation. The Riemann integral—proposed by Bernhard Riemann (1826–1866)—is a broadly successful attempt to provide such a foundation. Riemann's definition starts with the construction of a sequence of easily calculated areas that converge to the integral of a given function. This definition is successful in the sense that it gives the expected answer for many already-solved problems, and gives useful results for many other problems. However, Riemann integration does not interact well with taking limits of sequences of functions, making such limiting processes difficult to analyze. This is important, for instance, in the study of Fourier series, Fourier transforms, and other topics. The Lebesgue integral describes better how and when it is possible to take limits under the integral sign (via the monotone convergence theorem and dominated convergence theorem). While the Riemann integral considers the area under a curve as made out of vertical rectangles, the Lebesgue definition considers horizontal slabs that are not necessarily just rectangles, and so it is more flexible. For this reason, the Lebesgue definition makes it possible to calculate integrals for a broader class of functions. For example, the Dirichlet function, which is 1 where its argument is rational and 0 otherwise, has a Lebesgue integral, but does not have a Riemann integral. 
Furthermore, the Lebesgue integral of this function is zero, which agrees with the intuition that when picking a real number uniformly at random from the unit interval, the probability of picking a rational number should be zero. Lebesgue summarized his approach to integration in a letter to Paul Montel: The insight is that one should be able to rearrange the values of a function freely, while preserving the value of the integral. This process of rearrangement can convert a very pathological function into one that is "nice" from the point of view of integration, and thus let such pathological functions be integrated. Intuitive interpretation summarizes the difference between the Riemann and Lebesgue approaches thus: "to compute the Riemann integral of , one partitions the domain into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of ." For the Riemann integral, the domain is partitioned into intervals, and bars are constructed to meet the height of the graph. The areas of these bars are added together, and this approximates the integral, in effect by summing areas of the form where is the height of a rectangle and is its width. For the Lebesgue integral, the range is partitioned into intervals, and so the region under the graph is partitioned into horizontal "slabs" (which may not be connected sets). The area of a small horizontal "slab" under the graph of , of height , is equal to the measure of the slab's width times : The Lebesgue integral may then be defined by adding up the areas of these horizontal slabs. From this perspective, a key difference with the Riemann integral is that the "slabs" are no longer rectangular (cartesian products of two intervals), but instead are cartesian products of a measurable set with an interval. Simple functions An equivalent way to introduce the Lebesgue integral is to use so-called simple functions, which generalize the step functions of Riemann integration. Consider, for example, determining the cumulative COVID-19 case count from a graph of smoothed cases each day (right). The Riemann–Darboux approach Partition the domain (time period) into intervals (eight, in the example at right) and construct bars with heights that meet the graph. The cumulative count is found by summing, over all bars, the product of interval width (time in days) and the bar height (cases per day). The Lebesgue approach Choose a finite number of target values (eight, in the example) in the range of the function. By constructing bars with heights equal to these values, but below the function, they imply a partitioning of the domain into the same number of subsets (subsets, indicated by color in the example, need not be connected). This is a "simple function," as described below. The cumulative count is found by summing, over all subsets of the domain, the product of the measure on that subset (total time in days) and the bar height (cases per day). Relation between the viewpoints One can think of the Lebesgue integral either in terms of slabs or simple functions. Intuitively, the area under a simple function can be partitioned into slabs based on the (finite) collection of values in the range of a simple function (a real interval). Conversely, the (finite) collection of slabs in the undergraph of the function can be rearranged after a finite repartitioning to be the undergraph of a simple function. The slabs viewpoint makes it easy to define the Lebesgue integral, in terms of basic calculus. 
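In schematic terms, the two approaches approximate the integral by sums of the following forms; this is an informal rendering, with the sample points \(t_i\), partition points \(x_i\) and target values \(y_j\) chosen only for illustration:

\[ S_{\text{Riemann}} \;\approx\; \sum_{i} f(t_i)\,(x_i - x_{i-1}), \qquad S_{\text{Lebesgue}} \;\approx\; \sum_{j} y_j \,\mu\bigl(\{x : y_j \le f(x) < y_{j+1}\}\bigr). \]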
Suppose that is a (Lebesgue measurable) function, taking non-negative values (possibly including ). Define the distribution function of as the "width of a slab", i.e., Then is monotone decreasing and non-negative, and therefore has an (improper) Riemann integral over . The Lebesgue integral can then be defined by where the integral on the right is an ordinary improper Riemann integral, of a non-negative function (interpreted appropriately as if on a neighborhood of 0). Most textbooks, however, emphasize the simple functions viewpoint, because it is then more straightforward to prove the basic theorems about the Lebesgue integral. Measure theory Measure theory was initially created to provide a useful abstraction of the notion of length of subsets of the real line—and, more generally, area and volume of subsets of Euclidean spaces. In particular, it provided a systematic answer to the question of which subsets of have a length. As later set theory developments showed (see non-measurable set), it is actually impossible to assign a length to all subsets of in a way that preserves some natural additivity and translation invariance properties. This suggests that picking out a suitable class of measurable subsets is an essential prerequisite. The Riemann integral uses the notion of length explicitly. Indeed, the element of calculation for the Riemann integral is the rectangle , whose area is calculated to be . The quantity is the length of the base of the rectangle and is the height of the rectangle. Riemann could only use planar rectangles to approximate the area under the curve, because there was no adequate theory for measuring more general sets. In the development of the theory in most modern textbooks (after 1950), the approach to measure and integration is axiomatic. This means that a measure is any function defined on a certain class of subsets of a set , which satisfies a certain list of properties. These properties can be shown to hold in many different cases. Measurable functions We start with a measure space where is a set, is a σ-algebra of subsets of , and is a (non-negative) measure on defined on the sets of . For example, can be Euclidean -space or some Lebesgue measurable subset of it, is the σ-algebra of all Lebesgue measurable subsets of , and is the Lebesgue measure. In the mathematical theory of probability, we confine our study to a probability measure , which satisfies . Lebesgue's theory defines integrals for a class of functions called measurable functions. A real-valued function on is measurable if the pre-image of every interval of the form is in : We can show that this is equivalent to requiring that the pre-image of any Borel subset of be in . The set of measurable functions is closed under algebraic operations, but more importantly it is closed under various kinds of point-wise sequential limits: are measurable if the original sequence , where , consists of measurable functions. There are several approaches for defining an integral for measurable real-valued functions defined on , and several notations are used to denote such an integral. Following the identification in Distribution theory of measures with distributions of order , or with Radon measures, one can also use a dual pair notation and write the integral with respect to in the form Definition The theory of the Lebesgue integral requires a theory of measurable sets and measures on these sets, as well as a theory of measurable functions and integrals on these functions. 
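For reference, the distribution-function ("slab") definition described above can be written as follows; this is a standard formulation, assuming \(f\) is measurable and non-negative:

\[ f^{*}(t) \;=\; \mu\bigl(\{x : f(x) > t\}\bigr), \qquad \int f \, d\mu \;=\; \int_{0}^{\infty} f^{*}(t)\, dt , \]

where the right-hand side is an improper Riemann integral of the non-increasing function \(f^{*}\). The alternative construction via simple functions is developed next.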
Via simple functions One approach to constructing the Lebesgue integral is to make use of so-called simple functions: finite, real linear combinations of indicator functions. Simple functions that lie directly underneath a given function can be constructed by partitioning the range of into a finite number of layers. The intersection of the graph of with a layer identifies a set of intervals in the domain of , which, taken together, is defined to be the preimage of the lower bound of that layer, under the simple function. In this way, the partitioning of the range of implies a partitioning of its domain. The integral of a simple function is found by summing, over these (not necessarily connected) subsets of the domain, the product of the measure of the subset and its image under the simple function (the lower bound of the corresponding layer); intuitively, this product is the sum of the areas of all bars of the same height. The integral of a non-negative general measurable function is then defined as an appropriate supremum of approximations by simple functions, and the integral of a (not necessarily positive) measurable function is the difference of two integrals of non-negative measurable functions. Indicator functions To assign a value to the integral of the indicator function of a measurable set consistent with the given measure , the only reasonable choice is to set: Notice that the result may be equal to , unless is a finite measure. Simple functions A finite linear combination of indicator functions where the coefficients are real numbers and are disjoint measurable sets, is called a measurable simple function. We extend the integral by linearity to non-negative measurable simple functions. When the coefficients are positive, we set whether this sum is finite or +∞. A simple function can be written in different ways as a linear combination of indicator functions, but the integral will be the same by the additivity of measures. Some care is needed when defining the integral of a real-valued simple function, to avoid the undefined expression : one assumes that the representation is such that whenever . Then the above formula for the integral of makes sense, and the result does not depend upon the particular representation of satisfying the assumptions. If is a measurable subset of and is a measurable simple function one defines Non-negative functions Let be a non-negative measurable function on , which we allow to attain the value , in other words, takes non-negative values in the extended real number line. We define We need to show this integral coincides with the preceding one, defined on the set of simple functions, when is a segment . There is also the question of whether this corresponds in any way to a Riemann notion of integration. It is possible to prove that the answer to both questions is yes. We have defined the integral of for any non-negative extended real-valued measurable function on . For some functions, this integral is infinite. It is often useful to have a particular sequence of simple functions that approximates the Lebesgue integral well (analogously to a Riemann sum). For a non-negative measurable function , let be the simple function whose value is whenever for a non-negative integer less than, say, Then it can be proven directly that and that the limit on the right hand side exists as an extended real number. 
This bridges the connection between the approach to the Lebesgue integral using simple functions, and the motivation for the Lebesgue integral using a partition of the range. Signed functions To handle signed functions, we need a few more definitions. If is a measurable function of the set to the reals (including ), then we can write where Note that both and are non-negative measurable functions. Also note that We say that the Lebesgue integral of the measurable function exists, or is defined if at least one of and is finite: In this case we define If we say that is Lebesgue integrable. That is, belongs to the . It turns out that this definition gives the desirable properties of the integral. Via improper Riemann integral Assuming that is measurable and non-negative, the function is monotonically non-increasing. The Lebesgue integral may then be defined as the improper Riemann integral of : This integral is improper at the upper limit of , and possibly also at zero. It exists, with the allowance that it may be infinite. As above, the integral of a Lebesgue integrable (not necessarily non-negative) function is defined by subtracting the integral of its positive and negative parts. Complex-valued functions Complex-valued functions can be similarly integrated, by considering the real part and the imaginary part separately. If for real-valued integrable functions , , then the integral of is defined by The function is Lebesgue integrable if and only if its absolute value is Lebesgue integrable (see Absolutely integrable function). Example Consider the indicator function of the rational numbers, , also known as the Dirichlet function. This function is nowhere continuous. is not Riemann-integrable on : No matter how the set is partitioned into subintervals, each partition contains at least one rational and at least one irrational number, because rationals and irrationals are both dense in the reals. Thus the upper Darboux sums are all one, and the lower Darboux sums are all zero. is Lebesgue-integrable on using the Lebesgue measure: Indeed, it is the indicator function of the rationals so by definition because is countable. Domain of integration A technical issue in Lebesgue integration is that the domain of integration is defined as a set (a subset of a measure space), with no notion of orientation. In elementary calculus, one defines integration with respect to an orientation: Generalizing this to higher dimensions yields integration of differential forms. By contrast, Lebesgue integration provides an alternative generalization, integrating over subsets with respect to a measure; this can be notated as to indicate integration over a subset . For details on the relation between these generalizations, see . The main theory linking these ideas is that of homological integration (sometimes called geometric integration theory), pioneered by Georges de Rham and Hassler Whitney. Limitations of the Riemann integral With the advent of Fourier series, many analytical problems involving integrals came up whose satisfactory solution required interchanging limit processes and integral signs. However, the conditions under which the integrals are equal proved quite elusive in the Riemann framework. There are some other technical difficulties with the Riemann integral. These are linked with the limit-taking difficulty discussed above. Failure of monotone convergence As shown above, the indicator function on the rationals is not Riemann integrable. In particular, the Monotone convergence theorem fails. 
To see why, let be an enumeration of all the rational numbers in (they are countable so this can be done). Then let The function is zero everywhere, except on a finite set of points. Hence its Riemann integral is zero. Each is non-negative, and this sequence of functions is monotonically increasing, but its limit as is , which is not Riemann integrable. Unsuitability for unbounded intervals The Riemann integral can only integrate functions on a bounded interval. It can however be extended to unbounded intervals by taking limits, so long as this doesn't yield an answer such as . Integrating on structures other than Euclidean space The Riemann integral is inextricably linked to the order structure of the real line. Basic theorems of the Lebesgue integral Two functions are said to be equal almost everywhere ( for short) if is a subset of a null set. Measurability of the set is not required. The following theorems are proved in most textbooks on measure theory and Lebesgue integration. If and are non-negative measurable functions (possibly assuming the value ) such that almost everywhere, then To wit, the integral respects the equivalence relation of almost-everywhere equality. If and are functions such that almost everywhere, then is Lebesgue integrable if and only if is Lebesgue integrable, and the integrals of and are the same if they exist. Linearity: If and are Lebesgue integrable functions and and are real numbers, then is Lebesgue integrable and Monotonicity: If , then Monotone convergence theorem: Suppose is a sequence of non-negative measurable functions such that Then, the pointwise limit of is Lebesgue measurable and The value of any of the integrals is allowed to be infinite. Fatou's lemma: If is a sequence of non-negative measurable functions, then Again, the value of any of the integrals may be infinite. Dominated convergence theorem: Suppose is a sequence of complex measurable functions with pointwise limit , and there is a Lebesgue integrable function (i.e., belongs to the ) such that for all . Then is Lebesgue integrable and Necessary and sufficient conditions for the interchange of limits and integrals were proved by Cafiero, generalizing earlier work of Renato Caccioppoli, Vladimir Dubrovskii, and Gaetano Fichera. Alternative formulations It is possible to develop the integral with respect to the Lebesgue measure without relying on the full machinery of measure theory. One such approach is provided by the Daniell integral. There is also an alternative approach to developing the theory of integration via methods of functional analysis. The Riemann integral exists for any continuous function of compact support defined on (or a fixed open subset). Integrals of more general functions can be built starting from these integrals. Let be the space of all real-valued compactly supported continuous functions of . Define a norm on by Then is a normed vector space (and in particular, it is a metric space.) All metric spaces have Hausdorff completions, so let be its completion. This space is isomorphic to the space of Lebesgue integrable functions modulo the subspace of functions with integral zero. Furthermore, the Riemann integral is a uniformly continuous functional with respect to the norm on , which is dense in . Hence has a unique extension to all of . This integral is precisely the Lebesgue integral. 
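The norm used in this completion construction is the usual \(L^1\) norm; in standard notation, for a compactly supported continuous function \(f\),

\[ \lVert f \rVert_{1} \;=\; \int \lvert f(x) \rvert \, dx , \]

where the integral on the right can be taken as a Riemann integral, since \(\lvert f \rvert\) is continuous with compact support.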
More generally, when the measure space on which the functions are defined is also a locally compact topological space (as is the case with the real numbers ), measures compatible with the topology in a suitable sense (Radon measures, of which the Lebesgue measure is an example) can be used to define an integral in the same manner, starting from the integrals of continuous functions with compact support. More precisely, the compactly supported functions form a vector space that carries a natural topology, and a (Radon) measure is defined as a continuous linear functional on this space. The value of a measure at a compactly supported function is then also by definition the integral of the function. One then proceeds to expand the measure (the integral) to more general functions by continuity, and defines the measure of a set as the integral of its indicator function. This is the approach taken by Nicolas Bourbaki and a number of other authors. For details see Radon measures. Limitations of the Lebesgue integral The main purpose of the Lebesgue integral is to provide an integral notion where limits of integrals hold under mild assumptions. There is no guarantee that every function is Lebesgue integrable. But it may happen that improper integrals exist for functions that are not Lebesgue integrable. One example would be the sinc function: over the entire real line. This function is not Lebesgue integrable, as On the other hand, exists as an improper integral and can be computed to be finite; it is twice the Dirichlet integral and equal to .
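The sinc example can be made explicit; these are standard formulas, included here for concreteness:

\[ \int_{-\infty}^{\infty} \left| \frac{\sin x}{x} \right| dx \;=\; \infty , \qquad \lim_{t \to \infty} \int_{-t}^{t} \frac{\sin x}{x}\, dx \;=\; \pi , \]

so the function has a finite improper Riemann integral over the real line even though it is not Lebesgue integrable.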
Mathematics
Integral calculus
null
31747219
https://en.wikipedia.org/wiki/Chromebook
Chromebook
Chromebook (sometimes stylized in lowercase as chromebook) is a line of laptops, desktops, tablets and all-in-one computers that run ChromeOS, a proprietary operating system developed by Google. Chromebooks are optimized for web access. They also run Android apps, Linux applications, and progressive web apps, which do not require an Internet connection. They are manufactured and offered by various OEMs. The first Chromebooks shipped on June 15, 2011. As of 2020, Chromebook's market share was 10.8%, placing it above the Mac platform; it has mainly found success in education markets. Since 2021, all Chromebooks receive 10 years of regular automatic updates with security patches from Google; previously it was 8 years. If required, Chromebooks can be repurposed with other operating systems or for other uses. History The first Chromebooks for sale, by Acer Inc. and Samsung, were announced at the Google I/O conference in May 2011 and began shipping on June 15, 2011. Lenovo, Hewlett-Packard (now HP Inc.) and Google itself entered the market in early 2013. In December 2013, Samsung launched a Samsung Chromebook specifically for the Indian market that employed the company's Exynos 5 Dual processor. Critical reaction was initially skeptical, with some reviewers, such as then New York Times technology columnist David Pogue, unfavorably comparing the value proposition of Chromebooks with that of more fully featured laptops running the Microsoft Windows operating system. That complaint dissipated later in reviews of machines from Acer and Samsung that were priced lower. In February 2013, Google announced and began shipping the Chromebook Pixel, a higher-spec machine with a high-end retail price. In January 2015, Acer announced the first big-screen Chromebook, the Acer Chromebook 15, with an FHD 15.6-inch display. By March 2018, Chromebooks made up 60% of computers purchased by schools in the United States. In October 2012, Simon Phipps, writing in InfoWorld, said, "The Chromebook line is probably the most successful Linux desktop/laptop computer we've seen to date". On October 9, 2023, Google announced Chromebook Plus, a new category of Chromebooks that requires minimum hardware specifications, such as a minimum CPU (12th Gen Intel Core i3 or AMD Ryzen 3 7000 series), at least 8 GB of RAM, 128 GB of local storage, a 1080p IPS or better display, and a 1080p or better webcam. The Plus supports video editing with LumaFusion and web versions of Google Photos Magic Eraser and Adobe Photoshop. Integration with Android In May 2016, Google announced it would make Android apps available on Chromebooks via the Google Play application distribution platform. At the time, Google Play access was scheduled for the ASUS Chromebook Flip, the Acer Chromebook R 11 and the most recent Chromebook Pixel, with other Chromebooks slated over time. Partnering with Google, Samsung released the Chromebook Plus and Chromebook Pro in early 2017, the first Chromebooks to come with the Play Store pre-installed. A February 2017 review in The Verge reported that the Plus with its ARM processor handled Android apps "much better" than the Intel-based Pro, but said that "Android apps on Chrome OS are still in beta" and are "very much [an] unfinished experience." The number of ChromeOS systems supporting Android apps in either the stable or beta channel is increasing. 
With the launch of Android 11 on ChromeOS in 2021, Google announced that Android apps would be moved to a new virtual machine called ArcVM, improving Android’s environment isolation for better security and maintainability. Compatibility with Linux applications (GNU compatibility) In May 2018, Google announced it would make Linux desktop applications available on Chromebooks via a virtual machine code-named "Crostini". ChromeOS, which runs on Chromebooks, is already based on the Linux kernel, but it does not provide default support for applications that expect a GNU-based system. Crostini left the beta stage in May 2021 as part of release 91. Google maintains a list of devices launched before 2019 that support Crostini. Design Initial hardware partners for Chromebook development included Acer, Adobe, Asus, Freescale, Hewlett-Packard (later HP Inc.), Lenovo, Qualcomm, Texas Instruments, Toshiba, Intel, Samsung, Dell, LG, NEC, and Sharp; devices from the latter three manufacturers were only available in their home countries. Chromebooks ship with ChromeOS, an operating system that uses the Linux kernel and the Google Chrome web browser with an integrated media player. Enabling developer mode allows the installation of Linux distributions and other operating systems on Chromebooks. Chromebooks also include a screw or switch directly on the motherboard to enable or disable write protection. Crouton is a script that allows the installation of Linux distributions from ChromeOS and running both operating systems simultaneously. Some Chromebooks include SeaBIOS, which can be enabled to install and boot Linux distributions directly. With limited offline capability and a fast boot time, Chromebooks are primarily designed for use while connected to the Internet and signed in to a Google account. Instead of installing traditional applications that pose a risk of malware, users install web apps from the Chrome Web Store. Google claims that a multi-layer security architecture eliminates the need for anti-virus software. Support for many Bluetooth and USB devices such as cameras, mice, external keyboards and flash drives is included, utilizing a feature similar to plug-and-play on other operating systems. All Chromebooks, except the first three, boot with the help of Coreboot, a fast-booting BIOS. Google supports new Chromebooks made since 2021 with automatic updates for at least 10 years. Previously, Chromebooks were supported for 8 years, while they were initially supported for 6.5 years. The date when a device will stop receiving automatic software and security updates can be found in the "Additional info" section of the "About device" page in the device settings. Google maintains an Auto Update policy listing ChromeOS makes and models with their auto update expiration dates. The hardware generation and Linux kernel version of most products can be inferred from the code name and its corresponding video game series. Form factors Chromebooks are available from OEMs in various form factors; some may be known by other names: Chromebook, in laptop form factor. Chromebook tablet, introduced in March 2018 by Acer with the Chromebook Tab 10. The device was to compete with the lower-priced Apple iPad tablet in the education market. Chromebox, an ultra small form-factor desktop PC first introduced by Samsung in May 2012. Chromebase, an all-in-one desktop PC introduced by LG Electronics in January 2014. Chromebit, an HDMI stick PC introduced by Asus in April 2015. 
However, as of 2019, OEMs no longer manufacture this form factor. Sales and marketing The first two commercially available Chromebooks, the Samsung Series 5 and the Acer AC700, were unveiled on May 11, 2011, at the Google I/O developer conference. They were to begin selling through online channels, including Amazon and Best Buy in the United States, the United Kingdom, France, Germany, the Netherlands, Italy and Spain starting June 15, 2011; however, Acer's AC700 was not available until early July. The first machines sold for between $349 and $499, depending on the model and 3G option. Google also offered a monthly payment scheme for business and education customers at $28 and $20 per user per month, respectively, for a three-year contract, including replacements and upgrades. Verizon offered models equipped with 3G/4G LTE connectivity and 100–200 MB of free wireless data per month for two years. Google's early marketing efforts relied primarily on hands-on experience: giving away Samsung machines to 10 Cr-48 pilot program participants along with the title Chromebook Guru and lending Chromebooks to passengers on some Virgin America flights. At the end of September 2011, Google launched the Chrome Zone, a "store within a store", inside the Currys and PC World superstore in London. The store had a Google-style look and feel with splashes of color all around the retail store front. The concept was later changed to a broader in-store Google shop, which has not expanded beyond the PC World on Tottenham Court Road. In addition to these marketing strategies, Google has created several "Chromebook minis" that demonstrate the ease of use and simplicity of the devices in a comical manner. For example, when the question "How do you back up a Chromebook?" is asked, it is implied to refer to data backup, but the ad instead shows two hands pushing a Chromebook back to the end of a table. This is followed by the statement, "You don't have to back up a Chromebook," showing how all data is stored on the web. In an article published on ZDNet in June 2011, entitled "Five Chromebook concerns for businesses", Steven J. Vaughan-Nichols faulted the devices for lack of virtual private network capability and for not supporting some Wi-Fi security methods, in particular Wi-Fi Protected Access II (WPA2) Enterprise with Extensible Authentication Protocol-Transport Layer Security (EAP-TLS) or Cisco's Lightweight Extensible Authentication Protocol (LEAP). He also noted that the file manager did not work, that the undocumented crosh shell was needed to accomplish basic tasks such as setting up a secure shell (SSH) network connection, and that there were serious deficiencies in documentation. In one of the first customer reviews, the city of Orlando, Florida, reported on their initial testing of 600 Chromebooks as part of a broader study related to accessing virtual desktops. Early indications show potential value in reducing IT support costs. End users have indicated that the Chromebook is easy to travel with and starts up quickly. One stated that "If I just need to stay connected for emergencies, I take my Chrome," but when traveling for business she would still take her laptop. Orlando plans to continue using the Chromebooks. On November 21, 2011, Google announced price reductions on all Chromebooks. The Wi-Fi-only Samsung Series 5 was reduced to $349, the 3G Samsung Series 5 to $449, and the Acer AC700 to $299. 
The updated Series 5 550 and the Chromebox, the first ChromeOS desktop machines, were released by Samsung in May 2012. The two lowest-cost Chromebooks emerged later in the fall: the $249 Samsung Series 3 and the $199 Acer C7. The following February, Google introduced the most expensive machine, its Chromebook Pixel, with a starting price of $1,299. All models released after May 2012 include 100 GB–1.09 TB of Google Drive cloud storage and 12 GoGo Wi-Fi passes. By January 2013, Acer's Chromebook sales were being driven by "heavy Internet users with educational institutions", and the platform represented 5–10 percent of the company's US shipments, according to Acer president Jim Wong. He called those numbers sustainable, contrasting them with low Windows 8 sales, which he blamed for a slump in the market. Wong said that the company would consider marketing Chromebooks to other developed countries, as well as to corporations. He noted that although ChromeOS is free to license for hardware vendors, it has required greater marketing expenditure than Windows, offsetting the licensing savings. During the first 11 months of 2013, 1.76 million Chromebooks were sold in the United States, representing 21% of the US commercial business-to-business laptop market. During the same period in 2012, Chromebooks sold 400,000 units and had a negligible market share. In January 2015, Silviu Stahie noted in Softpedia that Chromebooks were eating into Microsoft's market share. He said "Microsoft is engaged in a silent war and it's actually losing. They are fighting an enemy that is so insidious and so cunning that it's actually hurting the company more than anything else. The enemy is called Chromebooks and they are using Linux...There is no sign that things are slowing down and Microsoft really needs a win, and soon if it wants to remain relevant." In 2015, by sales volume to companies in the US, Chromebooks were second only to Windows-based devices, with Android tablets having overtaken Apple's devices in 2014: "Chromebook sales through the U.S. B2B channels increased 43 percent during the first half of 2015, helping to keep overall B2B PC and tablet sales from falling. [...] Sales of Google OS-equipped (Android and Chrome) devices saw a 29 percent increase over 2014 propelled by Chromebook sales, while Apple devices declined 12 percent and Windows devices fell 8 percent." As of October 28, 2024, the Lenovo Chromebook N23 (Intel Celeron) is the cheapest Chromebook in the world. In 2020, Chromebooks outsold Apple Macs for the first time by taking market share from laptops running Microsoft Windows. This rise is attributed to the platform's success in the education market. Education market The education market has been the Chromebooks' most notable success, competing on the low cost of the hardware, software and upkeep. The simplicity of the machines, which could be a drawback in other markets, has proven an advantage to school districts by reducing training and maintenance costs. By January 2012, even while commercial sales were flat, Google placed nearly 27,000 Chromebooks in schools across 41 states in the US, including "one-on-one" programs, which allocate a computer to every student, in South Carolina, Illinois, and Iowa. As of August 2012, over 500 school districts in the United States and Europe were using the device. In 2016, Chromebooks represented 58 percent of the 2.6 million mobile devices purchased by U.S. schools and about 64 percent of that market outside the U.S. By contrast, sales of Apple devices to U.S. 
schools dropped that year to 19 percent, compared with 52 percent in 2012. Helping spur Chromebook sales is Google Classroom, an app for teachers introduced in 2014 that serves as a hub for classroom activities, including attendance, classroom discussions, homework, and communication with students and parents. There have, however, been concerns about privacy within the context of the education market for Chromebooks. Officials at schools issuing Chromebooks to students have affirmed that students have no right to privacy when using school-issued Chromebooks, even at home, and that all online and offline activity can be monitored by the school using third-party software, such as GoGuardian, pre-installed on the laptops. Further, the Electronic Frontier Foundation has complained that Google itself is violating the privacy of students by enabling the synchronization function within Google Chrome ("Chrome Sync") by default, allowing web browsing histories and other data of students – including those under 13 – to be stored on Google servers and potentially used for purposes other than authorized educational ones. A point of contention has been the fact that users of school-issued Chromebooks cannot change these settings themselves as a measure to protect their privacy; only the administrator who issued the laptops can change them. The EFF claims that this violates a Student Privacy Pledge already signed by Google in 2014. EFF staff attorney Nate Cardozo stated: "Minors shouldn't be tracked or used as guinea pigs, with their data treated as a profit center. If Google wants to use students' data to 'improve Google products', then it needs to get express consent from parents." Despite this, Chromebooks made up nearly 60% of computers used in US schools in March 2018. CNET writer Alfred Ng cited superior security as the main reason for this level of market adoption. According to research firms Gartner and Canalys, over 30 million Chromebooks were shipped in 2020, as school districts and parents purchased them for remote learning purposes during the COVID-19 pandemic. Manufacturers and model examples Chromebooks made by 12 manufacturers are sold as of October 31, 2022. Acer Inc. Asus Dell Fujitsu Client Computing Limited Google (Pixelbook) HP Inc. Lenovo LG Electronics: Not retailed to private customers; for the company's home country only. NEC: Not retailed to private customers; for the company's home country only. Poin2 Lab. Samsung Electronics Sharp Corporation: Not retailed to private customers; for the company's home country only. Further, there are four manufacturers that stopped making Chromebooks before 2022: AOpen Haier Hisense Toshiba Google Cr-48 At a December 7, 2010, press briefing, Google announced the ChromeOS Pilot Program, a pilot experiment, and the first Chromebook, the Cr-48 Chrome Notebook, a prototype intended to test the ChromeOS operating system and hardware modified for it. The device had a minimal design and was all black, completely unbranded although it was made by Inventec, and had a rubberized coating. The device was named after Chromium-48, an unstable isotope of the metallic element chromium (chemical symbol Cr), and the participants were named Cr-48 Test Pilots. Google distributed about 60,000 Cr-48 Chrome Notebooks between December 2010 and March 2011 for free to participants, and in return asked for feedback such as suggestions and bug reports. The Cr-48 was intended for testing only, not retail sales. 
The Cr-48's hardware design broke convention by replacing certain keys with shortcut keys, such as the function keys, and replacing the caps lock key with a dedicated search key (now called the "Everything Button"), which can be changed back to caps lock in the OS's keyboard settings. Google addressed complaints that the operating system offers little functionality when the host device is not connected to the Internet, demonstrated an offline version of Google Docs, and announced a 3G plan that would give users 100 MB of free data each month, with additional paid plans available from Verizon. The device's USB port is capable of supporting a keyboard, mouse, Ethernet adapter, or USB storage, but not a printer, as ChromeOS offers no print stack. Adding further hardware beyond the previously mentioned items will likely cause problems with the operating system's "self knowing" security model. Users instead were encouraged to use a secure service called Google Cloud Print to print to legacy printers connected to their desktop computers, or to connect an HP ePrint, Kodak Hero, Kodak ESP, or Epson Connect printer to the Google Cloud Print service for a "cloud aware" printer connection. The Cr-48 prototype laptop gave reviewers their first opportunity to evaluate ChromeOS running on a device. Ryan Paul of Ars Technica wrote that the machine "met the basic requirements for Web surfing, gaming, and personal productivity, but falls short for more intensive tasks." He praised Google's approach to security but wondered whether mainstream computer users would accept an operating system whose only application is a browser. He thought ChromeOS "could appeal to some niche audiences": people who just need a browser or companies that rely on Google Apps and other Web applications. But the operating system was "decidedly not a full-fledged alternative to the general purpose computing environments that currently ship on netbooks." Paul wrote that most of ChromeOS's advantages "can be found in other software environments without having to sacrifice native applications." In reviewing the Cr-48 on December 29, 2010, Kurt Bakke of Conceivably Tech wrote that a Chromebook had become the most frequently used family appliance in his household. "Its 15 second startup time and dedicated Google user accounts made it the go-to device for quick searches, email as well as YouTube and Facebook activities." But the device did not replace the other five notebooks in the house: one for gaming, two for the kids, and two more for general use. "The biggest complaint I heard was its lack of performance in Flash applications." In ongoing testing, Wolfgang Gruener, also writing in Conceivably Tech, said that cloud computing at cellular data speeds is unacceptable and that the lack of offline ability turns the Cr-48 "into a useless brick" when not connected. "It's difficult to use the Chromebook as an everyday device and give up what you are used to on a Mac/Windows PC, while you surely enjoy the dedicated cloud computing capabilities occasionally." The Cr-48 features an Intel Atom N455, a single-core processor clocked at 1.66 GHz, with 512 KB of cache and hyperthreading enabled. It also features 2 GB of removable DDR3 memory in a single SO-DIMM, integrated chipset graphics (Intel GMA 3150), and a 66 watt-hour battery. It has been found that the Intel NM10 chipset can get very hot during operation due to the lack of a proper heatsink, but this has been fixed in production Chromebooks. 
Pixel Launched by Google in February 2013, the Chromebook Pixel was the high-end machine in the Chromebook family. The laptop has an unusual 3:2 aspect ratio touch screen featuring what was, at its debut, the highest pixel density of any laptop, a faster CPU than its predecessors (an Intel Core i5), and an exterior design described by Wired as "an austere rectangular block of aluminum with subtly rounded edges". A second Pixel featuring LTE wireless communication and twice the storage capacity was shipped for arrival on April 12, 2013. The machine received much media attention, with many reviewers questioning the Pixel's value proposition compared to similarly priced Windows machines and the MacBook Air. Pixelbook In 2017, Google launched the Pixelbook to replace the Chromebook Pixel. Like the Chromebook Pixel, the Pixelbook has a 3:2 aspect ratio touchscreen with a high pixel density 12.3" display. Unlike the original Chromebook Pixel but like the second generation, the Pixelbook excludes an option for LTE. Instead, it implements Google's "instant tethering", which automatically tethers a Pixelbook to a Pixel phone's mobile connection. Pixelbook Go Announced in October 2019, the Pixelbook Go was released as a budget version of the Pixelbook, notable for its comparatively low price of $649 and light weight of 2.1 pounds. A version with a 4K display and Intel Core i7 processor followed in December. Samsung Samsung Series 5 Reviewing the Samsung Series 5 specifications, Scott Stein of CNET was unimpressed with a machine with a 12-inch screen and just 16 GB of onboard storage. "Chrome OS might be lighter than Windows XP, but we'd still prefer more media storage space. At this price, you could also get a Wi-Fi AMD E-350-powered ultraportable running Windows 7." On the other hand, MG Siegler of TechCrunch wrote a largely favorable review, praising the improvements in speed and touchpad sensitivity over the CR-48 prototype, as well as the long battery life and the fact that all models are priced below the iPad. In June 2011, iFixit dismantled a Samsung Series 5 and concluded that it was essentially an improved Cr-48. They rated it as 6/10 for repairability, predominantly because the case has to be opened to change the battery and because the RAM chip is soldered to the motherboard. iFixit noted that the "mostly-plastic construction" felt "a little cheap". On the plus side they stated that the screen was easy to remove and most of the components, including the solid-state drive, would be easy to replace. iFixit's Kyle Wiens wrote that the Series 5 "fixes the major shortfalls of the Cr-48 and adds the polish necessary to strike lust into the heart of a broad consumer base: sleek looks, 8+ hours of battery life, and optimized performance." Samsung Series 5 550 In May 2012, Samsung introduced the Chromebook Series 5 550, with a Wi-Fi model and a more expensive 3G model. Reviews generally questioned the value proposition. Dana Wollman of Engadget wrote that the Chromebook's keyboard "put thousand-dollar Ultrabooks to shame" and offered better display quality than on many laptops selling for twice as much. But the price "seems to exist in a vacuum—a place where tablet apps aren't growing more sophisticated, where Transformer-like Win8 tablets aren't on the way and where there aren't some solid budget Windows machines to choose from." Joe Wilcox of BetaNews wrote that "price to performance and how it compares to other choices" is "where Chromebook crumbles for many potential buyers." 
He noted that the new models sell for more than their predecessors, and while the price-performance ratio is quite favorable compared to the MacBook Air, "by the specs, there are plenty of lower-cost options." Samsung Series 3 In October 2012, the Series 3 Chromebook, the Samsung Chromebook XE303, was introduced at a San Francisco event. The device was cheaper, thinner and lighter than the Chromebook 550. Google marketed the Series 3 as the computer for everyone, due to its simple operating system (ChromeOS) and affordable price. Target markets included students and first-time computer users, as well as households looking for an extra computer. The lower price proved a watershed for some reviewers. New York Times technology columnist David Pogue reversed his earlier thumbs-down verdict on the Chromebook, writing that "$250 changes everything." The price is half that of an iPad, "even less than an iPad Mini or an iPod Touch. And you’re getting a laptop." He wrote that the Chromebook does many of the things people use computers and laptops for: playing Flash videos, and opening Microsoft Office documents. "In other words, Google is correct when it asserts that the Chromebook is perfect for schools, second computers in homes and businesses who deploy hundreds of machines." CNET's review of the Series 3 Chromebook was even more favorable, saying the machine largely delivered as a computer for students and as an additional computer for a household—especially for users who are already using Google Web applications like Google Docs, Google Drive, and Gmail. "It's got workable if not standout hardware, its battery life is good, it switches on quickly, and the $249 price tag means it's not as much of a commitment as the $550 Samsung Series 5 550 that arrived in May." The review subtracted points for performance. "It's fine for many tasks, but power users accustomed to having more than a couple dozen browser tabs open should steer clear." Samsung Chromebook 3 The Chromebook 3 is distinct from the similarly named Samsung Series 3 in several respects: newer (released 2016), different architecture (Intel Celeron N3050 instead of Exynos 5 Dual ARM Cortex), thinner (0.7"), and less expensive (about $100 less than the Series 3), while remaining a full implementation of ChromeOS. Samsung Galaxy Chromebook In 2020, Samsung introduced the Galaxy Chromebook, a high-end 2-in-1 laptop under the Galaxy branding for $999. Reviews praised the 4K AMOLED display, thin and light body, addition of the S-Pen, and speedy Intel Core i5-10210U performance. But they also criticized its poor battery life and heat output. Samsung Chromebook 4 and 4+ In October 2019, Samsung announced the Chromebook 4 (11.6") and 4+ (15.6") models. Both continue the budget-model Chromebook line with a dual-core Intel Celeron N4000 processor. The 4+ has a larger display and offers configurations with up to 6 GB of RAM. Reviews praised the cheap price and comfortable keyboard but criticized the poor displays. Samsung Galaxy Chromebook 2 The follow-on to the Galaxy Chromebook, the Galaxy Chromebook 2, was introduced in 2021. With a lower price, a lower-resolution FHD QLED display, a lower-tier Core i3 processor, and no stylus, it is largely a downgrade from the previous model; these changes were intended to improve battery life. HP HP's first Chromebook, and the largest Chromebook on the market at that time, was the Pavilion 14 Chromebook, launched February 3, 2013. It had an Intel Celeron 847 CPU and either 2 GB or 4 GB of RAM. 
Battery life was not long, at just over 4 hours, but the larger form factor made it friendlier for all-day use. HP introduced the Chromebook 11 on October 8, 2013, in the US. In December 2013, Google and HP recalled 145,000 chargers due to overheating. Sales were halted, resuming with a redesigned charger the following month. The HP Chromebook 14 was announced on September 11, 2013, with an Intel Haswell Celeron processor, USB 3.0 ports, and 4G broadband. An updated version of the Chromebook lineup was announced on September 3, 2014. The 11-inch models included an Intel processor, while the 14-inch models featured a fanless design powered by an Nvidia Tegra K1 processor. HP Chromebooks are available in several colors. Desktop variants Three types of desktop computers also run ChromeOS. Chromebox Classed as small form-factor PCs, Chromeboxes typically feature a power switch and a set of ports: local area network, USB, DVI-D, DisplayPort, and audio. As with Chromebooks, Chromeboxes employ solid-state memory and support Web applications, but require an external monitor, keyboard, and pointing device. Chromebase Chromebase is an "all-in-one" ChromeOS device. The first model, released by LG Electronics, integrated a screen, speakers, a 1.3-megapixel webcam, and a microphone, with a suggested retail price of $350. LG unveiled the product in January 2014, at International CES in Las Vegas. Chromebit The Chromebit is a stick PC running on Google's ChromeOS operating system. When placed in the HDMI port of a television or monitor, this device turns that display into a personal computer. The Chromebit allows adding a keyboard and mouse over Bluetooth or a USB port. HDMI does not provide power to connected devices, so the Chromebit draws power either from an external USB power supply or via a USB port on the monitor. As of 2020, it no longer receives updates.
Technology
Specific hardware
null
44503418
https://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene%20extinction%20event
Cretaceous–Paleogene extinction event
The Cretaceous–Paleogene (K–Pg) extinction event, also known as the K–T extinction, was the mass extinction of three-quarters of the plant and animal species on Earth approximately 66 million years ago. The event caused the extinction of all non-avian dinosaurs. Most other tetrapods weighing more than about 25 kilograms also became extinct, with the exception of some ectothermic species such as sea turtles and crocodilians. It marked the end of the Cretaceous period, and with it the Mesozoic era, while heralding the beginning of the current era, the Cenozoic. In the geologic record, the K–Pg event is marked by a thin layer of sediment called the K–Pg boundary or K–T boundary, which can be found throughout the world in marine and terrestrial rocks. The boundary clay shows unusually high levels of the metal iridium, which is more common in asteroids than in the Earth's crust. As originally proposed in 1980 by a team of scientists led by Luis Alvarez and his son Walter, it is now generally thought that the K–Pg extinction was caused by the impact of a massive asteroid, about 10 to 15 kilometers wide, 66 million years ago, creating the Chicxulub crater, which devastated the global environment, mainly through a lingering impact winter that halted photosynthesis in plants and plankton. The impact hypothesis, also known as the Alvarez hypothesis, was bolstered by the discovery of the Chicxulub crater in the Gulf of Mexico's Yucatán Peninsula in the early 1990s, which provided conclusive evidence that the K–Pg boundary clay represented debris from an asteroid impact. The fact that the extinctions occurred simultaneously provides strong evidence that they were caused by the asteroid. A 2016 drilling project into the Chicxulub peak ring confirmed that the peak ring comprised granite ejected within minutes from deep in the earth, but contained hardly any gypsum, the usual sulfate-containing sea floor rock in the region: the gypsum would have vaporized and dispersed as an aerosol into the atmosphere, causing longer-term effects on the climate and food chain. In October 2019, researchers asserted that the event rapidly acidified the oceans and produced long-lasting effects on the climate, detailing the mechanisms of the mass extinction. Other causal or contributing factors to the extinction may have been the Deccan Traps and other volcanic eruptions, climate change, and sea level change. However, in January 2020, scientists reported that climate modeling of the extinction event favored the asteroid impact and not volcanism. A wide range of terrestrial species perished in the K–Pg extinction, the best-known being the non-avian dinosaurs, along with many mammals, birds, lizards, insects, plants, and all the pterosaurs. In the oceans, the K–Pg extinction killed off plesiosaurs and mosasaurs and devastated teleost fish, sharks, mollusks (especially ammonites, which became extinct), and many species of plankton. It is estimated that 75% or more of all species on Earth vanished. However, the extinction also provided evolutionary opportunities: in its wake, many groups underwent remarkable adaptive radiation—sudden and prolific divergence into new forms and species within the disrupted and emptied ecological niches. Mammals in particular diversified in the Paleogene, evolving new forms such as horses, whales, bats, and primates. The surviving group of dinosaurs was the avians, a few species of ground and water fowl, which radiated into all modern species of birds. Among other groups, teleost fish and perhaps lizards also radiated. 
Extinction patterns The K–Pg extinction event was severe, global, rapid, and selective, eliminating a vast number of species. Based on marine fossils, it is estimated that 75% or more of all species became extinct. The event appears to have affected all continents at the same time. Non-avian dinosaurs, for example, are known from the Maastrichtian of North America, Europe, Asia, Africa, South America, and Antarctica, but are unknown from the Cenozoic anywhere in the world. Similarly, fossil pollen shows devastation of the plant communities in areas as far apart as New Mexico, Alaska, China, and New Zealand. Nevertheless, high latitudes appear to have been less strongly affected than low latitudes. Despite the event's severity, there was significant variability in the rate of extinction between and within different clades. Species that depended on photosynthesis declined or became extinct as atmospheric particles blocked sunlight and reduced the solar energy reaching the ground. This plant extinction caused a major reshuffling of the dominant plant groups. Omnivores, insectivores, and carrion-eaters survived the extinction event, perhaps because of the increased availability of their food sources. Neither strictly herbivorous nor strictly carnivorous mammals seem to have survived. Rather, the surviving mammals and birds fed on insects, worms, and snails, which in turn fed on detritus (dead plant and animal matter). In stream communities and lake ecosystems, few animal groups became extinct (survivors included large forms such as crocodyliforms and champsosaurs), because such communities rely less directly on food from living plants and more on detritus washed in from the land, protecting them from extinction. Modern crocodilians can live as scavengers and survive for months without food, and their young are small, grow slowly, and feed largely on invertebrates and dead organisms for their first few years. These characteristics have been linked to crocodilian survival at the end of the Cretaceous. Similar, but more complex, patterns have been found in the oceans. Extinction was more severe among animals living in the water column than among animals living on or in the sea floor. Animals in the water column are almost entirely dependent on primary production from living phytoplankton, while animals on the ocean floor always or sometimes feed on detritus. Coccolithophorids and mollusks (including ammonites, rudists, freshwater snails, and mussels), and those organisms whose food chain included these shell builders, became extinct or suffered heavy losses. For example, it is thought that ammonites were the principal food of mosasaurs, a group of giant marine reptiles that became extinct at the boundary. The K–Pg extinction had a profound effect on the evolution of life on Earth. The elimination of dominant Cretaceous groups allowed other organisms to take their place, causing a remarkable amount of species diversification during the Paleogene Period. After the K–Pg extinction event, biodiversity required substantial time to recover, despite the existence of abundant vacant ecological niches. Evidence from the Salamanca Formation suggests that biotic recovery was more rapid in the Southern Hemisphere than in the Northern Hemisphere. Despite the massive loss of life inferred to have occurred during the extinction, and despite the number of geologic formations worldwide that span the boundary, only a few fossil sites contain direct evidence of the mass mortality that occurred exactly at the K–Pg boundary. 
These include the Tanis site of the Hell Creek Formation in North Dakota, USA, which contains a high number of well-preserved fossils that appear to have been buried in a catastrophic flood event that was likely caused by the impact. Another important site is the Hornerstown Formation in New Jersey, USA, which has a prominent layer at the K–Pg boundary, known as the Main Fossiliferous Layer (MFL), containing a thanatocoenosis of disarticulated vertebrate fossils, which was likely also caused by a catastrophic flood from the impact. Microbiota The K–Pg boundary represents one of the most dramatic turnovers in the fossil record for various calcareous nannoplankton that formed the calcium deposits for which the Cretaceous is named. The turnover in this group is clearly marked at the species level. Statistical analysis of marine losses at this time suggests that the decrease in diversity was caused more by a sharp increase in extinctions than by a decrease in speciation. Major spatial differences existed in calcareous nannoplankton diversity patterns; in the Southern Hemisphere, the extinction was less severe and recovery occurred much faster than in the Northern Hemisphere. Following the extinction, survivor communities dominated for several hundred thousand years. The North Pacific acted as a diversity hotspot from which later nannoplankton communities radiated as they replaced survivor faunas across the globe. The K–Pg boundary record of dinoflagellates is not so well understood, mainly because only microbial cysts provide a fossil record, and not all dinoflagellate species have cyst-forming stages, which likely causes diversity to be underestimated. Recent studies indicate that there were no major shifts in dinoflagellates through the boundary layer. There were blooms of the taxa Thoracosphaera operculata and Braarudosphaera bigelowii at the boundary. Radiolaria have left a geological record since at least Ordovician times, and their mineral fossil skeletons can be tracked across the K–Pg boundary. There is no evidence of mass extinction of these organisms, and there is support for high productivity of these species in southern high latitudes as a result of cooling temperatures in the early Paleocene. Approximately 46% of diatom species survived the transition from the Cretaceous to the Upper Paleocene, a significant turnover in species but not a catastrophic extinction. The occurrence of planktonic foraminifera across the K–Pg boundary has been studied since the 1930s. Research spurred by the possibility of an impact event at the K–Pg boundary resulted in numerous publications detailing planktonic foraminiferal extinction at the boundary; there is ongoing debate between groups that think the evidence indicates substantial extinction of these species at the K–Pg boundary, and those who think the evidence supports a gradual extinction through the boundary. There is strong evidence that local conditions heavily influenced diversity changes in planktonic foraminifera. Low and mid-latitude communities of planktonic foraminifera experienced high extinction rates, while high latitude faunas were relatively unaffected. Numerous species of benthic foraminifera became extinct during the event, presumably because they depend on organic debris for nutrients, while biomass in the ocean is thought to have decreased. As the marine microbiota recovered, it is thought that increased speciation of benthic foraminifera resulted from the increase in food sources. 
However, in some areas, such as Texas, benthic foraminifera show no sign of any major extinction event. Phytoplankton recovery in the early Paleocene provided the food source to support large benthic foraminiferal assemblages, which are mainly detritus-feeding. Ultimate recovery of the benthic populations occurred over several stages lasting several hundred thousand years into the early Paleocene. Marine invertebrates There is significant variation in the fossil record as to the extinction rate of marine invertebrates across the K–Pg boundary. The apparent rate is influenced by a lack of fossil records rather than by actual extinctions. Ostracods, a class of small crustaceans that were prevalent in the upper Maastrichtian, left fossil deposits in a variety of locations. A review of these fossils shows that ostracod diversity was lower in the Paleocene than at any other time in the Cenozoic. Current research cannot ascertain whether the extinctions occurred prior to, or during, the boundary interval. Ostracods that were heavily sexually selected were more vulnerable to extinction, and ostracod sexual dimorphism was significantly rarer following the mass extinction. Among decapods, extinction patterns were highly heterogeneous and cannot be neatly attributed to any particular factor. Decapods that inhabited the Western Interior Seaway were especially hard-hit, while other regions of the world's oceans were refugia that increased chances of survival into the Paleocene. Among retroplumid crabs, the genus Costacopluma was a notable survivor. Approximately 60% of late-Cretaceous scleractinian coral genera failed to cross the K–Pg boundary into the Paleocene. Further analysis of the coral extinctions shows that approximately 98% of colonial species, ones that inhabit warm, shallow tropical waters, became extinct. The solitary corals, which generally do not form reefs and inhabit colder and deeper (below the photic zone) areas of the ocean, were less impacted by the K–Pg boundary. Colonial coral species rely upon symbiosis with photosynthetic algae, which collapsed due to the events surrounding the K–Pg boundary, but the use of data from coral fossils to support the K–Pg extinction and subsequent Paleocene recovery must be weighed against the changes that occurred in coral ecosystems through the K–Pg boundary. Most species of brachiopods, a small phylum of marine invertebrates, survived the K–Pg extinction event and diversified during the early Paleocene. The number of bivalve genera diminished significantly after the K–Pg boundary. Entire groups of bivalves, including rudists (reef-building clams) and inoceramids (giant relatives of modern scallops), became extinct at the K–Pg boundary, with the gradual extinction of most inoceramid bivalves beginning well before the K–Pg boundary. Deposit feeders were the most common bivalves in the catastrophe's aftermath. Abundance was not a factor that affected whether a bivalve taxon went extinct, according to evidence from North America. Veneroid bivalves developed deeper burrowing habits as the recovery from the crisis ensued. Except for nautiloids (represented by the modern order Nautilida) and coleoids (which had already diverged into modern octopodes, squids, and cuttlefish), all other species of the molluscan class Cephalopoda became extinct at the K–Pg boundary. These included the ecologically significant belemnoids, as well as the ammonoids, a group of highly diverse, numerous, and widely distributed shelled cephalopods. 
The extinction of belemnites enabled surviving cephalopod clades to fill their niches. Ammonite genera became extinct at or near the K–Pg boundary; there was a smaller and slower extinction of ammonite genera prior to the boundary associated with a late Cretaceous marine regression, and a small, gradual reduction in ammonite diversity occurred throughout the very late Cretaceous. Researchers have pointed out that the reproductive strategy of the surviving nautiloids, which rely upon fewer and larger eggs, played a role in their survival through the extinction event relative to their ammonoid counterparts. The ammonoids utilized a planktonic strategy of reproduction (numerous eggs and planktonic larvae), which would have been devastated by the K–Pg extinction event. Additional research has shown that subsequent to this elimination of ammonoids from the global biota, nautiloids began an evolutionary radiation into shell shapes and complexities theretofore known only from ammonoids. Approximately 35% of echinoderm genera became extinct at the K–Pg boundary, although taxa that thrived in low-latitude, shallow-water environments during the late Cretaceous had the highest extinction rate. Mid-latitude, deep-water echinoderms were much less affected at the K–Pg boundary. The pattern of extinction points to habitat loss, specifically the drowning of carbonate platforms, the shallow-water reefs in existence at that time, by the extinction event. Atelostomatans were affected by the Lilliput effect. Terrestrial invertebrates Insect damage to the fossilized leaves of flowering plants from fourteen sites in North America was used as a proxy for insect diversity across the K–Pg boundary and analyzed to determine the rate of extinction. Researchers found that Cretaceous sites, prior to the extinction event, had rich plant and insect-feeding diversity. During the early Paleocene, flora were relatively diverse with little predation from insects, even 1.7 million years after the extinction event. Studies of the size of the ichnotaxon Naktodemasis bowni, produced by either cicada nymphs or beetle larvae, over the course of the K–Pg transition show that the Lilliput effect occurred in terrestrial invertebrates as a result of the extinction event. The extinction event produced major changes in Paleogene insect communities. Many groups of ants were present in the Cretaceous, but in the Eocene ants became dominant and diverse, with larger colonies. Butterflies diversified as well, perhaps to take the place of leaf-eating insects wiped out by the extinction. The advanced mound-building termites, Termitidae, also appear to have risen in importance. Fish There are fossil records of jawed fishes across the K–Pg boundary, which provide good evidence of extinction patterns of these classes of marine vertebrates. While the deep-sea realm was able to remain seemingly unaffected, there was an equal loss between the open marine apex predators and the durophagous demersal feeders on the continental shelf. Within cartilaginous fish, approximately 7 out of the 41 families of neoselachians (modern sharks, skates, and rays) disappeared after this event, and batoids (skates and rays) lost nearly all identifiable species, while more than 90% of teleost fish (bony fish) families survived. In the Maastrichtian age, 28 shark families and 13 batoid families thrived, of which 25 and 9, respectively, survived the K–T boundary event. Forty-seven of all neoselachian genera cross the K–T boundary, with 85% being sharks. 
Batoids display, at 15%, a comparably low survival rate. Among elasmobranchs, those species that inhabited higher latitudes and lived pelagic lifestyles were more likely to survive, whereas epibenthic lifestyles and durophagy were strongly associated with the likelihood of perishing during the extinction event. There is evidence of a mass extinction of bony fishes at a fossil site immediately above the K–Pg boundary layer on Seymour Island near Antarctica, apparently precipitated by the K–Pg extinction event; the marine and freshwater environments of fishes mitigated the environmental effects of the extinction event. The result was Patterson's Gap, a period of decreased acanthomorph diversity in the earliest part of the Cenozoic, although acanthomorphs diversified rapidly after the extinction. Teleost fish diversified explosively after the mass extinction, filling the niches left vacant by the extinction. Groups appearing in the Paleocene and Eocene epochs include billfish, tunas, eels, and flatfish. Amphibians There is limited evidence for extinction of amphibians at the K–Pg boundary. A study of fossil vertebrates across the K–Pg boundary in Montana concluded that no species of amphibian became extinct. Yet there are several species of Maastrichtian amphibian, not included as part of this study, which are unknown from the Paleocene. These include the frog Theatonius lancensis and the albanerpetontid Albanerpeton galaktion; therefore, some amphibians do seem to have become extinct at the boundary. The relatively low levels of extinction seen among amphibians probably reflect the low extinction rates seen in freshwater animals. Following the mass extinction, frogs radiated substantially, with 88% of modern anuran diversity being traced back to three lineages of frogs that evolved after the cataclysm. Reptiles Choristoderes The choristoderes (a group of semi-aquatic diapsids of uncertain position) survived across the K–Pg boundary, subsequently becoming extinct in the Miocene. The palatal teeth of the gharial-like choristodere genus Champsosaurus suggest that there were dietary changes among the various species across the K–Pg event. Turtles More than 80% of Cretaceous turtle species passed through the K–Pg boundary. All six turtle families in existence at the end of the Cretaceous survived into the Paleogene and are represented by living species. Analysis of turtle survivorship in the Hell Creek Formation shows a minimum of 75% of turtle species survived. Following the extinction event, turtle diversity exceeded pre-extinction levels in the Danian of North America, although in South America it remained diminished. European turtles likewise recovered rapidly following the mass extinction. Lepidosauria The rhynchocephalians, which were a globally distributed and diverse group of lepidosaurians during the early Mesozoic, had begun to decline by the mid-Cretaceous, although they remained successful in the Late Cretaceous of southern South America. They are represented today by a single species, the tuatara (Sphenodon punctatus), found in New Zealand. Outside of New Zealand, one rhynchocephalian is known to have crossed the K–Pg boundary, Kawasphenodon peligrensis, known from the earliest Paleocene (Danian) of Patagonia. The order Squamata, comprising lizards and snakes, first diversified during the Jurassic and continued to diversify throughout the Cretaceous. Squamates are currently the most successful and diverse group of living reptiles, with more than 10,000 extant species. 
The only major group of terrestrial lizards to go extinct at the end of the Cretaceous was the polyglyphanodontians, a diverse group of mainly herbivorous lizards known predominantly from the Northern Hemisphere. The mosasaurs, a diverse group of large predatory marine reptiles, also became extinct. Fossil evidence indicates that squamates generally suffered very heavy losses in the K–Pg event, only recovering 10 million years after it. The extinction of Cretaceous lizards and snakes may have led to the evolution of modern groups such as iguanas, monitor lizards, and boas. The diversification of crown group snakes has been linked to the biotic recovery in the aftermath of the K–Pg extinction event. Pan-Gekkotans weathered the extinction event well, with multiple lineages likely surviving. Marine reptiles ∆44/42Ca values indicate that prior to the mass extinction, marine reptiles at the top of food webs were feeding on only one source of calcium, suggesting their populations exhibited heightened vulnerability to extinctions at the terminus of the Cretaceous. Along with the aforementioned mosasaurs, plesiosaurs, represented by the families Elasmosauridae and Polycotylidae, became extinct during the event. The ichthyosaurs had disappeared from the fossil record tens of millions of years prior to the K–Pg extinction event. Crocodyliforms Ten families of crocodilians or their close relatives are represented in the Maastrichtian fossil records, of which five died out prior to the K–Pg boundary. Five families have both Maastrichtian and Paleocene fossil representatives. All of the surviving families of crocodyliforms inhabited freshwater and terrestrial environments—except for the Dyrosauridae, which lived in freshwater and marine locations. Approximately 50% of crocodyliform representatives survived across the K–Pg boundary, the only apparent trend being that no large crocodiles survived. Crocodyliform survivability across the boundary may have resulted from their aquatic niche and ability to burrow, which reduced susceptibility to negative environmental effects at the boundary. Jouve and colleagues suggested in 2008 that juvenile marine crocodyliforms lived in freshwater environments as do modern marine crocodile juveniles, which would have helped them survive where other marine reptiles became extinct; freshwater environments were not so strongly affected by the K–Pg extinction event as marine environments were. Among the terrestrial clade Notosuchia, only the family Sebecidae survived; the exact reasons for this pattern are not known. Sebecids were large terrestrial predators; they are known from the Eocene of Europe and survived in South America into the Miocene. Tethysuchians radiated explosively after the extinction event. Pterosaurs Two families of pterosaurs, Azhdarchidae and Nyctosauridae, were definitely present in the Maastrichtian, and they likely became extinct at the K–Pg boundary. Several other pterosaur lineages may have been present during the Maastrichtian, such as the ornithocheirids, pteranodontids, a possible tapejarid, a possible thalassodromid, and a basal toothed taxon of uncertain affinities, though they are represented by fragmentary remains that are difficult to assign to any given group. 
While this was occurring, modern birds were undergoing diversification; traditionally it was thought that they replaced archaic birds and pterosaur groups, possibly due to direct competition, or that they simply filled empty niches, but there is no correlation between pterosaur and avian diversities that conclusively supports a competition hypothesis, and small pterosaurs were present in the Late Cretaceous. At least some niches previously held by birds were reclaimed by pterosaurs prior to the K–Pg event. Non-avian dinosaurs Scientists agree that all non-avian dinosaurs became extinct at the K–Pg boundary. The dinosaur fossil record has been interpreted to show both a decline in diversity and no decline in diversity during the last few million years of the Cretaceous, and it may be that the quality of the dinosaur fossil record is simply not good enough to permit researchers to distinguish between the options. There is no evidence that late Maastrichtian non-avian dinosaurs could burrow, swim, or dive, which suggests they were unable to shelter themselves from the worst parts of any environmental stress that occurred at the K–Pg boundary. It is possible that small dinosaurs (other than birds) did survive, but they would have been deprived of food, as herbivorous dinosaurs would have found plant material scarce and carnivores would have quickly found prey in short supply. The growing consensus about the endothermy of dinosaurs (see dinosaur physiology) helps to explain their complete extinction, in contrast with the survival of their close relatives, the crocodilians. Ectothermic ("cold-blooded") crocodiles have very limited needs for food (they can survive several months without eating), while endothermic ("warm-blooded") animals of similar size need much more food to sustain their faster metabolism. Thus, under the circumstances of food chain disruption previously mentioned, non-avian dinosaurs died out, while some crocodiles survived. In this context, the survival of other endothermic animals, such as some birds and mammals, could be due, among other reasons, to their smaller needs for food, related to their small size at the extinction epoch. Prolonged cold is unlikely to have been a reason for the extinction of non-avian dinosaurs given the adaptations of many dinosaurs to cold environments. Whether the extinction occurred gradually or suddenly has been debated, as both views have support from the fossil record. A highly informative sequence of dinosaur-bearing rocks from the K–Pg boundary is found in western North America, particularly the late Maastrichtian-age Hell Creek Formation of Montana. Comparison with the older Judith River Formation (Montana) and Dinosaur Park Formation (Alberta), which both date from approximately 75 Ma, provides information on the changes in dinosaur populations over the last 10 million years of the Cretaceous. These fossil beds are geographically limited, covering only part of one continent. The middle–late Campanian formations show a greater diversity of dinosaurs than any other single group of rocks. The late Maastrichtian rocks contain the largest members of several major clades: Tyrannosaurus, Ankylosaurus, Pachycephalosaurus, Triceratops, and Torosaurus, which suggests food was plentiful immediately prior to the extinction. A 2010 study of 29 fossil sites in the Catalan Pyrenees of Europe supports the view that dinosaurs there had great diversity until the asteroid impact, with more than 100 living species. 
More recent research indicates that this figure is obscured by taphonomic biases and the sparsity of the continental fossil record. The results of this study, which were based on estimated real global biodiversity, showed that between 628 and 1,078 non-avian dinosaur species were alive at the end of the Cretaceous and became abruptly extinct during the Cretaceous–Paleogene extinction event. Alternatively, an interpretation based on the fossil-bearing rocks along the Red Deer River in Alberta, Canada, supports the gradual extinction of non-avian dinosaurs; in the layers there spanning the last 10 million years of the Cretaceous, the number of dinosaur species seems to have decreased from about 45 to approximately 12. Other scientists have made the same assessment following their research. Several researchers support the existence of Paleocene non-avian dinosaurs. Evidence of this existence is based on the discovery of dinosaur remains in the Hell Creek Formation up to above and 40,000 years later than the K–Pg boundary. Pollen samples recovered near a fossilized hadrosaur femur found in the Ojo Alamo Sandstone at the San Juan River in Colorado indicate that the animal lived during the Cenozoic, approximately (about 1 million years after the K–Pg extinction event). If their existence past the K–Pg boundary can be confirmed, these hadrosaurids would be considered a dead clade walking. The scientific consensus is that these fossils were eroded from their original locations and then re-buried in much later sediments (also known as reworked fossils). Birds Most paleontologists regard birds as the only surviving dinosaurs (see Origin of birds). It is thought that all theropods outside the crown group Aves became extinct, including then-flourishing bird groups such as the enantiornithines and hesperornithiforms. Several analyses of bird fossils show divergence of species prior to the K–Pg boundary, and that duck, chicken, and ratite bird relatives coexisted with non-avian dinosaurs. Large collections of bird fossils representing a range of different species provide definitive evidence for the persistence of archaic birds to within 300,000 years of the K–Pg boundary. The absence of these birds in the Paleogene is evidence that a mass extinction of archaic birds took place at the boundary, although Qinornis from China has been suggested to be a more basal member of Ornithurae which survived into the Paleocene. Only a small fraction of ground- and water-dwelling Cretaceous bird species survived the impact, giving rise to today's birds. The only bird group known for certain to have survived the K–Pg boundary is the Aves. Avians may have been able to survive the extinction as a result of their abilities to dive, swim, or seek shelter in water and marshlands. Many avian species can build burrows, or nest in tree holes or termite nests, all of which provided shelter from the environmental effects at the K–Pg boundary. Long-term survival past the boundary was assured as a result of filling ecological niches left empty by the extinction of non-avian dinosaurs. Based on molecular sequencing and fossil dating, many species of birds (the Neoaves group in particular) appeared to radiate after the K–Pg boundary. The open niche space and relative scarcity of predators following the K–Pg extinction allowed for adaptive radiation of various avian groups. 
Ratites, for example, rapidly diversified in the early Paleogene and are believed to have convergently developed flightlessness at least three to six times, often fulfilling the niche space for large herbivores once occupied by non-avian dinosaurs. Mammals Mammalian species began diversifying approximately 30 million years prior to the K–Pg boundary. Diversification of mammals stalled across the boundary. All major Late Cretaceous mammalian lineages, including monotremes (egg-laying mammals), multituberculates, metatherians (which include modern marsupials), eutherians (which include modern placentals), meridiolestidans, and gondwanatheres survived the K–Pg extinction event, although they suffered losses. In particular, metatherians largely disappeared from North America, and the Asian deltatheroidans became extinct (aside from the lineage leading to Gurbanodelta). In the Hell Creek beds of North America, at least half of the ten known multituberculate species and all eleven metatherian species are not found above the boundary. Multituberculates in Europe and North America survived relatively unscathed and quickly bounced back in the Paleocene, but Asian forms were devastated, never again to represent a significant component of mammalian fauna. A recent study indicates that metatherians suffered the heaviest losses at the K–Pg event, followed by multituberculates, while eutherians recovered most quickly. K–Pg boundary mammalian species were generally small, comparable in size to rats; this small size would have helped them find shelter in protected environments. It is postulated that some early monotremes, marsupials, and placentals were semiaquatic or burrowing, as there are multiple mammalian lineages with such habits today. Any burrowing or semiaquatic mammal would have had additional protection from K–Pg boundary environmental stresses. After the K–Pg extinction, mammals evolved to fill the niches left vacant by the dinosaurs. Some research indicates that mammals did not explosively diversify across the K–Pg boundary, despite the ecological niches made available by the extinction of dinosaurs. Several mammalian orders have been interpreted as diversifying immediately after the K–Pg boundary, including Chiroptera (bats) and Cetartiodactyla (a diverse group that today includes whales, dolphins, and even-toed ungulates), although recent research concludes that only marsupial orders diversified soon after the K–Pg boundary. However, morphological diversification rates among eutherians after the extinction event were three times those from before it. Also significant is that, within mammalian genera, new species were approximately 9.1% larger after the K–Pg boundary. After about 700,000 years, some mammals had reached 50 kilograms (110 pounds), a 100-fold increase over the weight of those which survived the extinction. It is thought that body sizes of placental mammalian survivors evolutionarily increased first, allowing them to fill niches after the extinctions, with brain sizes increasing later in the Eocene. Terrestrial plants Plant fossils illustrate the reduction in plant species across the K–Pg boundary. There is overwhelming evidence of global disruption of plant communities at the K–Pg boundary. Extinctions are seen both in studies of fossil pollen and in studies of fossil leaves. In North America, the data suggest massive devastation and mass extinction of plants at the K–Pg boundary sections, although there were substantial megafloral changes before the boundary. 
In North America, approximately 57% of plant species became extinct. In high Southern Hemisphere latitudes, such as New Zealand and Antarctica, the mass die-off of flora caused no significant turnover in species, but dramatic and short-term changes in the relative abundance of plant groups. European flora was also less affected, most likely due to its distance from the site of the Chicxulub impact. In northern Alaska and the Anadyr-Koryak region of Russia, the flora was minimally impacted. Another line of evidence of a major floral extinction is that the divergence rate of subviral pathogens (viroids) of angiosperms sharply decreased, which indicates an enormous reduction in the number of flowering plants. However, phylogenetic evidence shows no mass angiosperm extinction. Due to the wholesale destruction of plants at the K–Pg boundary, there was a proliferation of saprotrophic organisms, such as fungi, that do not require photosynthesis and use nutrients from decaying vegetation. The dominance of fungal species lasted only a few years, while the atmosphere was clearing and plenty of organic matter remained to feed on. Once the atmosphere cleared, photosynthetic organisms returned – initially ferns and other ground-level plants. In some regions, the Paleocene recovery of plants began with recolonizations by fern species, represented as a fern spike in the geologic record; this same pattern of fern recolonization was observed after the 1980 Mount St. Helens eruption. Just two species of fern appear to have dominated the landscape for centuries after the event. In the sediments below the K–Pg boundary the dominant plant remains are angiosperm pollen grains, but the boundary layer contains little pollen and is dominated by fern spores. More usual pollen levels gradually resume above the boundary layer. This is reminiscent of areas blighted by modern volcanic eruptions, where the recovery is led by ferns, which are later replaced by larger angiosperm plants. In North American terrestrial sequences, the extinction event is best represented by the marked discrepancy between the rich and relatively abundant late-Maastrichtian pollen record and the post-boundary fern spike. Polyploidy appears to have enhanced the ability of flowering plants to survive the extinction, probably because the additional copies of the genome such plants possessed allowed them to more readily adapt to the rapidly changing environmental conditions that followed the impact. Beyond its extinction impacts, the event also caused more general changes in flora, such as giving rise to neotropical rainforest biomes like Amazonia, transforming the species composition and structure of local forests during the roughly 6 million years it took plant diversity to recover to former levels. Fungi While it appears that many fungi were wiped out at the K–Pg boundary, there is some evidence that some fungal species thrived in the years after the extinction event. Microfossils from that period indicate a great increase in fungal spores, long before the resumption of plentiful fern spores in the recovery after the impact. Monoporisporites and hyphae are almost the only microfossils found for a short span during and after the iridium boundary. These saprophytes would not need sunlight, allowing them to survive during a period when the atmosphere was likely clogged with dust and sulfur aerosols. 
A proliferation of fungi has occurred after several extinction events, including the Permian–Triassic extinction event, the largest known mass extinction in Earth's history, with up to 96% of all species suffering extinction. Dating A 1991 study of fossil leaves dated the extinction-associated freezing to early June. A later study shifted the dating to spring, based on osteological evidence and stable isotope records from well-preserved bones of acipenseriform fishes. The study noted that "the palaeobotanical identities, taphonomic inferences and stratigraphic assumptions" for the June dating have since all been refuted. DePalma et al. (2021) opted for a spring–summer range, but During et al. (2024) reevaluated and criticized this study for its lack of primary data, an unidentified laboratory for the analyses, methods insufficient for accurate replication, and problematic isotopic graphs with irregular data and error bars. A study of fossilized fish bones found at Tanis in North Dakota suggests that the Cretaceous–Paleogene mass extinction happened during the Northern Hemisphere spring. Duration The extinction's rapidity is a controversial issue because some researchers think the extinction was the result of a sudden event, while others argue that it took place over a long period. The exact length of time is difficult to determine because of the Signor–Lipps effect, where the fossil record is so incomplete that most extinct species probably died out long after the most recent fossil that has been found. Scientists have also found very few continuous beds of fossil-bearing rock that cover a time range from several million years before the K–Pg extinction to several million years after it. The sedimentation rate and thickness of K–Pg clay from three sites suggest rapid extinction, perhaps over a period of less than 10,000 years. At one site in the Denver Basin of Colorado, after the K–Pg boundary layer was deposited, the fern spike lasted approximately 1,000 years, and no more than 71,000 years; at the same location, the earliest appearance of Cenozoic mammals occurred after approximately 185,000 years, and no more than 570,000 years, "indicating rapid rates of biotic extinction and initial recovery in the Denver Basin during this event." Analysis of the carbon cycle disruptions caused by the impact constrains them to an interval of just 5,000 years. Models presented at the annual meeting of the American Geophysical Union demonstrated that the period of global darkness following the Chicxulub impact would have persisted in the Hell Creek Formation for nearly two years. Causes Chicxulub impact Evidence for impact In 1980, a team of researchers consisting of Nobel Prize-winning physicist Luis Alvarez, his son, geologist Walter Alvarez, and chemists Frank Asaro and Helen Michel discovered that sedimentary layers found all over the world at the Cretaceous–Paleogene boundary contain a concentration of iridium many times greater than normal (30, 160, and 20 times in three sections originally studied). Iridium is extremely rare in Earth's crust because it is a siderophile element which mostly sank along with iron into Earth's core during planetary differentiation. By contrast, iridium is more abundant in comets and asteroids. Because of this, the Alvarez team suggested that an asteroid struck the Earth at the time of the K–Pg boundary. 
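To make the scale of that anomaly concrete, the short sketch below converts the enrichment factors quoted above into illustrative boundary-clay concentrations; the background crustal iridium value used here is an assumed placeholder for illustration, not a number taken from the original study.

```python
# Illustrative sketch of how an iridium anomaly is expressed as an enrichment
# factor relative to the local background concentration.
# ASSUMPTION: the background value below is an illustrative placeholder, not a
# figure from the Alvarez study; the factors (30x, 160x, 20x) are those quoted
# above for the three sections originally studied.

BACKGROUND_IR_PPB = 0.3  # assumed local background iridium, parts per billion

enrichment_factors = {"section 1": 30, "section 2": 160, "section 3": 20}

for section, factor in enrichment_factors.items():
    boundary_ppb = factor * BACKGROUND_IR_PPB
    print(f"{section}: {factor}x background -> ~{boundary_ppb:.1f} ppb in the boundary clay")
```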
There were earlier speculations on the possibility of an impact event, but this was the first hard evidence, and since then, studies have continued to demonstrate elevated iridium levels in association with the K–Pg boundary. This hypothesis was viewed as radical when first proposed, but additional evidence soon emerged. The boundary clay was found to be full of minute spherules of rock, crystallized from droplets of molten rock formed by the impact. Shocked quartz and other minerals were also identified in the K–Pg boundary. The identification of giant tsunami beds along the Gulf Coast and the Caribbean provided more evidence, and suggested that the impact might have occurred nearby, as did the discovery that the K–Pg boundary became thicker in the southern United States, with meter-thick beds of debris occurring in northern New Mexico. A K–Pg boundary "cocktail" of microfossils, lithic fragments, and impact-derived material deposited by gigantic sediment gravity flows was discovered in the Caribbean that served to demarcate the impact. Further research identified the giant Chicxulub crater, buried under Chicxulub on the coast of Yucatán, as the source of the K–Pg boundary clay. Identified in 1990 based on work by geophysicist Glen Penfield in 1978, the crater is oval, with an average diameter of roughly , about the size calculated by the Alvarez team. In March 2010, an international panel of 41 scientists reviewed 20 years of scientific literature and endorsed the asteroid hypothesis, specifically the Chicxulub impact, as the cause of the extinction, ruling out other theories such as massive volcanism. They determined that an asteroid had hurtled into Earth at Chicxulub on Mexico's Yucatán Peninsula. Additional evidence for the impact event is found at the Tanis site in southwestern North Dakota, United States. Tanis is part of the heavily studied Hell Creek Formation, a group of rocks spanning four states in North America renowned for many significant fossil discoveries from the Upper Cretaceous and lower Paleocene. Tanis is an extraordinary and unique site because it appears to record the events from the first minutes until a few hours after the impact of the giant Chicxulub asteroid in extreme detail. Amber from the site has been reported to contain microtektites matching those of the Chicxulub impact event. Some researchers question the interpretation of the findings at the site or are skeptical of the team leader, Robert DePalma, who had not yet received his Ph.D. in geology at the time of the discovery and whose commercial activities have been regarded with suspicion. Furthermore, indirect evidence of an asteroid impact as the cause of the mass extinction comes from patterns of turnover in marine plankton. Some critics of the impact theory have argued that the impact preceded the mass extinction by about 300,000 years and thus was not its cause. However, in a 2013 paper, Paul Renne of the Berkeley Geochronology Center dated the impact at years ago, based on argon–argon dating. He further posits that the mass extinction occurred within 32,000 years of this date. The dating of hydrothermally altered structures around the crater is consistent with this timeline. In 2007, it was proposed that the impactor belonged to the Baptistina family of asteroids. This link has been doubted, though not disproved, in part because of a lack of observations of the asteroid and its family. It was reported in 2009 that 298 Baptistina does not share the chemical signature of the K–Pg impactor. 
Further, a 2011 Wide-field Infrared Survey Explorer (WISE) study of reflected light from the asteroids of the family estimated their break-up at 80 Ma, giving them insufficient time to shift orbits and impact Earth by 66 Ma. Effects of impact The collision would have released the same energy as —more than a billion times the energy of the atomic bombings of Hiroshima and Nagasaki. The Chicxulub impact caused a global catastrophe. Some of the phenomena were brief occurrences immediately following the impact, but there were also long-term geochemical and climatic disruptions that devastated the ecology. The scientific consensus is that the asteroid impact at the K–Pg boundary left megatsunami deposits and sediments around the area of the Caribbean Sea and Gulf of Mexico, from the colossal waves created by the impact. These deposits have been identified in the La Popa basin in northeastern Mexico, platform carbonates in northeastern Brazil, in Atlantic deep-sea sediments, and in the form of the thickest-known layer of graded sand deposits, around , in the Chicxulub crater itself, directly above the shocked granite ejecta. The megatsunami has been estimated at more than tall, as the asteroid fell into relatively shallow seas; in deep seas it would have been tall. Fossiliferous sedimentary rocks deposited during the K–Pg impact have been found in the Gulf of Mexico area, including tsunami wash deposits carrying remains of a mangrove-type ecosystem, indicating that water in the Gulf of Mexico sloshed back and forth repeatedly after the impact; dead fish left in these shallow waters were not disturbed by scavengers. The re-entry of ejecta into Earth's atmosphere included a brief (hours-long) but intense pulse of infrared radiation, cooking exposed organisms. This is debated, with opponents arguing that local ferocious fires, probably limited to North America, fall short of global firestorms. This is the "Cretaceous–Paleogene firestorm debate". A paper in 2013 by a prominent modeler of nuclear winter suggested that, based on the amount of soot in the global debris layer, the entire terrestrial biosphere might have burned, implying a global soot-cloud blocking out the sun and creating an impact winter effect. If widespread fires occurred this would have exterminated the most vulnerable organisms that survived the period immediately after the impact. Experimental analysis suggests that any impact-induced wildfires were insufficient on their own to cause plant extinctions, and much of the thermal radiation generated by the impact would have been absorbed by the atmosphere and ejecta in the lower atmosphere. Aside from the hypothesized fire effects on reduction of insolation, the impact would have created a dust cloud that blocked sunlight for up to a year, inhibiting photosynthesis. The asteroid hit an area of gypsum and anhydrite rock containing a large amount of combustible hydrocarbons and sulfur, much of which was vaporized, thereby injecting sulfuric acid aerosols into the stratosphere, which might have reduced sunlight reaching the Earth's surface by more than 50%. Fine silicate dust also contributed to the intense impact winter, as did soot from wildfires. The climatic forcing of this impact winter was about 100 times more potent than that of the 1991 eruption of Mount Pinatubo. According to models of the Hell Creek Formation, the onset of global darkness would have reached its maximum in only a few weeks and likely lasted upwards of 2 years. 
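Returning briefly to the energy comparison made at the start of this passage, the sketch below shows the order-of-magnitude arithmetic behind a "billions of Hiroshima bombs" figure; both inputs are assumptions chosen for illustration (an impact energy on the order of 10^23 joules, a magnitude often quoted for Chicxulub, and a bomb yield of about 15 kilotons of TNT) and are not values taken from this article.

```python
# Order-of-magnitude sketch of the energy comparison quoted earlier in this
# section. Both inputs are ASSUMED illustrative values, not figures from the
# article itself.

JOULES_PER_KILOTON_TNT = 4.184e12                  # standard definition of 1 kt TNT
hiroshima_yield_j = 15 * JOULES_PER_KILOTON_TNT    # assumed ~15 kt bomb yield

assumed_impact_energy_j = 4e23                     # assumed order-of-magnitude impact energy

ratio = assumed_impact_energy_j / hiroshima_yield_j
print(f"impact energy / Hiroshima yield = {ratio:.1e}")  # on the order of billions
```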
Freezing temperatures probably lasted for at least three years. At the Brazos section, the sea surface temperature dropped as much as for decades after the impact. It would have taken at least ten years for such aerosols to dissipate, which would account for the extinction of plants and phytoplankton, and subsequently of herbivores and their predators. Creatures whose food chains were based on detritus would have a reasonable chance of survival. In 2016, a scientific drilling project obtained deep rock-core samples from the peak ring around the Chicxulub impact crater. The discoveries confirmed that the rock comprising the peak ring had been shocked by immense pressure and melted in just minutes from its usual state into its present form. Unlike sea-floor deposits, the peak ring was made of granite originating much deeper in the earth, which had been ejected to the surface by the impact. Gypsum is a sulfate-containing rock usually present in the shallow seabed of the region; it had been almost entirely removed, vaporized into the atmosphere. The impactor was large enough to create a peak ring, to melt, shock, and eject deep granite, to create colossal water movements, and to eject an immense quantity of vaporized rock and sulfates into the atmosphere, where they would have persisted for several years. This worldwide dispersal of dust and sulfates would have affected climate catastrophically, led to large temperature drops, and devastated the food chain. The release of large quantities of sulfur aerosols into the atmosphere as a consequence of the impact would also have caused acid rain. Oceans acidified as a result. This decrease in ocean pH would kill many organisms that grow shells of calcium carbonate. The heating of the atmosphere during the impact itself may have also generated nitric acid rain through the production of nitrogen oxides and their subsequent reaction with water vapor. After the impact winter, the Earth entered a period of global warming as a result of the vaporization of carbonates into carbon dioxide, whose long residence time in the atmosphere ensured significant warming would occur after more short-lived cooling gases dissipated. Carbon monoxide concentrations also increased and caused particularly devastating global warming because of the consequent increases in tropospheric ozone and methane concentrations. The impact's injection of water vapor into the atmosphere also produced major climatic perturbations. The end-Cretaceous event is the only mass extinction definitively known to be associated with an impact, and other large extraterrestrial impacts, such as the Manicouagan Reservoir impact, do not coincide with any noticeable extinction events. Multiple impact event Other crater-like topographic features have also been proposed as impact craters formed in connection with the Cretaceous–Paleogene extinction. This suggests the possibility of near-simultaneous multiple impacts, perhaps from a fragmented asteroidal object similar to the Shoemaker–Levy 9 impact with Jupiter. In addition to the Chicxulub crater, there is the Boltysh crater in Ukraine, the Silverpit crater in the North Sea, possibly formed by bolide impact, and the controversial and much larger Shiva crater. Any other craters that might have formed in the Tethys Ocean would since have been obscured by the northward tectonic drift of Africa and India. 
Deccan Traps The Deccan Traps, which erupted close to the boundary between the Mesozoic and Cenozoic, have been cited as an alternate explanation for the mass extinction. Before 2000, arguments that the Deccan Traps flood basalts caused the extinction were usually linked to the view that the extinction was gradual, as the flood basalt events were thought to have started around 68 Mya and lasted more than 2 million years. The most recent evidence shows that the traps erupted over a period of only 800,000 years spanning the K–Pg boundary, and therefore may be responsible for the extinction and the delayed biotic recovery thereafter. The Deccan Traps could have caused extinction through several mechanisms, including the release of dust and sulfuric aerosols into the air, which might have blocked sunlight and thereby reduced photosynthesis in plants. In addition, the latest Cretaceous saw a rise in global temperatures; Deccan Traps volcanism resulted in carbon dioxide emissions that increased the greenhouse effect when the dust and aerosols cleared from the atmosphere. Plant fossils record a 250 ppm increase in carbon dioxide concentrations across the K-Pg boundary likely attributable to Deccan Traps activity. The increased carbon dioxide emissions also caused acid rain, evidenced by increased mercury deposition due to increased solubility of mercury compounds in more acidic water. Evidence for extinctions caused by the Deccan Traps includes the reduction in diversity of marine life when the climate near the K–Pg boundary increased in temperature. The temperature increased about three to four degrees very rapidly between 65.4 and 65.2 million years ago, which is very near the time of the extinction event. Not only did the climate temperature increase, but the water temperature decreased, causing a drastic decrease in marine diversity. Evidence from Tunisia indicates that marine life was deleteriously affected by a major period of increased warmth and humidity linked to a pulse of intense Deccan Traps activity, and that marine extinctions there began before the impact event. Charophyte declines in the Songliao Basin, China before the asteroid impact have been concluded to be connected to climate changes caused by Deccan Traps activity. In the years when the Deccan Traps hypothesis was linked to a slower extinction, Luis Alvarez (d. 1988) replied that paleontologists were being misled by sparse data. While his assertion was not initially well-received, later intensive field studies of fossil beds lent weight to his claim. Eventually, most paleontologists began to accept the idea that the mass extinctions at the end of the Cretaceous were largely or at least partly due to a massive Earth impact. Even Walter Alvarez acknowledged that other major changes might have contributed to the extinctions. More recent arguments against the Deccan Traps as an extinction cause include that the timeline of Deccan Traps activity and pulses of climate change has been found by some studies to be asynchronous, that palynological changes do not coincide with intervals of volcanism, and that many sites show climatic stability during the latest Maastrichtian and no sign of major disruptions caused by volcanism. Multiple modelling studies conclude that an impact event, not volcanism, fits best with available evidence of extinction patterns. Combining these theories, some geophysical models suggest that the impact contributed to the Deccan Traps. 
These models, combined with high-precision radiometric dating, suggest that the Chicxulub impact could have triggered some of the largest Deccan eruptions, as well as eruptions at active volcano sites anywhere on Earth. Maastrichtian sea-level regression There is clear evidence that sea levels fell in the final stage of the Cretaceous by more than at any other time in the Mesozoic era. In some Maastrichtian stage rock layers from various parts of the world, the later layers are terrestrial; earlier layers represent shorelines and the earliest layers represent seabeds. These layers do not show the tilting and distortion associated with mountain building, therefore the likeliest explanation is a regression, a drop in sea level. There is no direct evidence for the cause of the regression, but the currently accepted explanation is that the mid-ocean ridges became less active and sank under their own weight. A severe regression would have greatly reduced the continental shelf area, the most species-rich part of the sea, and therefore could have been enough to cause a marine mass extinction. This change would not have caused the extinction of the ammonites. The regression would also have caused climate changes, partly by disrupting winds and ocean currents and partly by reducing the Earth's albedo and increasing global temperatures. Marine regression also resulted in the loss of epeiric seas, such as the Western Interior Seaway of North America. The loss of these seas greatly altered habitats, removing coastal plains that ten million years before had been host to diverse communities such as are found in rocks of the Dinosaur Park Formation. Another consequence was an expansion of freshwater environments, since continental runoff now had longer distances to travel before reaching oceans. While this change was favorable to freshwater vertebrates, those that prefer marine environments, such as sharks, suffered. However, sea level fall as a cause of the extinction event is contradicted by other evidence, namely that sections which show no sign of marine regression still show evidence of a major drop in diversity. Multiple causes Proponents of multiple causation view the suggested single causes as either too small to produce the vast scale of the extinction, or not likely to produce its observed taxonomic pattern. In a review article, J. David Archibald and David E. Fastovsky discussed a scenario combining three major postulated causes: volcanism, marine regression, and extraterrestrial impact. In this scenario, terrestrial and marine communities were stressed by the changes in, and loss of, habitats. Dinosaurs, as the largest vertebrates, were the first affected by environmental changes, and their diversity declined. At the same time, particulate materials from volcanism cooled and dried areas of the globe. Then an impact event occurred, causing collapses in photosynthesis-based food chains, both in the already-stressed terrestrial food chains and in the marine food chains. Based on studies at Seymour Island in Antarctica, Sierra Petersen and colleagues argue that there were two separate extinction events near the Cretaceous–Paleogene boundary, with one correlating to Deccan Trap volcanism and one correlated with the Chicxulub impact. The team analyzed combined extinction patterns using a new clumped isotope temperature record from a hiatus-free, expanded K–Pg boundary section. 
They documented a 7.8±3.3 °C warming synchronous with the onset of Deccan Traps volcanism and a second, smaller warming at the time of meteorite impact. They suggested that local warming had been amplified due to the simultaneous disappearance of continental or sea ice. Intra-shell variability indicates a possible reduction in seasonality after Deccan eruptions began, continuing through the meteorite event. Species extinction at Seymour Island occurred in two pulses that coincide with the two observed warming events, directly linking the end-Cretaceous extinction at this site to both volcanic and meteorite events via climate change.
Physical sciences
Geological history
null
3968480
https://en.wikipedia.org/wiki/Disturbance%20%28ecology%29
Disturbance (ecology)
In ecology, a disturbance is a temporary change in environmental conditions that causes a pronounced change in an ecosystem. Disturbances often act quickly and with great effect to alter the physical structure or arrangement of biotic and abiotic elements. A disturbance can also occur over a long period of time and can impact the biodiversity within an ecosystem. Major ecological disturbances may include fires, flooding, storms, insect outbreaks and trampling. Earthquakes, various types of volcanic eruptions, tsunami, firestorms, impact events, climate change, and the devastating effects of human impact on the environment (anthropogenic disturbances) such as clearcutting, forest clearing and the introduction of invasive species can be considered major disturbances. Not only can invasive species have a profound effect on an ecosystem; naturally occurring species can also cause disturbance through their behavior. Disturbance forces can have profound immediate effects on ecosystems and can, accordingly, greatly alter the natural community's population size or species richness. Because of these impacts on populations, disturbance determines future shifts in dominance, with various species successively becoming dominant as their life-history characteristics, and associated life-forms, are expressed over time. Definition and types The scale of disturbance ranges from events as small as a single tree falling, to as large as a mass extinction. Many natural ecosystems experience periodic disturbance that may broadly fall into a cyclical pattern. Ecosystems that form under these conditions are often maintained by regular disturbance. Wetland ecosystems, for example, can be maintained by the movement of water through them and by periodic fires. Different types of disturbance events occur in different habitats and climates with different weather conditions. Natural fire disturbances, for example, occur more often in areas with a higher incidence of lightning and flammable biomass, such as longleaf pine ecosystems in the southeastern United States. Wildfires, droughts, floods, disease outbreaks, changes in hydrology, tornadoes and other extreme weather, landslides, and windstorms are all examples of natural disturbance events that may form a cyclical or periodic pattern over time. Other disturbances, such as those caused by humans, invasive species or impact events, can occur anywhere and are not necessarily cyclic. These disturbances can alter the trajectory of change within an ecosystem permanently. Extinction vortices may result from multiple disturbances or from a greater frequency of a single disturbance. Anthropogenic disturbance Logging, dredging, conversion of land to ranching or agriculture, mowing, and mining are examples of anthropogenic disturbance. Human activities have introduced disturbances into ecosystems worldwide on a large scale, resulting in widespread range expansion and rapid evolution of disturbance-adapted species. Agricultural practices create novel ecosystems, known as agroecosystems, which are colonized by plant species adapted to disturbance and which exert evolutionary pressure upon those species. Species adapted to anthropogenic disturbance are often known as weeds. Another example of anthropogenic disturbance is the controlled burning used by Native Americans to maintain fire-dependent ecosystems. These disturbances helped maintain stability and biodiversity in ecosystems, enhancing overall ecosystem health and functioning. 
Anthropogenic climate change is considered a major source of change in future successional trajectories of ecosystems. Effects Immediately after a disturbance there is a pulse of recruitment or regrowth under conditions of little competition for space or other resources. After the initial pulse, recruitment slows, since once an individual plant is established it is very difficult to displace. Because scale-dependent relationships are ubiquitous in ecology, the spatial scale modulates the effect of disturbance on natural communities. For example, seed dispersal and herbivory may decrease with distance from the edge of a burn. Consequently, plant communities in the interior areas of large fires respond differently than those in smaller fires. Although disturbance types vary across ecosystems, spatial scale likely influences ecological interactions and community recovery in all cases because organisms differ in dispersal and movement capabilities. Cyclic disturbance Often, when disturbances occur naturally, they provide conditions that favor the success of different species over pre-disturbance organisms. This can be attributed to physical changes in the biotic and abiotic conditions of an ecosystem. Because of this, a disturbance force can change an ecosystem for significantly longer than the period over which the immediate effects persist. With the passage of time following a disturbance, shifts in dominance may occur, with ephemeral herbaceous life-forms progressively becoming overtopped by taller perennial herbs, shrubs and trees. However, in the absence of further disturbance forces, many ecosystems trend back toward pre-disturbance conditions. Long-lived species and those that can regenerate in the presence of their own adults finally become dominant. Such alteration, accompanied by changes in the abundance of different species over time, is called ecological succession. Succession often leads to conditions that will once again predispose an ecosystem to disturbance. Pine forests in western North America provide a good example of such a cycle involving insect outbreaks. The mountain pine beetle (Dendroctonus ponderosae) plays an important role in limiting pine trees like lodgepole pine in forests of western North America. In 2004 the beetles affected more than 90,000 square kilometres. The beetles exist in endemic and epidemic phases. During epidemic phases, swarms of beetles kill large numbers of old pines. This mortality creates openings in the forest for new vegetation. Spruce, fir, and younger pines, which are unaffected by the beetles, thrive in canopy openings. Eventually pines grow into the canopy and replace those lost. Younger pines are often able to ward off beetle attacks but, as they grow older, pines become less vigorous and more susceptible to infestation. This cycle of death and re-growth creates a temporal mosaic of pines in the forest. Similar cycles occur in association with other disturbances such as fire and windstorms. When multiple disturbance events affect the same location in quick succession, this often results in a "compound disturbance", an event which, due to the combination of forces, creates a new situation which is more than the sum of its parts. For example, windstorms followed by fire can create fire temperatures and durations that are not expected in even severe wildfires, and may have surprising effects on post-fire succession. 
Environmental stresses can be described as pressure on the environment, with compounding variables such as extreme temperature or precipitation changes, which all play a role in the diversity and succession of an ecosystem. With environmental moderation, diversity increases because of the intermediate-disturbance effect, decreases because of the competitive-exclusion effect, increases because of the prevention of competitive exclusion by moderate predation, and decreases because of the local extinction of prey by severe predation. A reduction in recruitment density reduces the importance of competition for a given level of environmental stress. Species adapted to disturbance (eurytopy) A disturbance may change a forest significantly. Afterwards, the forest floor is often littered with dead material. This decaying matter and abundant sunlight promote an abundance of new growth. In the case of forest fires, a portion of the nutrients previously held in plant biomass is returned quickly to the soil as biomass burns. Many plants and animals benefit from disturbance conditions. Some species are particularly suited for exploiting recently disturbed sites. Vegetation with the potential for rapid growth can quickly take advantage of the lack of competition. In the northeastern United States, shade-intolerant trees (trees stenotopic to shade) like pin cherry and aspen quickly fill in forest gaps created by fire or windstorm (or human disturbance). Silver maple and eastern sycamore are similarly well adapted to floodplains. They are highly tolerant of standing water and will frequently dominate floodplains where other species are periodically wiped out. When a tree is blown over, the gap is typically filled by small herbaceous seedlings, but this is not always the case; shoots from the fallen tree can develop and take over the gap. This sprouting ability can have major impacts on plant populations: populations that would typically have exploited the tree-fall gap are overrun and cannot compete against the shoots of the fallen tree. Adaptation to disturbance is species-specific, but how each organism adapts affects all the species around it. Another species well adapted to a particular disturbance is the jack pine in boreal forests exposed to crown fires. It, like some other pine species, has specialized serotinous cones that open and disperse seeds only with sufficient heat, generated by fire. As a result, this species often dominates in areas where competition has been reduced by fire. Species that are well adapted for exploiting disturbance sites are referred to as pioneers or early successional species. These shade-intolerant species are able to photosynthesize at high rates and as a result grow quickly. Their fast growth is usually balanced by short life spans. Furthermore, although these species often dominate immediately following a disturbance, they are unable to compete with shade-tolerant species later on and are replaced by these species through succession. However, these shifts may reflect not the progressive entry of the taller, long-lived forms into the community, but instead the gradual emergence and dominance of species that may have been present, yet inconspicuous, directly after the disturbance. Disturbances have also been shown to be important facilitators of non-native plant invasions. While plants must deal directly with disturbances, many animals are not as immediately affected by them. 
Most can successfully evade fires, and many thrive afterwards on abundant new growth on the forest floor. New conditions support a wider variety of plants, often rich in nutrients compared to pre-disturbance vegetation. The plants in turn support a variety of wildlife, temporarily increasing biological diversity in the forest. Importance Biological diversity is dependent on natural disturbance. The success of a wide range of species from all taxonomic groups is closely tied to natural disturbance events such as fire, flooding, and windstorm. As an example, many shade-intolerant plant species rely on disturbances for successful establishment and to limit competition. Without this perpetual thinning, diversity of forest flora can decline, affecting animals dependent on those plants as well. A good example of this role of disturbance is in ponderosa pine (Pinus ponderosa) forests in the western United States, where surface fires frequently thin existing vegetation, allowing for new growth. If fire is suppressed, Douglas fir (Pseudotsuga menziesii), a shade-tolerant species, eventually replaces the pines. Douglas firs, having dense crowns, severely limit the amount of sunlight reaching the forest floor. Without sufficient light, new growth is severely limited. As the diversity of surface plants decreases, animal species that rely on them diminish as well. Fire, in this case, is important not only to the species directly affected but also to many other organisms whose survival depends on those key plants. Diversity is low in harsh environments because of the intolerance of all but opportunistic and highly resistant species to such conditions. The interplay between disturbance and these biological processes seems to account for a major portion of the organization and spatial patterning of natural communities. Disturbance variability and species diversity are heavily linked and, as a result, require adaptations that help increase the plant fitness necessary for survival. Relationship to climate change adaptation Disturbance in ecosystems can provide a way of modeling the future ability of ecosystems to adapt to climate change. Likewise, adaptation of a species to disturbance may be a predictor of its future ability to survive the current biodiversity crisis.
Biology and health sciences
Ecology
Biology
3969819
https://en.wikipedia.org/wiki/Evolution%20of%20insects
Evolution of insects
The most recent understanding of the evolution of insects is based on studies of the following branches of science: molecular biology, insect morphology, paleontology, insect taxonomy, evolution, embryology, bioinformatics and scientific computing. It is estimated that the class of insects originated on Earth about 480 million years ago, in the Ordovician, at about the same time terrestrial plants appeared. Insects are thought to have evolved from a group of crustaceans. The first insects were landbound, but about 400 million years ago in the Devonian period one lineage of insects evolved flight, making them the first animals to do so. The oldest insect fossil has been proposed to be Rhyniognatha hirsti, estimated to be 400 million years old, but the insect identity of the fossil has been contested. Global climate conditions changed several times during the history of Earth, and along with them the diversity of insects. The Pterygotes (winged insects) underwent a major radiation in the Carboniferous (358 to 299 million years ago), while the Endopterygota (insects that go through different life stages with metamorphosis) underwent another major radiation in the Permian (299 to 252 million years ago). Most extant orders of insects developed during the Permian period. Many of the early groups became extinct during the mass extinction at the Permo-Triassic boundary, the largest extinction event in the history of the Earth, around 252 million years ago. The survivors of this event evolved in the Triassic (252 to 201 million years ago) into what are essentially the modern insect orders that persist to this day. Most modern insect families appeared in the Jurassic (201 to 145 million years ago). In an important example of co-evolution, a number of highly successful insect groups — especially the Hymenoptera (wasps, bees and ants) and Lepidoptera (butterflies) as well as many types of Diptera (flies) and Coleoptera (beetles) — evolved in conjunction with flowering plants during the Cretaceous (145 to 66 million years ago). Many modern insect genera developed during the Cenozoic that began about 66 million years ago; insects from this period onwards frequently became preserved in amber, often in perfect condition. Such specimens are easily compared with modern species, and most of them are members of extant genera. Fossils Preservation Due to their external skeleton, the fossil history of insects is not entirely dependent on lagerstätte-type preservation, as it is for many soft-bodied organisms. However, with their small size and light build, insects have not left a particularly robust fossil record. Other than insects preserved in amber, most finds are from terrestrial or near-terrestrial sources and are only preserved under very special conditions, such as at the edge of freshwater lakes. While some one-third of known non-insect species are extinct species known from fossils, only about one-hundredth of known insect species are, owing to the paucity of the insect fossil record. Insect fossils are often three-dimensional preservations of the original organism. Loose wings are a common type of fossil as the wings do not readily decay or digest, and are often left behind by predators. Fossilization often preserves their outer appearance, unlike vertebrate fossils, which are mostly preserved just as bony remains (or inorganic casts thereof). Due to their size, vertebrate fossils with their external aspect similarly preserved are rare, and most known cases are subfossils. 
Fossils of insects, when preserved, are often preserved as three-dimensional, permineralized, and charcoalified replicas; and as inclusions in amber and even within some minerals. Sometimes even their colour and patterning is still discernible. Preservation in amber is, however, limited since copious resin production by trees only evolved in the Mesozoic. There is also abundant fossil evidence for the behavior of extinct insects, including feeding damage on fossil vegetation and in wood, fecal pellets, and nests in fossil soils. Such preservation is rare in vertebrates, and is mostly confined to footprints and coprolites. Freshwater and marine insect fossils The common denominator among most deposits of fossil insects and terrestrial plants is the lake environment. Those insects that became preserved were either living in the fossil lake (autochthonous) or carried into it from surrounding habitats by winds, stream currents, or their own flight (allochthonous). Drowning and dying insects not eaten by fish and other predators settle to the bottom, where they may be preserved in the lake's sediments, called lacustrine, under appropriate conditions. Even amber, or fossil resin from trees, requires a watery environment that is lacustrine or brackish in order to be preserved. Without protection in anoxic sediments, amber would gradually disintegrate; it is never found buried in fossil soils. Various factors contribute greatly to what kinds of insects become preserved and how well, if indeed at all, including lake depth, temperature, and alkalinity; type of sediments; whether the lake was surrounded by forest or vast and featureless salt pans; and if it was choked in anoxia or highly oxygenated. There are some major exceptions to the lacustrine theme of fossil insects, the most famous being the Late Jurassic limestones from Solnhofen and Eichstätt, Germany, which are marine. These deposits are famous for pterosaurs and the bird-like Archaeopteryx. The limestones were formed by a very fine mud of calcite that settled within stagnant, hypersaline bays isolated from inland seas. Most organisms in these limestones, including rare insects, were preserved intact, sometimes with feathers and outlines of soft wing membranes, indicating that there was very little decay. The insects, however, are like casts or molds, having relief but little detail. In some cases iron oxides precipitated around wing veins, revealing better detail. Compressions, impressions and mineralization There are many different ways insects can be fossilized and preserved including compressions and impressions, concretions, mineral replication, charcoalified (fusainized) remains, and their trace remains. Compressions and impressions are the most extensive types of insect fossils, occurring in rocks from the Carboniferous to the Holocene. Impressions are like a cast or mold of a fossil insect, showing its form and even some relief, like pleating in the wings, but usually little or no color from the cuticle. Compressions preserve remains of the cuticle, so color distinguishes structure. In exceptional situations, microscopic features such as microtrichia on sclerites and wing membranes are even visible, but preservation of this scale also requires a matrix of exceptionally fine grain, such as in micritic muds and volcanic tuffs. Because arthropod sclerites are held together by membranes, which readily decompose, many fossil arthropods are known only by isolated sclerites. Far more desirable are complete fossils. 
Concretions are stones with a fossil at the core whose chemical composition differs from that of the surrounding matrix, usually formed as a result of mineral precipitation from decaying organisms. The most significant deposit consists of various localities of the Late Carboniferous Francis Creek Shale of the Carbondale Formation at Mazon Creek, Illinois, which are composed of shales and coal seams yielding oblong concretions. Within most concretions is a mold of an animal and sometimes a plant that is usually marine in origin. When an insect is partly or wholly replaced by minerals, usually completely articulated and with three-dimensional fidelity, the preservation is called mineral replication. This is also called petrifaction, as in petrified wood. Insects preserved this way are often, but not always, preserved as concretions, or within nodules of minerals that formed around the insect as its nucleus. Such deposits generally form where the sediments and water are laden with minerals, and where there is also quick mineralization of the carcass by coats of bacteria. Evolutionary history The insect fossil record extends back some 400 million years to the lower Devonian, while the Pterygotes (winged insects) underwent a major radiation in the Carboniferous. The Endopterygota underwent another major radiation in the Permian. Survivors of the mass extinction at the P–T boundary evolved in the Triassic into what are essentially the modern insect orders that persist to modern times. Most modern insect families appeared in the Jurassic, and further diversification, probably at the genus level, occurred in the Cretaceous. By the Tertiary, there existed many of what are still modern genera; hence, most insects in amber are, indeed, members of extant genera. Insects diversified in only about 100 million years into essentially modern forms. Insect evolution is characterized by rapid adaptation due to selective pressures exerted by the environment and furthered by high fecundity. It appears that rapid radiations and the appearance of new species, a process that continues to this day, result in insects filling all available environmental niches. The evolution of insects is closely related to the evolution of flowering plants. Insect adaptations include feeding on flowers and related structures, with some 20% of extant insects depending on flowers, nectar or pollen for their food source. This symbiotic relationship is even more significant in evolution considering that more than two-thirds of flowering plants are insect-pollinated. Insects, particularly mosquitoes and flies, are also vectors of many pathogens that may even have been responsible for the decimation or extinction of some mammalian species. Before the Devonian Molecular analysis by Gaunt & Miles 2002 suggests that the hexapods diverged from their sister group, the Anostraca (fairy shrimps), at around the start of the Silurian period, coinciding with the appearance of vascular plants on land. Misof et al. suggest that insects could have appeared much earlier, in the Early Ordovician or even Cambrian. According to this version, the early radiation of insects occurred no later than in marine or coastal environments. However, the authors emphasize that due to the lack of insect fossils from the Cambrian to the Silurian, this version remains highly controversial. Devonian The Devonian (419 to 359 million years ago) was a relatively warm period, and probably lacked any glaciers. The details of the early insect fossil record are not well understood. 
Fossils once considered to be Devonian insects, such as Rhyniognatha hirsti and Strudiella devonica, were later reassessed, and their affinities with insects are now regarded as insufficiently supported. Based on phylogenetic studies, however, the first insects probably appeared earlier, in the Silurian period, evolving from stem-group crustaceans like Tanazios dokeron that had lost the second antenna. The first winged insects likely evolved in the Devonian, given the appearance of large numbers of winged insects in the Carboniferous. Carboniferous The Carboniferous is famous for its wet, warm climates and extensive swamps of mosses, ferns, horsetails, and calamites. Glaciations in Gondwana, triggered by Gondwana's southward movement, continued into the Permian, and because of the lack of clear markers and breaks, the deposits of this glacial period are often referred to as Permo-Carboniferous in age. The cooling and drying of the climate led to the Carboniferous rainforest collapse (CRC). Tropical rain forests fragmented and then were eventually devastated by climate change. Remains of insects are scattered throughout the coal deposits, particularly of wings from stem-dictyopterans (Blattoptera); two deposits in particular are from Mazon Creek, Illinois, and Commentry, France. The earliest winged insects (Pterygota) are from this time period, including the aforementioned Blattoptera as well as Caloneurodea, primitive stem-group Ephemeropterans, Orthoptera, and Palaeodictyopteroidea. In 1940, a fossil of Meganeuropsis americana found in Noble County, Oklahoma, represented the largest complete insect wing ever found. Juvenile insects are also known from the Carboniferous Period. Very early Blattopterans had a large, discoid pronotum and coriaceous forewings with a distinct CuP vein (an unbranched wing vein, lying near the claval fold and reaching the posterior wing margin). These were not true cockroaches, as they had an ovipositor, although through the Carboniferous, the ovipositor started to diminish. The orders Caloneurodea and Miomoptera are known, along with Orthoptera and Blattodea, to be among the earliest Neoptera, developing from the upper Carboniferous to the Permian. These insects had wings with similar form and structure: small anal lobes. Orthoptera, the grasshoppers and their kin, is an ancient order that extends from this time period and still exists today; even its distinctive synapomorphy of saltatorial, or jumping-adapted, hind legs has been preserved since that time. Palaeodictyopteroidea is a large and diverse group that includes 50% of all known Paleozoic insects. It contains many of the primitive features of the time: very long cerci, an ovipositor, and wings with little or no anal lobe. Protodonata, as its name implies, is a primitive paraphyletic group similar to Odonata, although it lacks distinctive features such as a nodus, a pterostigma and an arculus. Most were only slightly larger than modern dragonflies, but the group does include the largest known insects, such as griffinflies like the late Carboniferous Meganeura monyi, and the even larger later Permian Meganeuropsis permiana, with wingspans of up to . They were probably the top predators for some 100 million years and far larger than any present-day insects. Their nymphs must also have reached a very impressive size. This gigantism may have been due to higher atmospheric oxygen levels (up to 80% above modern levels during the Carboniferous) that allowed increased respiratory efficiency relative to today. 
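As a rough numerical check on the oxygen figure just quoted, the sketch below converts "80% above modern levels" into an absolute atmospheric fraction; the modern value of about 20.9% O2 is the only assumed input.

```python
# Rough arithmetic for the Carboniferous oxygen claim quoted above.
# ASSUMPTION: modern atmospheric O2 fraction of ~20.9% is used as the baseline.

MODERN_O2_PERCENT = 20.9
INCREASE_FRACTION = 0.80          # "up to 80% above modern levels"

carboniferous_o2 = MODERN_O2_PERCENT * (1 + INCREASE_FRACTION)
print(f"Implied Carboniferous O2: ~{carboniferous_o2:.0f}% of the atmosphere")  # roughly 38%
```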
This allowed giant forms of pterygotes, millipedes and scorpions to exist, while the newly arrived tetrapods remained small until the Carboniferous Rainforest Collapse. However, a large griffinfly is also known from the late Permian, when the oxygen level was much lower. In addition, griffinflies probably lived in open habitats, as evidenced by Meganeurites gracilipes. M. gracilipes exhibited elongate wings that were ill suited to densely forested habitats, and had dorsally enlarged compound eyes much like modern dragonflies that hunt in open habitats. Permian The Permian (299 to 252 million years ago) was a relatively short time period, during which all the Earth's major land masses were collected into a single supercontinent known as Pangaea. Pangaea straddled the equator and extended toward the poles, with a corresponding effect on ocean currents in the single great ocean ("Panthalassa", the "universal sea") and the Paleo-Tethys Ocean, a large ocean that lay between Asia and Gondwana. The Cimmeria continent rifted away from Gondwana and drifted north to Laurasia, causing the Paleo-Tethys to shrink. At the end of the Permian, the biggest mass extinction in history occurred, known as the Permian–Triassic extinction event: 30% of all insect species became extinct; this is one of three known mass insect extinctions in Earth's history. A 2007 study based on DNA of living beetles and maps of likely beetle evolution indicated that beetles may have originated during the Lower Permian. In 2009, a fossil beetle was described from the Pennsylvanian of Mazon Creek, Illinois, pushing the origin of beetles back to an earlier date. Fossils from this time have been found in Asia and Europe, for instance in the red slate fossil beds of Niedermoschel near Mainz, Germany. Further fossils have been found in Obora, Czech Republic and Tshekarda in the Ural mountains, Russia. More discoveries from North America were made in the Wellington Formation of Oklahoma and were published in 2005 and 2008. Some of the most important fossil deposits from this era are from Elmo, Kansas (260 mya); others include New South Wales, Australia (240 mya) and central Eurasia (250 mya). During this time, many of the species from the Carboniferous diversified, and many new orders developed, including: Protelytroptera, primitive relatives of Plecoptera (Paraplecoptera), Psocoptera, Mecoptera, Coleoptera, Raphidioptera, and Neuroptera, the last four being the first definitive records of the Holometabola. By the Pennsylvanian and well into the Permian, by far the most successful insects were the primitive Blattoptera, or relatives of cockroaches. Six fast legs, two well-developed folding wings, fairly good eyes, long, well-developed (olfactory) antennae, an omnivorous digestive system, a receptacle for storing sperm, a chitin skeleton that could support and protect, as well as a form of gizzard and efficient mouth parts, gave them formidable advantages over other herbivorous animals. About 90% of insects were cockroach-like insects ("blattopterans"). The dragonflies Odonata were the dominant aerial predators and probably dominated terrestrial insect predation as well. True Odonata appeared in the Permian, and all are amphibious. Their prototypes are among the oldest winged fossils, go back to the Devonian, and are different from the wings of other insects in every way. 
Their prototypes may have had the beginnings of many modern attributes even by the late Carboniferous, and it is possible that they even captured small vertebrates, for some species had a wing span of 71 cm. The oldest known insects that resemble species of Coleoptera date back to the Lower Permian, though they instead have 13-segmented antennae, elytra with more fully developed venation and more irregular longitudinal ribbing, and an abdomen and ovipositor extending beyond the apex of the elytra. The oldest true beetle would have features that include 11-segmented antennae, regular longitudinal ribbing on the elytra, and genitalia that are internal. The earliest beetle-like species had pointed, leather-like forewings with cells and pits. Hemiptera, or true bugs, had appeared in the form of Arctiniscytina and Paraknightia. The latter had expanded parapronotal lobes, a large ovipositor, and forewings with unusual venation, possibly diverging from Blattoptera. The orders Raphidioptera and Neuroptera are grouped together as Neuropterida. The one family controversially placed in the putative Raphidiopteran clade, the Sojanoraphidiidae, had the long ovipositor distinctive of that order and a series of short crossveins, but an otherwise primitive wing venation. Early families of Plecoptera had wing venation consistent with the order and its recent descendants. Psocoptera first appeared in the Permian period; they are often regarded as the most primitive of the hemipteroids. Triassic The Triassic (252 to 201 million years ago) was a period when arid and semiarid savannas developed and when the first mammals, dinosaurs, and pterosaurs appeared. During the Triassic, almost all the Earth's land mass was still concentrated into Pangaea. From the east a vast gulf, the Tethys Sea, entered Pangaea. The remaining shores were surrounded by the world-ocean known as Panthalassa. The supercontinent Pangaea was rifting during the Triassic, especially late in the period, but had not yet separated. The climate of the Triassic was generally hot and dry, forming typical red bed sandstones and evaporites. There is no evidence of glaciation at or near either pole; in fact, the polar regions were apparently moist and temperate, a climate suitable for reptile-like creatures. Pangaea's large size limited the moderating effect of the global ocean; its continental climate was highly seasonal, with very hot summers and cold winters. It probably had strong, cross-equatorial monsoons. As a consequence of the P-Tr mass extinction at the boundary between the Permian and the Triassic, there is only a sparse fossil record of insects, including beetles, from the Lower Triassic. However, there are a few exceptions, for example in Siberia: at the Babiy Kamen site in the Kuznetsk Basin, numerous beetle fossils were discovered, even entire specimens of the suborders Archostemata (e.g., Ademosynidae, Schizocoleidae), Adephaga (e.g., Triaplidae, Trachypachidae) and Polyphaga (e.g., Hydrophilidae, Byrrhidae, Elateroidea), in a nearly perfectly preserved condition. However, species from the families Cupedidae and Schizophoroidae are not present at this site, whereas they dominate at other fossil sites from the Lower Triassic. Further records are known from Khey-Yaga, Russia, in the Korotaikha Basin. Around this time, during the Late Triassic, mycetophagous, or fungus-feeding, species of beetles (e.g., Cupedidae) appear in the fossil record. 
In the stages of the Upper Triassic, representatives of the algophagous, or algae-feeding, species (e.g., Triaplidae and Hydrophilidae) begin to appear, as well as predatory water beetles. The first primitive weevils appear (e.g., Obrienidae), as well as the first representatives of the rove beetles (Staphylinidae), which show no marked difference in physique compared to recent species. This was also around the first time that evidence of a diverse freshwater insect fauna appeared. Some of the oldest living families also appeared during the Triassic. Hemiptera included the Cercopidae, the Cicadellidae, the Cixiidae, and the Membracidae. Coleoptera included the Carabidae, the Staphylinidae, and the Trachypachidae. Hymenoptera included the Xyelidae. Diptera included the Anisopodidae, the Chironomidae, and the Tipulidae. The first Thysanoptera appeared as well. The first true species of Diptera are known from the Middle Triassic, becoming widespread during the Middle and Late Triassic. A single large wing from a Triassic species of Diptera (10 mm long instead of the usual 2–6 mm) was found in Australia (Mt. Crosby). This family, the Tilliardipteridae, despite its numerous 'tipuloid' features, should be included in the Psychodomorpha sensu Hennig on account of the loss of the convex distal 1A reaching the wing margin and the formation of the anal loop. Jurassic The Jurassic (201 to 145 million years ago) was important in the development of birds, one of the insects' major predators. During the early Jurassic period, the supercontinent Pangaea broke up into the northern supercontinent Laurasia and the southern supercontinent Gondwana; the Gulf of Mexico opened in the new rift between North America and what is now Mexico's Yucatan Peninsula. The Jurassic North Atlantic Ocean was relatively narrow, while the South Atlantic did not open until the following Cretaceous Period, when Gondwana itself rifted apart. The global climate during the Jurassic was warm and humid. Similar to the Triassic, there were no larger landmasses situated near the polar caps and, consequently, no inland ice sheets existed during the Jurassic. Although some areas of North and South America and Africa stayed arid, large parts of the continental landmasses were lush. The Laurasian and the Gondwanan faunas differed considerably in the Early Jurassic. Later the fauna became more intercontinental and many species started to spread globally. There are many important fossil sites from the Jurassic, more than 150 of them yielding beetle fossils, the majority situated in Eastern Europe and North Asia. In North America and especially in South America and Africa the number of sites from that time period is smaller, and the sites have not been exhaustively investigated yet. Outstanding fossil sites include Solnhofen in Upper Bavaria, Germany, Karatau in South Kazakhstan, the Yixian Formation in Liaoning, North China, as well as the Jiulongshan Formation and further fossil sites in Mongolia. In North America there are only a few sites with fossil records of insects from the Jurassic, namely the shell limestone deposits in the Hartford basin, the Deerfield basin and the Newark basin. Numerous deposits of other insects occur in Europe and Asia, including Grimmen and Solnhofen, Germany; Solnhofen is famous for finds of the earliest bird-like theropods (e.g., Archaeopteryx). Others include Dorset, England; Issyk-Kul, Kyrgyzstan; and the most productive site of all, Karatau, Kazakhstan. During the Jurassic there was a dramatic increase in the known diversity of beetles. 
This includes the development and growth of carnivorous and herbivorous species. Species of the superfamily Chrysomeloidea are believed to have developed around the same time; they feed on a wide array of plant hosts, ranging from cycads and conifers to angiosperms. Close to the Upper Jurassic, the proportion of the Cupedidae decreased, while at the same time the diversity of the early plant-eating, or phytophagous, species increased. Most of the recent phytophagous species of Coleoptera feed on flowering plants or angiosperms. Cretaceous The Cretaceous (145 to 66 million years ago) had much of the same insect fauna as the Jurassic until much later on. During the Cretaceous, the late-Paleozoic-to-early-Mesozoic supercontinent of Pangaea completed its tectonic breakup into present day continents, although their positions were substantially different at the time. As the Atlantic Ocean widened, the convergent-margin orogenies that had begun during the Jurassic continued in the North American Cordillera, as the Nevadan orogeny was followed by the Sevier and Laramide orogenies. Though Gondwana was still intact in the beginning of the Cretaceous, it broke up as South America, Antarctica and Australia rifted away from Africa (though India and Madagascar remained attached to each other); thus, the South Atlantic and Indian Oceans were newly formed. Such active rifting lifted great undersea mountain chains along the welts, raising eustatic sea levels worldwide. To the north of Africa the Tethys Sea continued to narrow. Broad shallow seas advanced across central North America (the Western Interior Seaway) and Europe, then receded late in the period, leaving thick marine deposits sandwiched between coal beds. At the peak of the Cretaceous transgression, one-third of Earth's present land area was submerged. The Berriasian age showed a continuation of the cooling trend that had been seen in the last stage of the Jurassic. There is evidence that snowfalls were common in the higher latitudes, and the tropics became wetter than during the Triassic and Jurassic. Glaciation was, however, restricted to alpine glaciers on some high-latitude mountains, though seasonal snow may have existed farther south. Ice-rafting of stones into marine environments occurred during much of the Cretaceous, but evidence of deposition directly from glaciers is limited to the Early Cretaceous of the Eromanga Basin in southern Australia. There are a large number of important fossil sites worldwide containing beetles from the Cretaceous. Most of them are located in Europe and Asia and belonged to the temperate climate zone during the Cretaceous. A few of the fossil sites mentioned in the section on the Jurassic also shed some light on the Early Cretaceous beetle fauna (e.g. the Yixian Formation in Liaoning, North China). Further important sites from the Lower Cretaceous include the Crato Fossil Beds in the Araripe Basin in Ceará, northern Brazil, as well as the overlying Santana Formation; the latter was situated near the paleoequator, the position of the Earth's equator in the geologic past as defined for a specific geologic period. In Spain there are important sites near Montsec and Las Hoyas. In Australia, the Koonwarra fossil beds of the Korumburra Group, South Gippsland, Victoria, are noteworthy. Important fossil sites from the Upper Cretaceous are Kzyl-Dzhar in South Kazakhstan and Arkagala in Russia. During the Cretaceous the diversity of Cupedidae and Archostemata decreased considerably. 
Predatory ground beetles (Carabidae) and rove beetles (Staphylinidae) began to show different distribution patterns: whereas the Carabidae occurred predominantly in warm regions, the Staphylinidae and click beetles (Elateridae) preferred areas with a temperate climate. Likewise, predatory species of Cleroidea and Cucujoidea hunted their prey under the bark of trees together with the jewel beetles (Buprestidae). The diversity of jewel beetles increased rapidly during the Cretaceous, as they were the primary consumers of wood, while longhorn beetles (Cerambycidae) were rather rare and their diversity increased only towards the end of the Upper Cretaceous. The first coprophagous beetles have been recorded from the Upper Cretaceous and are believed to have lived on the excrement of herbivorous dinosaurs, though it is still debated whether these beetles were always tied to mammals during their development. Also, the first species in which both larvae and adults were adapted to an aquatic lifestyle are found. Whirligig beetles (Gyrinidae) were moderately diverse, although other early water beetles (e.g., Dytiscidae) were less so, with the most widespread being the species of Coptoclavidae, which preyed on aquatic fly larvae. Paleogene There are many fossils of beetles known from this era, though the beetle fauna of the Paleocene is comparatively poorly investigated. In contrast, knowledge of the Eocene beetle fauna is very good. The reason is the occurrence of fossil insects in amber and clay slate sediments. Amber is fossilized tree resin, meaning that it consists of fossilized organic compounds, not minerals. Different ambers are distinguished by the location, age and species of the resin-producing plant. For research on the Oligocene beetle fauna, Baltic and Dominican amber is most important. Even though the insect fossil record of this era is generally sparse, the most diverse deposit is from the Fur Formation, Denmark, which includes giant ants and primitive moths (Noctuidae). The first butterflies are from the Upper Paleogene, while most groups, like the beetles, already included recent genera and species during the Miocene; their distribution, however, differed considerably from today's. Neogene The most important sites for beetle fossils of the Neogene are situated in the warm temperate and subtropical zones. Many recent genera and species already existed during the Miocene; however, their distribution differed considerably from today's. One of the most important fossil sites for insects of the Pliocene is Willershausen near Göttingen, Germany, with excellently preserved beetle fossils of various families (longhorn beetles, weevils, ladybugs and others) as well as representatives of other orders of insects. In the Willershausen clay pit, 35 genera from 18 beetle families have been recorded so far, of which six genera are extinct. The Pleistocene beetle fauna is relatively well known, since the composition of the beetle fauna has been used to reconstruct climate conditions in the Rocky Mountains and on Beringia, the former land bridge between Asia and North America. Phylogeny A report in November 2014 unambiguously places the insects in one clade, with the remipedes as the nearest sister clade. This study resolved the phylogeny of all extant insect orders and provides "a robust phylogenetic backbone tree and reliable time estimates of insect evolution." 
Finding strong support for the closest living relatives of the hexapods had proven challenging, owing to convergent adaptations for life on land in a number of arthropod groups. In 2008, researchers at Tufts University uncovered what they believe is the world's oldest known full-body impression of a primitive flying insect, a 300-million-year-old specimen from the Carboniferous Period. Devonian Rhyniognatha hirsti, from the 396-million-year-old Rhynie chert, is known only from its mandibles and has been considered the oldest insect. This species already possessed dicondylic mandibles (with two articulations in the mandible), a feature associated with winged insects, suggesting that wings may already have evolved at this time. Thus, if Rhyniognatha was actually a flying insect, the first insects probably appeared earlier, in the Silurian period. However, this species has also been interpreted as a myriapod in a later study. There have been four super-radiations of insects: beetles, flies, moths, and wasps. These four groups account for the majority of described species. The flies and moths, along with the fleas, evolved from the Mecoptera. The origins of insect flight remain obscure, since the earliest winged insects currently known appear to have been capable fliers. Some extinct insects had an additional pair of winglets attaching to the first segment of the thorax, for a total of three pairs. There is no evidence that suggests that the insects were a particularly successful group of animals before they evolved to have wings. Evolutionary relationships Insects are prey for a variety of organisms, including terrestrial vertebrates. The earliest vertebrates on land were large amphibious piscivores; through gradual evolutionary change, insectivory was the next diet type to evolve. Insects were among the earliest terrestrial herbivores and acted as major selection agents on plants. Plants evolved chemical defenses against this herbivory, and the insects in turn evolved mechanisms to deal with plant toxins. These toxins limit the diet breadth of herbivores, and evolving mechanisms to continue herbivory nonetheless has been an important part of maintaining diet breadth in insects, and so of their evolutionary history as a whole. Both pleiotropy and epistasis have complex effects in this regard, with the simulations of Griswold 2006 showing that more genes provide the benefit of more targets for adaptive mutations, while Fisher 1930 showed that a mutation can improve one trait while epistasis causes it to also trigger negative effects, slowing down adaptation. Many insects also make use of these toxins to protect themselves from their predators. Such insects often advertise their toxicity using warning colors. This successful evolutionary pattern has also been utilized by mimics. Over time, this has led to complex groups of coevolved species. Conversely, some interactions between plants and insects, like pollination, are beneficial to both organisms. Coevolution has led to the development of very specific mutualisms in such systems. Taxonomy Traditional morphology-based or appearance-based systematics has usually given Hexapoda the rank of superclass, and identified four groups within it: insects (Ectognatha), springtails (Collembola), Protura and Diplura, the latter three being grouped together as Entognatha on the basis of internalized mouth parts. 
Supraordinal relationships have undergone numerous changes with the advent of methods based on evolutionary history and genetic data. A recent theory is that Hexapoda is polyphyletic (that is, the last common ancestor was not a member of the group), with the entognath classes having separate evolutionary histories from Insecta. Many of the traditional appearance-based taxa have been shown to be paraphyletic, so rather than using ranks like subclass, superorder and infraorder, it has proved better to use monophyletic groupings (in which the last common ancestor is a member of the group). The following represents the best supported monophyletic groupings for the Insecta. Insects can be divided into two groups historically treated as subclasses: wingless insects, known as Apterygota, and winged insects, known as Pterygota. The Apterygota consist of the primitively wingless order of the silverfish (Thysanura). Archaeognatha make up the Monocondylia based on the shape of their mandibles, while Thysanura and Pterygota are grouped together as Dicondylia. It is possible that the Thysanura themselves are not monophyletic, with the family Lepidotrichidae being a sister group to the Dicondylia (Pterygota and the remaining Thysanura). Paleoptera and Neoptera are the winged groups of insects, differentiated by the presence of hardened body parts called sclerites and, in Neoptera, by muscles that allow their wings to fold flat over the abdomen. Neoptera can further be divided into incomplete-metamorphosis-based (Polyneoptera and Paraneoptera) and complete-metamorphosis-based groups. It has proved difficult to clarify the relationships between the orders in Polyneoptera because of constant new findings calling for revision of the taxa. For example, Paraneoptera has turned out to be more closely related to Endopterygota than to the rest of the Exopterygota. The recent molecular finding that the traditional louse orders Mallophaga and Anoplura are derived from within Psocoptera has led to the new taxon Psocodea. Phasmatodea and Embiidina have been suggested to form Eukinolabia. Mantodea, Blattodea and Isoptera are thought to form a monophyletic group termed Dictyoptera. It is likely that Exopterygota is paraphyletic in regard to Endopterygota. Matters that have caused considerable controversy include the grouping of Strepsiptera and Diptera together as Halteria, based on a reduction of one of the wing pairs, a position that is not well supported in the entomological community. The Neuropterida are often lumped or split on the whims of the taxonomist. Fleas are now thought to be closely related to boreid mecopterans. Many questions remain to be answered when it comes to basal relationships amongst endopterygote orders, particularly Hymenoptera. The study of the classification or taxonomy of any insect is called systematic entomology. If one works with a more specific order or even a family, the term may also be made specific to that order or family, for example systematic dipterology. Early evidence According to phylogenetic estimation, the first insects possibly appeared in the Silurian period and acquired wings in the Devonian. The subclass Apterygota (wingless insects) is now considered artificial, as the silverfish (order Thysanura) are more closely related to Pterygota (winged insects) than to the bristletails (order Archaeognatha). For instance, just like flying insects, Thysanura have so-called dicondylic mandibles, while Archaeognatha have monocondylic mandibles. 
The reason for their resemblance is not due to a particularly close relationship, but rather because they both have kept a primitive and original anatomy in a much higher degree than the winged insects. The most primitive order of flying insects, the mayflies (Ephemeroptera), are also those who are most morphologically and physiologically similar to these wingless insects. Some mayfly nymphs resemble aquatic thysanurans. Modern Archaeognatha and Thysanura still have rudimentary appendages on their abdomen called styli, while more primitive and extinct insects known as Monura had much more developed abdominal appendages. The abdominal and thoracic segments in the earliest terrestrial ancestor of the insects would have been more similar to each other than they are today, and the head had well-developed compound eyes and long antennae. Their body size is not known yet. As the most primitive group today, Archaeognatha, is most abundant near the coasts, it could mean that this was the kind of habitat where the insect ancestors became terrestrial. But this specialization to coastal niches could also have a secondary origin, just as could their jumping locomotion, as it is the crawling Thysanura who are considered to be most original (plesiomorphic). By looking at how primitive cheliceratan book gills (still seen in horseshoe crabs) evolved into book lungs in primitive spiders and finally into tracheae in more advanced spiders (most of them still have a pair of book lungs intact as well), it is possible the trachea of insects was formed in a similar way, modifying gills at the base of their appendages. So far, no published research suggests that insects were a particularly successful group prior to their evolution of wings. Odonata The Odonata (dragonflies) are also a good candidate as the oldest living member of the Pterygota. Mayflies are morphologically and physiologically more basal, but the derived characteristics of dragonflies could have evolved independently in their own direction for a long time. It seems that orders with aquatic nymphs or larvae become evolutionarily conservative once they had adapted to water. If mayflies made it to the water first, this could partly explain why they are more primitive than dragonflies, even if dragonflies have an older origin. Similarly, stoneflies retain the most basal traits of the Neoptera, but they were not necessarily the first order to branch off. This also makes it less likely that an aquatic ancestor would have the evolutionary potential to give rise to all the different forms and species of insects that we know today. Dragonfly nymphs have a unique labial "mask" used for catching prey, and the imago has a unique way of copulating, using a secondary male sex organ on the second abdominal segment. It looks like abdominal appendages modified for sperm transfer and direct insemination have occurred at least twice in insect evolution, once in Odonata and once in the other flying insects. If these two different methods are the original ways of copulating for each group, it is a strong indication that it is the dragonflies who are the oldest, not the mayflies. There is still not agreement about this. Another scenario is that abdominal appendages adapted for direct insemination have evolved three times in insects; once Odonata, once in mayflies and once in the Neoptera, both mayflies and Neoptera choosing the same solution. If so, it is still possible that mayflies are the oldest order among the flying insects. 
The power of flight is assumed to have evolved only once, suggesting sperm was still transferred indirectly in the earliest flying insects. One possible scenario on how direct insemination evolved in insects is seen in scorpions. The male deposits a spermatophore on the ground, locks its claws with the female's claws and then guides her over his packet of sperm, making sure it comes in contact with her genital opening. When the early (male) insects laid their spermatophores on the ground, it seems likely that some of them used the clasping organs at the end of their body to drag the female over the package. The ancestors of Odonata evolved the habit of grabbing the female behind her head, as they still do today. This action, rather than not grasping the female at all, would have increased the male's chances of spreading its genes. The chances would be further increased if they first attached their spermatophore safely on their own abdomen before they placed their abdominal claspers behind the female's head; the male would then not let the female go before her abdomen had made direct contact with his sperm storage, allowing the transfer of all sperm. This also meant increased freedom in searching for a female mate because the males could now transport the packet of sperm elsewhere if the first female slipped away. This ability would eliminate the need to either wait for another female at the site of the deposited sperm packet or to produce a new packet, wasting energy. Other advantages include the possibility of mating in other, safer places than flat ground, such as in trees or bushes. If the ancestors of the other flying insects evolved the same habit of clasping the female and dragging her over their spermatophore, but posterior instead of anterior like the Odonata does, their genitals would come very close to each other. And from there on, it would be a very short step to modify the vestigial appendages near the male genital opening to transfer the sperm directly into the female. The same appendages the male Odonata use to transfer their sperm to their secondary sexual organs at the front of their abdomen. All insects with an aquatic nymphal or larval stage seem to have adapted to water secondarily from terrestrial ancestors. Of the most primitive insects with no wings at all, Archaeognatha and Thysanura, all members live their entire life cycle in terrestrial environments. As mentioned previously, Archaeognatha were the first to split off from the branch that led to the winged insects (Pterygota), and then the Thysanura branched off. This indicates that these three groups (Archaeognatha, Thysanura and Pterygota) have a common terrestrial ancestor, which probably resembled a primitive model of Apterygota, was an opportunistic generalist and laid spermatophores on the ground instead of copulating, like Thysanura still do today. If it had feeding habits similar to the majority of apterygotes of today, it lived mostly as a decomposer. One should expect that a gill breathing arthropod would modify its gills to breathe air if it were adapting to terrestrial environments, and not evolve new respiration organs from bottom up next to the original and still functioning ones. Then comes the fact that insect (larva and nymph) gills are actually a part of a modified, closed trachea system specially adapted for water, called tracheal gills. The arthropod trachea can only arise in an atmosphere and as a consequence of the adaptations of living on land. 
This too indicates that insects are descended from a terrestrial ancestor. And finally when looking at the three most primitive insects with aquatic nymphs (called naiads: Ephemeroptera, Odonata and Plecoptera), each order has its own kind of tracheal gills that are so different from one another that they must have separate origins. This would be expected if they evolved from land-dwelling species. This means that one of the most interesting parts of insect evolution is what happened between the Thysanura-Pterygota split and the first flight. Origin of insect flight The origin of insect flight remains obscure, since the earliest winged insects currently known appear to have been capable fliers. Some extinct insects (e.g. the Palaeodictyoptera) had an additional pair of winglets attached to the first segment of the thorax, for a total of three pairs. The wings themselves are sometimes said to be highly modified (tracheal) gills. By comparing a well-developed pair of gill blades in mayfly naiads and a reduced pair of hind wings on the adults, it is not hard to imagine that the mayfly gills (tergaliae) and insect wings have a common origin, and newer research also supports this. Specifically, genetic research on mayflies has revealed that the gills and insect wings both may have originated from insect legs. The tergaliae are not found in any other order of insects, and they have evolved in different directions with time. In some nymphs/naiads the most anterior pair has become sclerotized and works as a gill cover for the rest of the gills. Others can form a large sucker, be used for swimming or modified into other shapes. But that need not necessarily mean that these structures were originally gills. It could also mean that the tergaliae evolved from the same structures which gave rise to the wings, and that flying insects evolved from a wingless terrestrial species with pairs of plates on its body segments: three on the thorax and nine on the abdomen (mayfly nymphs with nine pairs of tergaliae on the abdomen exist, but so far no living or extinct insects with plates on the last two segments have been found). If these were primary gills, it would be a mystery why they should have waited so long to be modified when we see the different modifications in modern mayfly nymphs. Theories When the first forests arose on Earth, new niches for terrestrial animals were created. Spore-feeders and others who depended on plants and/or the animals living around them would have to adapt too to make use of them. In a world with no flying animals, it would probably just be a matter of time before some arthropods who were living in the trees evolved paired structures with muscle attachments from their exoskeleton and used them for gliding, one pair on each segment. Further evolution in this direction would give bigger gliding structures on their thorax and gradually smaller ones on their abdomen. Their bodies would have become stiffer while thysanurans, which never evolved flight, kept their flexible abdomen. Mayfly nymphs must have adapted to water while they still had the "gliders" on their abdomen intact. So far there is no concrete evidence to support this theory either, but it is one that offers an explanation for the problems of why presumably aquatic animals evolved in the direction they did. Leaping and arboreal insects seems like a good explanation for this evolutionary process for several reasons. 
Because early winged insects were lacking the sophisticated wing folding mechanism of neopterous insects, they must have lived in the open and not been able to hide or search for food under leaves, in cracks, under rocks and other such confined spaces. In these old forests there were few open places where insects with huge structures on their back could have lived without experiencing huge disadvantages. If insects got their wings on land and not in water, which clearly seems to be the case, the tree canopies would be the most obvious place where such gliding structures could have emerged, in a time when the air was a new territory. The question is if the plates used for gliding evolved from "scratch" or by modifying already existing anatomical details. The thorax in Thysanura and Archaeognatha are known to have some structures connected to their trachea which share similarities to the wings of primitive insects. This suggests the origin of the wings and the spiracles are related. Gliding requires universal body modifications, as seen in present-day vertebrates such as some rodents and marsupials, which have grown wide, flat expansions of skin for this purpose. The flying dragons (genus Draco) of Indonesia has modified its ribs into gliders, and even some snakes can glide through the air by spreading their ribs. The main difference is that while vertebrates have an inner skeleton, primitive insects had a flexible and adaptive exoskeleton. Some animals would be living in the trees, as animals are always taking advantage of all available niches, both for feeding and protection. At the time, the reproductive organs were by far the most nutritious part of the plant, and these early plants show signs of arthropod consumption and adaptations to protect themselves, for example by placing their reproductive organs as high up as possible. But there will always be some species who will be able to cope with that by following their food source up the trees. Knowing that insects were terrestrial at that time and that some arthropods (like primitive insects) were living in the tree crowns, it seems less likely that they would have developed their wings down on the ground or in the water. In a three dimensional environment such as trees, the ability to glide would increase the insects' chances to survive a fall, as well as saving energy. This trait has repeated itself in modern wingless species such as the gliding ants who are living an arboreal life. When the gliding ability first had originated, gliding and leaping behavior would be a logical next step, which would eventually be reflected in their anatomical design. The need to navigate through vegetation and to land safely would mean good muscle control over the proto-wings, and further improvements would eventually lead to true (but primitive) wings. While the thorax got the wings, a long abdomen could have served as a stabilizer in flight. Some of the earliest flying insects were large predators: it was a new ecological niche. Some of the prey were no doubt other insects, as insects with proto-wings would have radiated into other species even before the wings were fully evolved. From this point on, the arms race could continue: the same predator/prey co-evolution which has existed as long as there have been predators and prey on earth; both the hunters and the hunted were in need of improving and extending their flight skills even further to keep up with the other. 
Insects that had evolved their proto-wings in a world without flying predators could afford to be exposed openly without risk, but this changed when carnivorous flying insects evolved. It is unknown when they first evolved, but once these predators had emerged they put a strong selection pressure on their victims and themselves. Those of the prey who came up with a good solution for folding their wings over their backs in a way that made it possible for them to live in narrow spaces would not only be able to hide from flying predators (and terrestrial predators if they were on the ground) but also to exploit a wide variety of niches that were closed to those unable to fold their wings in this way. And today the neopterous insects (those that can fold their wings back over the abdomen) are by far the most dominant group of insects. The water-skimming theory suggests that skimming on the water surface is the origin of insect flight. This theory is based on the fact that what may be the first fossil insect, the Devonian Rhyniognatha hirsti (though it may be closer to the myriapods), is thought to have possessed wings, even though the insects' closest evolutionary ties are with crustaceans, which are aquatic. Life cycle Mayflies Another primitive trait of the mayflies is the subimago; no other insects have this winged yet sexually immature stage. A few specialized species have females with no subimago, but retain the subimago stage for males. The reason the subimago still exists in this order could be that there has never been enough selection pressure to get rid of it; it also seems specially adapted to make the transition from water to air. The male genitalia are not fully functional at this point. One reason for this could be that the modification of the abdominal appendages into male copulation organs emerged later than the evolution of flight. This is indicated by the fact that dragonflies have a different copulation organ than other insects. As we know, in mayflies the nymphs and the adults are specialized for two different ways of living: in the water and in the air. The only stage (instar) between these two is the subimago. In more primitive fossil forms, the preadult individuals had not just one instar but numerous ones (while the modern subimago does not eat, older and more primitive species with a subimago were probably feeding in this phase of life too, as the lines between the instars were much more diffuse and gradual than today). Adult form was reached several moults before maturity. They probably did not have more instars after becoming fully mature. This way of maturing is how the Apterygota do it; they moult even when mature, unlike winged insects. Modern mayflies have eliminated all the instars between imago and nymph, except the single instar called the subimago, which is still not (at least not in the males) fully sexually mature. The other flying insects with incomplete metamorphosis (Exopterygota) have gone a little further and completed the trend; here all the immature structures of the animal from the last nymphal stage are completed at once in a single final moult. The more advanced insects with larvae and complete metamorphosis (Endopterygota) have gone even further. An interesting theory is that the pupal stage is actually a strongly modified and extended stage of the subimago, but so far it is nothing more than a theory. There are some insects within the Exopterygota, thrips and whiteflies (Aleyrodidae), which have evolved pupa-like stages too. 
Distant ancestors The distant ancestor of flying insects, a species with primitive proto-wings, had a more or less ametabolous life cycle, with instars of basically the same type as in thysanurans and no defined nymphal, subimago or adult stages as the individual became older. Individuals developed gradually as they grew and moulted, but probably without major changes between instars. Modern mayfly nymphs do not acquire gills until after their first moult. Before this stage they are so small that they need no gills to extract oxygen from the water. This could be a trait from the common ancestor of all flyers. An early terrestrial insect would have no need for paired outgrowths from the body before it started to live in the trees (or in the water, for that matter), so it would not have any. This would also affect the way their offspring looked in the early instars, resembling earlier ametabolous generations even after they had started to adapt to a new way of living, in a habitat where they actually could have some good use for flaps along their body. Since they matured in the same way as thysanurans, with plenty of moultings as they were growing and very little difference between the adults and much younger individuals (unlike modern insects, which are hemimetabolous or holometabolous), there probably was not as much room for adaptation into different niches depending on age and stage. Also, it would have been difficult for an animal already adapted to a niche to make a switch to a new niche later in life based on age or size differences alone when these differences were not significant. So proto-insects had to specialize and focus their whole existence on improving a single lifestyle in a particular niche. The older the species and the single individuals became, the more they would differ from their original form as they adapted to their new lifestyles better than the generations before. The final body structure was no longer achieved while still inside the egg, but continued to develop for most of a lifetime, causing a bigger difference between the youngest and oldest individuals. Assuming that mature individuals most likely mastered their new element better than did the nymphs who had the same lifestyle, it would appear to be an advantage if the immature members of the species reached adult shape and form as soon as possible. This may explain why they evolved fewer but more intense instars and a stronger focus on the adult body, with greater differences between the adults and the first instars, instead of just gradually growing bigger as earlier generations had done. This evolutionary trend explains how they went from ametabolous to hemimetabolous insects. Reaching maturity and a fully grown body became only a part of the development process; gradually a new anatomy and new abilities, possible only in the later stages of life, emerged. The anatomy insects were born and grew up with had limitations which the adults who had learned to fly did not suffer from. If they were unable to live their early life the way adults did, immature individuals had to adapt to the best way of living and surviving despite their limitations until the moment came when they could leave them behind. This would be a starting point in the evolution where imago and nymphs started to live in different niches, some more clearly defined than others. 
Also, a final anatomy, size and maturity reached all at once in a single final nymphal stage meant less waste of time and energy, and it also made possible a more complex adult body structure. These strategies evidently became very successful over time.
https://en.wikipedia.org/wiki/WASH
WASH
WASH (or WatSan, WaSH; stemming from the first letters of "water, sanitation and hygiene") is a sector in development cooperation, or within local governments, that provides water, sanitation, and hygiene services to communities. The main purposes of providing access to WASH services are to achieve public health gains, implement the human right to water and sanitation, reduce the burden of collecting drinking water for women, and improve education and health outcomes at schools and healthcare facilities. Access to WASH services is an important component of water security. Universal, affordable, and sustainable access to WASH is a key issue within international development, and is the focus of the first two targets of Sustainable Development Goal 6 (SDG 6). Targets 6.1 and 6.2 aim for equitable and accessible water and sanitation for all. In 2017, it was estimated that 2.3 billion people live without basic sanitation facilities, and 844 million people live without access to safe and clean drinking water. The acronym WASH is used widely by non-governmental organizations and aid agencies in developing countries. The WASH-attributable burden of disease and injuries has been studied in depth. Typical diseases and conditions associated with a lack of WASH include diarrhea, malnutrition, and stunting, in addition to neglected tropical diseases. There are additional health risks for women, for example, during pregnancy and birth, or in connection with menstrual hygiene management. Chronic diarrhea can have long-term negative effects on children in terms of both physical and cognitive development. Still, collecting precise scientific evidence regarding health outcomes that result from improved access to WASH is difficult due to a range of complicating factors. Scholars suggest a need for longer-term studies of technological efficiency, greater analysis of sanitation interventions, and studies of the combined effects of multiple interventions to better analyze WASH health outcomes. Access to WASH is required not only at the household level but also in non-household settings like schools, healthcare facilities, workplaces, prisons, temporary use settings and for dislocated populations. In schools, group handwashing facilities can improve hygiene. Lack of WASH facilities at schools often causes female students to not attend school, thus reducing their educational achievements. It is difficult to provide safely managed WASH services in urban slums. WASH systems can also fail quite soon after installation (e.g., leaking water distribution systems). Further challenges include polluted water sources and the impacts of climate change on water security. Planning approaches for more reliable and equitable access to WASH include, for example, national WASH plans and monitoring, women's empowerment, and improving the climate resilience of WASH services. Adaptive capacity in water management systems can help to absorb some of the impacts of climate-related events and increase climate resilience. Stakeholders at various scales, for example, from small urban utilities to national governments, need to have access to reliable information about the regional climate and any expected changes due to climate change. Components The WASH concept groups together the various aspects of water supply, including access to drinking water services, sanitation, and hygiene because the impact of deficiencies in each area overlap strongly. 
Drinking water services WHO and UNICEF state that a safe drinking water service is one that is located in an accessible location, available when needed, and uncontaminated. Additionally, WHO and UNICEF use the terms improved water source and unimproved water source as a water quality monitoring tool. The term "improved water source" refers to piped water on premises. Examples include a piped household water connection located inside the user's dwelling plot or yard, and other improved drinking water sources such as public taps or standpipes, tube wells or boreholes, protected dug wells, protected springs, and rainwater collection. Access to drinking water is included in Target 6.1 of Sustainable Development Goal 6 (SDG 6), which states: "By 2030, achieve universal and equitable access to safe and affordable drinking water for all." This target's single indicator, Indicator 6.1.1, which states "Proportion of population using safely managed drinking water services." In 2017, 844 million people still lacked even a basic drinking water service. In 2019, it was reported that 435 million people used unimproved sources for their drinking water, and 144 million still used surface water, such as lakes and streams. Drinking water can be sourced from the following water sources: surface water, groundwater, or rainwater, in each case after collection, treatment, and distribution. Desalinated seawater is another potential source for drinking water. People without access to safe, reliable, domestic water supplies face lower water security at specific times throughout the year due to cyclical changes in water quantity or quality. For example, where access to water on-premises is not available, drinking water quality at the point of use (PoU) can be much worse compared to the quality at the point of collection (PoC). Correct household practices around hygiene, storage, and treatment are therefore important. There are interactions between weather, water source, and management, and these in turn impact drinking water safety. Groundwater Groundwater provides critical freshwater supply, particularly in dry regions where surface water availability is limited. Globally, more than one-third of the water used originates from underground. In the mid-latitude arid and semi-arid regions lacking sufficient surface water supply from rivers and reservoirs, groundwater is critical for sustaining global ecology and meeting societal needs of drinking water and food production. The demand for groundwater is rapidly increasing with population growth, while climate change is imposing additional stress on water resources and raising the probability of severe drought occurrence. The anthropogenic effects on groundwater resources are mainly due to groundwater pumping and the indirect effects of irrigation and land use changes. Groundwater plays a central role in sustaining water supplies and livelihoods in sub-Saharan Africa. In some cases, groundwater is an additional water source that was not used previously. Reliance on groundwater is increasing in sub-Saharan Africa as development programs work towards improving water access and strengthening resilience to climate change. Lower-income areas typically install groundwater supplies without water quality treatment infrastructure or services. The assumption that untreated groundwater is typically suitable for drinking due to its relative microbiological safety compared to surface water underpins this practice, largely disregarding chemistry risks. 
Chemical contaminants occur widely in groundwater sources that are used for drinking but are not regularly monitored. Example priority parameters are fluoride, arsenic, nitrate, or salinity. Sanitation services Sanitation systems are grouped into several types. The ladder of sanitation services includes (from lowest to highest): open defecation, unimproved, limited, basic, safely managed. A distinction is made between sanitation facilities that are shared between two or more households (a "limited service") and those that are not shared (a "basic service"). The definition of improved sanitation facilities is facilities designed to hygienically separate excreta from human contact. With regard to toilets, improved sanitation includes the following types of toilets: flush toilet, connections to a piped sewer system, septic systems, pour-flush pit latrines, pit latrines with slabs, ventilated improved pit latrines, and composting toilets. Access to sanitation services is included in Target 6.2 of Sustainable Development Goal 6, which is: "By 2030, achieve access to adequate and equitable sanitation and hygiene for all and end open defecation, paying special attention to the needs of women and girls and those in vulnerable situations." This target has one indicator: Indicator 6.2.1 which states "proportion of population using (a) safely managed sanitation services and (b) a hand-washing facility with soap and water". In 2017, 4.5 billion people did not have toilets at home that could safely manage waste, despite improvements in access to sanitation over the past decades. Approximately 600 million people share a toilet or latrine with other households, and 892 million people practice open defecation. There are many barriers that make it difficult to achieve sanitation for all. These include social, institutional, technical and environmental challenges. Therefore, the problem of providing access to sanitation services cannot be solved by focusing on technology alone. Instead, it requires an integrated perspective that includes planning, using economic opportunities (e.g. from reuse of excreta), and behavior change interventions. Fecal sludge management and sanitation workers Sanitation services would not be complete without safe fecal sludge management (FSM), which is the storage, collection, transport, treatment, and safe end use or disposal of fecal sludge. Fecal sludge is defined very broadly as what accumulates in onsite sanitation systems (e.g. pit latrines, septic tanks and container-based solutions) and specifically is not transported through a sewer. Sanitation workers are the people needed for cleaning, maintaining, operating, or emptying a sanitation technology at any step of the sanitation chain. Hygiene Hygiene is a broad concept. "Hygiene refers to conditions and practices that help to maintain health and prevent the spread of diseases." Hygiene can comprise many behaviors, including hand washing, menstrual hygiene and food hygiene. In the context of WASH, hand washing with soap and water is regarded as a top priority in all settings and has been chosen as an indicator for national and global monitoring of hygiene access. "Basic hygiene facilities" are those where people have a hand washing facility with soap and water available on their premises. Hand washing facilities can consist of a sink with tap water, buckets with taps, tippy-taps, and portable basins. In the context of SDG 6, hygiene is included in the indicator for Target 6.2: "Proportion of population using [...] 
(b) a hand-washing facility with soap and water". In 2017, the global situation was reported as follows: only 1 in 4 people in low-income countries had hand washing facilities with soap and water at home, and only 14% of people in Sub-Saharan Africa had hand washing facilities. Worldwide, at least 500 million women and girls lack adequate, safe, and private facilities for managing menstrual hygiene. Approximately 40% of the world's population live without basic hand washing facilities with soap and water at home. Purposes The purposes of providing access to WASH services include achieving public health gains, improving human dignity in the case of sanitation, implementing the human right to water and sanitation, reducing the burden of collecting drinking water for women, reducing risks of violence against women, improving education and health outcomes at schools and health facilities, and reducing water pollution. Access to WASH services is also an important component of achieving water security. Improving access to WASH services can improve health, life expectancy, student learning, gender equality, and other important issues of international development. It can also assist with poverty reduction and socioeconomic development. Health aspects of lack of WASH services Categories of health impacts Health impacts resulting from a lack of safe sanitation systems fall into three categories: Direct impact (infections): the direct impacts include fecal–oral infections (transmitted through the fecal–oral route), helminth infections and insect vector diseases (see also waterborne diseases, which spread via contaminated drinking water). For example, lack of clean water and proper sanitation can result in feces-contaminated drinking water and cause life-threatening diarrhea for infants. Sequelae (conditions caused by a preceding infection): these impacts include stunting or growth faltering, consequences of stunting (obstructed labour, low birth weight), impaired cognitive function, pneumonia (related to repeated diarrhea in undernourished children), and anemia (related to hookworm infections). Broader well-being: these include anxiety, sexual assault (and related consequences), adverse birth outcomes as well as long-term problems such as school absence, poverty, decreased economic productivity, and antimicrobial resistance. WASH-attributable burden of diseases and injuries The WHO has investigated what proportion of death and disease worldwide can be attributed to insufficient WASH services. In their analysis they focus on the following four health outcomes: diarrhea, acute respiratory infections, malnutrition, and soil-transmitted helminthiasis (STH). These health outcomes are also included as an indicator for achieving Sustainable Development Goal 3 ("Good Health and Well-being"): Indicator 3.9.2 reports on the "mortality rate attributed to unsafe water, sanitation, and lack of hygiene". In 2023, WHO summarized the available data with the following key findings: "In 2019, use of safe WASH services could have prevented the loss of at least 1.4 million lives and 74 million disability-adjusted life years (DALYs) from four health outcomes. This represents 2.5% of all deaths and 2.9% of all DALYs globally." Of the four health outcomes studied, diarrheal disease showed the most striking correlation, namely the highest attributable burden of disease: over 1 million deaths and 55 million DALYs from diarrheal diseases were linked with lack of WASH. 
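These headline percentages follow from standard attributable-burden arithmetic. The sketch below is illustrative only and uses rounded totals rather than WHO's exact inputs (WHO's actual method models several exposure levels); in the simplest single-exposure form (Levin's formula), the population-attributable fraction (PAF) and the resulting attributable burden are

\[
\mathrm{PAF} = \frac{p\,(RR - 1)}{1 + p\,(RR - 1)}, \qquad
\text{attributable deaths} = \mathrm{PAF} \times \text{total deaths},
\]

where p is the proportion of the population exposed (here, living without safe WASH services) and RR is the relative risk of the outcome among the exposed. As a rough consistency check on the figures quoted above: 1.4 million WASH-attributable deaths against roughly 56 million deaths worldwide in 2019 (the total implied by the reported share) gives 1.4 / 56 ≈ 2.5%, matching the percentage cited by WHO; and a PAF of 100%, as estimated for soil-transmitted helminthiasis, simply means that the entire burden of that disease is counted as attributable to unsafe WASH.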
Of these, 564,000 deaths were linked to unsafe sanitation in particular. Acute respiratory infections were the second largest cause of WASH-attributable burden of disease in 2019, followed by malnutrition and soil-transmitted helminthiasis. The latter causes comparatively few deaths but is entirely attributable to unsafe WASH; its "population-attributable fraction" is estimated to be 100%. The connection between lack of WASH and burden of disease is primarily one of poverty and poor access in developing countries: "the WASH-attributable mortality rates were 42, 30, 4.4 and 3.7 deaths per 100 000 population in low-income, lower-middle income, upper-middle income and high-income countries, respectively." The regions most affected are the WHO Africa and South-East Asia regions. Here, between 66% and 76% of the diarrheal disease burden could be prevented if access to safe WASH services was provided. Most of the diseases resulting from lack of sanitation have a direct relation to poverty. For example, open defecation – which is the most extreme form of "lack of sanitation" – is a major factor in causing various diseases, most notably diarrhea and intestinal worm infections. An earlier report by the World Health Organization, which analyzed data up to 2016, had found higher values: "The WASH-attributable disease burden amounts to 3.3% of global deaths and 4.6% of global DALYs. Among children under 5 years, WASH-attributable deaths represent 13% of deaths and 12% of DALYs. Worldwide, 1.9 million deaths and 123 million DALYs could have been prevented in 2016 with adequate WASH." An even earlier study from 2002 had estimated even higher values, namely that up to 5 million people die each year from preventable waterborne diseases. These changes in the estimates of death and disease can partly be explained by the progress that has been achieved in some countries in improving access to WASH. For example, several large Asian countries (China, India, Indonesia) managed to increase "safely managed sanitation services" in their countries from 2015 to 2020 by more than 10 percentage points. List of diseases There are at least the following twelve diseases which are more likely to occur when WASH services are inadequate: Diarrheal diseases Respiratory infections Soil-transmitted helminth infections Malaria Trachoma Schistosomiasis Lymphatic filariasis Onchocerciasis Dengue Japanese encephalitis Protein–energy malnutrition Drowning There are also other diseases where adverse health outcomes are likely to be linked to inadequate WASH but which are not yet quantified. These include for example: Arsenicosis Fluorosis Legionellosis Leptospirosis Hepatitis A and Hepatitis E Cyanobacterial toxins Lead poisoning Scabies Spinal injury Poliomyelitis Neonatal conditions (see also infant mortality) and maternal outcomes (such as maternal deaths) Other diseases, e.g. most neglected tropical diseases Diarrhea, malnutrition and stunting Diarrhea is primarily transmitted through fecal–oral routes. In 2011, infectious diarrhea resulted in about 0.7 million deaths in children under five years old and 250 million lost school days. This equates to about 2000 child deaths per day. Children suffering from diarrhea are more vulnerable to becoming underweight (due to stunted growth). This makes them more vulnerable to other diseases such as acute respiratory infections and malaria. Chronic diarrhea can have a negative effect on child development (both physical and cognitive).
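To make the arithmetic behind such attributable-burden figures concrete, the sketch below applies the standard definition (attributable burden = population-attributable fraction × total burden) to the numbers quoted above. This is a simplified illustration only, not the WHO's actual estimation procedure, and the soil-transmitted helminthiasis DALY figure used is a hypothetical placeholder.

```python
# Simplified sketch of attributable-burden arithmetic (illustrative only).
# Standard definition: attributable_burden = PAF * total_burden, where PAF is
# the population-attributable fraction for the exposure (here: unsafe WASH).

wash_attributable_deaths = 1_400_000   # WHO figure quoted above for 2019
share_of_all_deaths = 0.025            # "2.5% of all deaths"

# Sanity check: the total global death toll implied by the two quoted numbers.
implied_total_deaths = wash_attributable_deaths / share_of_all_deaths
print(f"Implied global deaths in 2019: {implied_total_deaths:,.0f}")   # ~56 million

# For soil-transmitted helminthiasis the PAF is taken as 100%, so its whole
# burden counts as WASH-attributable (the DALY figure below is a placeholder).
sth_total_dalys = 1.9e6
sth_paf = 1.0
print(f"WASH-attributable STH DALYs: {sth_paf * sth_total_dalys:,.0f}")
```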
Numerous studies have shown that improvements in drinking water and sanitation (WASH) lead to decreased risks of diarrhea. Such improvements might include, for example, the use of water filters, the provision of high-quality piped water, and sewer connections. Diarrhea can be prevented, and the lives of an estimated 525,000 children saved annually (2017 estimate), by improved sanitation, clean drinking water, and hand washing with soap. In 2008 the same figure was estimated at 1.5 million children. The combination of direct and indirect deaths from malnutrition caused by unsafe water, sanitation and hygiene (WASH) practices was estimated by the World Health Organization in 2008 to lead to 860,000 deaths per year in children under five years of age. The multiple inter-dependencies between malnutrition and infectious diseases make it very difficult to quantify the portion of malnutrition that is caused by infectious diseases which are in turn caused by unsafe WASH practices. Based on expert opinions and a literature survey, researchers at WHO concluded that approximately half of all cases of malnutrition (which often leads to stunting) in children under five are associated with repeated diarrhea or intestinal worm infections as a result of unsafe water, inadequate sanitation or insufficient hygiene. Neglected tropical diseases Water, sanitation and hygiene interventions help to prevent many neglected tropical diseases (NTDs), for example soil-transmitted helminthiasis. Approximately two billion people are infected with soil-transmitted helminths worldwide. This type of intestinal worm infection is transmitted via worm eggs in feces which in turn contaminate soil in areas where sanitation is poor. An integrated approach to NTDs and WASH benefits both sectors and the communities they aim to serve. This is especially true in areas where more than one NTD is endemic. Since 2015, the World Health Organization (WHO) has had a global strategy and action plan to integrate WASH with other public health interventions in order to accelerate elimination of NTDs. The plan aimed to intensify control of, or eliminate, certain NTDs in specific regions by 2020. It refers to the NTD roadmap milestones that included, for example, eradication of dracunculiasis by 2015 and of yaws by 2020, elimination of trachoma and lymphatic filariasis as public health problems by 2020, and intensified control of dengue, schistosomiasis and soil-transmitted helminthiases. The plan consists of four strategic objectives: improving awareness of the benefits of joint WASH and NTD actions; monitoring WASH and NTD actions to track progress; strengthening evidence of how to deliver effective WASH interventions; and planning, delivering and evaluating WASH and NTD programs with the involvement of all stakeholders. Additional health risks for women Women tend to face a higher risk of diseases and illness due to limited WASH access. In their third trimester, pregnant women face severe hardship walking to and from a water collection site. The consumption of unclean water, which can lead to infection of the fetus, accounts for 15% of deaths among women during pregnancy globally. Illnesses and diseases that can come from poor menstrual hygiene management become more likely when clean water and toilets are unavailable. In Bangladesh and India, women rely on old cloths to absorb menstrual blood and use water to clean and reuse them.
Without access to clean water and hygiene, these women may experience unnecessary health problems in connection with their periods. Health risks for sanitation workers Effects of climate change on health risks Global climate change can increase the health risks for some of the infectious diseases mentioned above. See below in the section on negative impacts of climate change. Effectiveness of WASH interventions on health outcomes There is debate in the academic literature about how effective WASH programs in low- and middle-income countries are at improving health outcomes. Many studies provide poor quality evidence on the causal impact of WASH programs on health outcomes of interest. The nature of WASH interventions is such that high quality trials, such as randomized controlled trials (RCTs), are expensive, difficult and in many cases not ethical. Causal estimates from such studies are thus prone to bias due to residual confounding. Blind studies of WASH interventions also pose ethical challenges and difficulties associated with implementing new technologies or behavioral changes without the knowledge of the involved participants. Moreover, scholars suggest a need for longer-term studies of technology efficacy, greater analysis of sanitation interventions, and studies of the combined effects of multiple interventions in order to better gauge WASH health outcomes. Many scholars have attempted to summarize the evidence on WASH interventions from the limited number of high quality studies. Hygiene interventions, in particular those focusing on the promotion of handwashing, appear to be especially effective in reducing morbidity. A meta-analysis of the literature found that handwashing interventions reduced the relative risk of diarrhea by approximately 40%. Similarly, handwashing promotion has been found to be associated with a 47% decrease in morbidity. However, a challenge with WASH behavioral intervention studies is an inability to ensure compliance with such interventions, especially when studies rely on self-reporting of disease rates. This prevents researchers from establishing a causal relationship between decreased morbidity and the intervention. For example, researchers may conclude that educating communities about handwashing is effective at reducing disease, but cannot conclude that handwashing itself reduces disease. Point-of-use water supply and point-of-use water quality interventions also show similar effectiveness to handwashing, with those that include provision of safe storage containers demonstrating increased disease reduction in infants. Specific types of water quality improvement projects can have a protective effect on morbidity and mortality. A randomized controlled trial in India concluded that the provision of chlorine tablets for improving water quality led to a 75% decrease in the incidence of cholera among the study population. A quasi-randomized study on historical data from the United States also found that the introduction of clean water technologies in major cities was responsible for close to half the reduction in total mortality and over three-quarters of the reduction in infant mortality. Distributing chlorine products, or other water disinfectants, for use in the home may reduce instances of diarrhea. However, most studies on water quality improvement interventions suffer from residual confounding or poor adherence to the mechanism being studied.
For instance, a study conducted in Nepal found that adherence to the use of chlorine tablets or chlorine solution to purify water was as low as 18.5% among program households. A study on a water well chlorination program in Guinea-Bissau in 2008 reported that families stopped treating water within their households because of the program which consequently increased their risk of cholera. It was concluded that well chlorination without proper promotion and education led to a false sense of security. Studies on the effect of sanitation interventions alone on health are rare. When studies do evaluate sanitation measures, they are mostly included as part of a package of different interventions. A pooled analysis of the limited number of studies on sanitation interventions suggest that improving sanitation has a protective effect on health. A UNICEF funded sanitation intervention (packaged into a broader WASH intervention) was also found to have a protective effect on under-five diarrhea incidence but not on household diarrhea incidence. Gender aspects of lack of WASH services Women and girls are particularly burdened from lack of proper WASH services. Inadequate access to water and sanitation affect women and girls in several ways because of social norms in some cultures that position them as principal household water collectors and managers, the inability to urinate easily outside of an unclean stall or where no toilets are nearby, and due to the effects of menstruation beginning during puberty. These effects include low participation in the labor market and community activities, adverse biomedical outcomes, psychosocial stress, and poor educational outcomes. Women and girls often bear higher health and social costs associated with water and sanitation insecurity than men and boys, such as higher exposure to water-related disease, discriminatory taboos, and unrealized economic productivity. Time required to collect water The lack of accessible, sufficient, clean and affordable water supply has adverse impacts specifically related to women in developing nations. It is estimated that 263 million people worldwide spent over 30 minutes per round trip to collect water from an improved source. In sub-Saharan Africa, women and girls carry water containers for an average of three miles each day, spending 40 billion hours per year on water collection (walking to the water source, waiting in line, walking back). The time to collect water can come at the expense of education, income generating activities, cultural and political involvement, and rest and recreation. For example, in low-income areas of Nairobi, women carry 44 pound containers of water back to their homes, taking anywhere between an hour and several hours to wait and collect the water. In many places of the world, getting and providing water is considered "women's work," so gender and water access are intricately linked. Water gathering and supply to family units remains primarily a woman's task in less developed countries where water gathering is considered a main chore. This water work is also largely unpaid household work based on patriarchal gender norms and often related to domestic work, such as laundry, cooking and childcare. Areas that rely on women to primarily collect water include countries in Africa, South Asia and in the Middle East. Violence against women Women and girls usually bear the responsibility for collecting water, which is often very time-consuming and arduous, and can also be dangerous for them. 
Women and girls who collect water may also face physical assault and sexual assault along the way (violence against women). This includes vulnerability to rape when collecting water from distant areas, domestic violence over the amount of water collected, and fights over scarce water supply. A study in India, for example, found that women felt intense fear of sexual violence when accessing water and sanitation services. A similar study in Uganda also found that women reported feeling unsafe while walking to toilets, particularly at night. Gender norms for occupations Gender norms can negatively affect how men and women access water through such behavior expectations along gender lines. For example, when water collection is a woman's chore, men who collect water may face discrimination for performing perceived women's work. Women are likely to be deterred from entering water utilities in developing countries because "social norms prescribe that it is an area of work that is not suitable for them or that they are incapable of performing well". Nevertheless, a study by the World Bank in 2019 found that the proportion of female water professionals has grown in the past few years. In many societies, the task of cleaning toilets falls to women or children, which can increase their exposure to disease. In non-household settings Non-household settings for WASH include the following six types: schools, health care facilities, workplaces (including prisons), temporary use settings, mass gatherings, and dislocated populations. In schools More than half of all primary schools in the developing countries with available data do not have adequate water facilities and nearly two thirds lack adequate sanitation. Even where facilities exist, they are often in poor condition. Children are able to more fully participate in school when there is improved access to water. Lack of WASH facilities can prevent students from attending school, particularly female students. Strong cultural taboos around menstruation, which are present in many societies, coupled with a lack of menstrual hygiene management services in schools, result in girls staying away from school during menstruation. Reasons for missing or poorly maintained water and sanitation facilities at schools in developing countries include a lack of intersectoral collaboration; a lack of cooperation between schools, communities and different levels of government; and a lack of leadership and accountability. Outcomes from improved WASH at schools WASH in schools, sometimes called SWASH or WinS, significantly reduces hygiene-related disease, increases student attendance and contributes to dignity and gender equality. WASH in schools contributes to healthy, safe and secure school environments. It can also lead to children becoming agents of change for improving water, sanitation and hygiene practices in their families and communities. For example, data from over 10,000 schools in Zambia were analyzed in 2017 and confirmed that improved sanitation provision in schools was correlated with higher female-to-male enrollment ratios and reduced repetition and drop-out ratios, especially for girls. Methods to improve WASH in schools Methods to improve the situation of WASH infrastructure at schools include, on a policy level: broadening the focus of the education sector, establishing a systematic quality assurance system, and distributing and using funds wisely.
Other practical recommendations include: having a clear and systematic mobilization strategy, supporting the education sector to strengthen intersectoral partnerships, establishing a constant monitoring system located within the education sector, educating the educators, and partnering with the school management. The support provided by development agencies to the government at national, state and district levels is helpful to gradually create what is commonly referred to as an enabling environment for WASH in schools. Success also hinges on local-level leadership and a genuine collective commitment of school stakeholders towards school development. This applies to students and their representative clubs, headmasters, teachers and parents. Furthermore, other stakeholders have to be engaged in their direct sphere of influence, such as community members, community-based organizations, education officials, and local authorities. Group handwashing Supervised daily group handwashing in schools is an effective strategy for building good hygiene habits, with the potential to lead to positive health and education outcomes for children. This has for example been implemented in the "Essential Health Care Program" by the Department of Education in the Philippines. Mass deworming twice a year, supplemented by washing hands daily with soap and brushing teeth daily with fluoride, is at the core of this national program. It has also been successfully implemented in Indonesia. In healthcare facilities The provision of adequate water, sanitation and hygiene is an essential part of providing basic health services in healthcare facilities. WASH in healthcare facilities aids in preventing the spread of infectious diseases and protects staff and patients. WASH services in health facilities in developing countries are currently often lacking. According to the World Health Organization, data from 54 countries in low and middle income settings, representing 66,101 health facilities, show that 38% of health care facilities lack improved water sources, 19% lack improved sanitation, and 35% lack access to water and soap for handwashing. The absence of basic WASH amenities compromises the ability to provide routine services and hinders the ability to prevent and control infections. The provision of water in health facilities was lowest in Africa, where 42% of healthcare facilities lack an improved source of water on-site or nearby. The provision of sanitation is lowest in the Americas, with 43% of health care facilities lacking adequate services. In 2019, WHO estimated that: "One in four health care facilities lack basic water services, and one in five have no sanitation service – impacting 2.0 and 1.5 billion people, respectively." Furthermore, it is estimated that "health care facilities in low-income countries are at least three times as likely to have no water service as facilities in higher resource settings". This is thought to contribute to the fact that maternal sepsis is twice as common in developing countries as it is in high income countries. Barriers to providing WASH in health care facilities include: incomplete standards, inadequate monitoring, disease-specific budgeting, a disempowered workforce, and poor WASH infrastructure. The improvement of WASH standards within health facilities needs to be guided by national policies and standards as well as an allocated budget to improve and maintain services.
A number of solutions exist that can considerably improve the health and safety of both patients and service providers at health facilities: Availability of safe water for drinking, but also for use in surgery and childbirth, food preparation, bathing and showering: There is a need for improved water pump systems within health facilities. Improved handwashing practices among healthcare staff must be implemented. This requires functional hand washing stations at strategic locations within the health facilities, i.e. at points of care and at toilets. Waste system management: Proper health care waste management and the safe disposal of excreta and waste water are crucial to preventing the spread of disease. Hygiene promotion for patients, visitors and staff. Accessible and clean toilets, separated by gender, in sufficient numbers for staff, patients and visitors. Improving access to hand washing and sanitation facilities in healthcare settings will significantly reduce infection and mortality rates, particularly in maternal and child health. In prisons In developing countries, prison buildings are very often overcrowded and dilapidated. A report by the ICRC states that "Measures depriving persons of their freedom must in no way, whatever the circumstances, be made more severe by treatment or material conditions of detention which undermine the dignity and the rights of the individual." The water supply systems and sanitary facilities in prisons are often insufficient to meet the needs of the prison population in cases where the number of detainees exceeds a prison's capacity. Overuse of the facilities results in rapid deterioration. The budget allocated by the State for prisons is often insufficient to cover the detainees' needs in terms of food and medical care, let alone upkeep of water and sanitation facilities. Nevertheless, even with limited funds, it is possible to maintain or renovate decaying infrastructure with the right planning approaches and suitable low-cost water supply and sanitation options. Challenges in WASH implementation Equitable access to drinking water supply There are inequalities in access to water, sanitation and hygiene services. Such inequalities are for example related to income level and gender. In 2019, in 24 countries where disaggregated data were available, basic water coverage among the richest wealth quintile was at least twice as high as coverage among the poorest quintile. For example, in Bangladesh, minority ethnic groups have lower levels of access to WASH than the rest of the Bengali population. This is due to "structural racial discrimination" in Bangladesh. Access to WASH services also varies internally within nations depending on socioeconomic status, political power, and level of urbanization. In 2004 it was found that urban households are 30% and 135% more likely to have access to improved water sources and sanitation respectively, as compared to rural areas. The human rights to water and sanitation prohibit discrimination on the grounds of "race, color, sex, language, religion, political or other opinion, national or social origin, property, birth, disability or other status". These are all dimensions of inequality in WASH services. Urban low income areas There are three main barriers to improvement of urban services in slum areas: Firstly, insufficient supply, especially of networked services. Secondly, there are usually demand constraints that limit people's access to these services (for example due to low willingness to pay).
Thirdly, there are institutional constraints that prevent the poor from accessing adequate urban services. Polluted water sources Water supply sources include surface water and groundwater. These important water resources are often at risk of being polluted or overused. Failures of WASH systems over time The failures of water supply systems (such as water points, wells and boreholes) and sanitation systems have been well documented. This has been attributed to financial costs, inadequate technical training for operations and maintenance, poor use of new facilities and taught behaviors, and a lack of community participation and ownership. The poorest populations often cannot afford the fees required for operation and maintenance of WASH infrastructure, preventing them from benefitting even when systems do exist. Contamination of water in distribution systems is a challenge and can contribute to the spread of waterborne diseases. Working conditions of sanitation workers Climate change aspects Greenhouse gas emissions Water and sanitation services contribute to greenhouse gas emissions. These emissions are grouped into three scopes in the international greenhouse gas protocol: direct emissions, as well as two types of indirect emissions (see below). Direct emissions (Scope 1) Scope 1 includes "direct emissions resulting directly from the activity". In the WASH sector, these are methane and nitrous oxide emissions during wastewater and sewage sludge treatment. Sanitation services produce about 2–6% of global human-caused methane emissions. Septic tanks, pit latrines, anaerobic lagoons and anaerobic digesters are all anaerobic treatment processes that emit methane, which may or may not be captured (in the case of septic tanks it is usually not captured). It has been estimated, using data from 2012 and 2013, that "wastewater treatment in centralized facilities contributes alone some 3% of global nitrous oxide emissions and 7% of anthropogenic methane emissions". Data from 2023 from centralized sewage treatment plants in the United States indicate that methane emissions are about twice the estimates provided by the IPCC in 2019, i.e. 10.9 ± 7.0 compared to 4.3–6.1 MMT (million metric tons) CO2-eq/yr. Current methods for estimating sanitation emissions underestimate the significance of methane emissions from non-sewered sanitation systems (NSSS). This is despite the fact that such sanitation systems are prevalent in many countries. NSSS play a vital role in the safe management of fecal sludge and account for approximately half of all existing sanitation provisions. The global methane emissions from NSSS in 2020 were estimated to be 377 Mt CO2e/year, or 4.7% of global anthropogenic methane emissions. This is comparable to the greenhouse gas emissions from conventional wastewater treatment plants. Therefore, the GHG emissions from non-sewered sanitation systems are a non-negligible source. India and China contribute extensively to methane emissions from NSSS because of their large populations and NSSS utilization. Indirect emissions associated with the energy required (Scope 2) Scope 2 includes "indirect emissions associated with the energy required by the activity". Companies that deal with water and wastewater services need energy for various processes. They use the energy mix that is available in the country. The higher the proportion of fossil fuels in the energy mix, the higher the GHG emissions under Scope 2 will be. The processes that need energy include: water abstraction (e.g.
groundwater pumping), drinking water storage, water conveyance, water treatment, water distribution, treatment of wastewater, water end use (e.g. water heating), desalination and wastewater reuse. For example, electrical energy is needed for pumping of sewage and for mechanical aeration in activated sludge treatment plants. When looking at the emissions from the sanitation and wastewater sector, most people focus on treatment systems, particularly treatment plants. This is because treatment plants require considerable energy input and are estimated to account for 3% of global electricity consumption. This makes sense for high-income countries, where wastewater treatment is the biggest energy consumer compared to other activities of the water sector. The aeration processes that are used in many secondary treatment processes are particularly energy intensive (using about 50% of the total energy required for treatment). The amount of energy needed to treat wastewater depends on several factors: the wastewater quantity and quality (i.e. how much there is and how polluted it is), and the treatment level required, which in turn influences the type of treatment process that gets selected. The energy efficiency of the treatment process is another factor. Indirect emissions related to the activity but caused by other organizations (Scope 3) Scope 3 includes "indirect emissions related to the activity but caused by other organizations". The indirect emissions under Scope 3 are difficult to assess in a standardized way. They include for example emissions from constructing infrastructure, from the manufacture of chemicals that are needed in the treatment process, and from the management of the by-product sewage sludge. Reducing greenhouse gas emissions Solutions exist to reduce the greenhouse gas emissions of water and sanitation services. These solutions fall into three categories, which partly overlap: firstly, "reducing water and energy consumption through lean and efficient approaches"; secondly, "embracing circular economy to produce energy and valuable products"; and thirdly, "planning to reduce GHG emissions through strategic decisions". The mentioned lean and efficient approaches include, for example, finding ways to reduce water loss from water networks and to reduce infiltration of rainwater or groundwater into sewers. Also, incentives can be used to encourage households and industries to reduce their water consumption and their energy requirements for water heating. Another way to reduce the energy required to turn raw water into drinking water is to better protect the quality of the source water. Methods that fall into the category of circular economy include: reusing water, nutrients and materials; and low-carbon energy production (e.g. solar power on roofs of utility buildings, recovery of waste heat from wastewater, producing hydro-electricity by installing micro-turbines, and producing energy from biosolids and sewage sludge). Strategic decisions around reducing GHG emissions include: awareness raising and education, governance that supports changing practices, providing economic incentives to conserve water and reduce consumption, and finally choosing low-carbon energy and supplies. Negative impacts of climate change The effects of climate change can have negative impacts on existing sanitation services in several ways, for example through damage and loss of services from floods and reduced carrying capacity of waters receiving wastewater.
Weather and climate-related aspects (variability, seasonality and extreme weather events) have always had an impact on the delivery of sanitation services. But now, extreme weather events such as floods and droughts are generally increasing in frequency and intensity due to climate change in many regions. They affect the operation of water supply, storm drainage and sewerage infrastructure, and wastewater treatment plants. Changes in the frequency and intensity of climate extremes could compound current challenges as water availability becomes more uncertain and health risks increase due to contaminated water sources. The effects of climate change can result in a decrease in water availability, an increase in water demand, damage to WASH facilities, and increased water contamination from pollutants. Due to these impacts, climate change can "exacerbate many WASH-related risks and diseases". Climate change poses increased risks to WASH systems, particularly in Sub-Saharan Africa, where access to safely managed basic sanitation is low. In that region, it is the poorly managed WASH systems, for example in informal settlements, that make people more vulnerable to the effects of climate change than people elsewhere. In terms of the water cycle, climate change can affect the amounts of soil infiltration, deeper percolation, and hence groundwater recharge. Also, rising temperatures increase evaporative demand over land, which limits the amount of water available to replenish groundwater. Influence of climate change on waterborne diseases Climate change adaptation Adaptation efforts in the WASH sector include, for example, protection of local water resources (as these resources become source water for drinking water supply) and investigating improvements to the water supply and storage strategy. It might also be necessary to adjust the utility's planning and operation. Climate change adaptation policies need to consider the risks from extreme weather events, and adaptation measures need to address both droughts and floods. Adaptation measures for droughts include, for example: reducing leakages in a proactive manner and communicating restrictions on water use to consumers. Adaptation measures for floods include, for example: reviewing the siting of water and wastewater treatment plants in floodplains and minimizing the impact of floodwater on operational equipment. Nature-based solutions (NbS) can play an important role in climate change adaptation approaches for water and sanitation services. This includes ecological restoration (which can improve infiltration and thus reduce flooding), ecological engineering for wastewater treatment, green infrastructure for stormwater management, and measures for natural water retention. Most National Adaptation Plans published under the UN Framework Convention on Climate Change include measures to improve sanitation and hygiene. Engineers and planners need to adapt design standards for water and sanitation systems to account for changing climate conditions. Otherwise, these infrastructure systems will become more and more vulnerable in the future. The same applies to other key infrastructure systems such as transport, energy and communications. Improving climate resilience Climate-resilient water services (or climate-resilient WASH) are services that provide access to high quality drinking water during all seasons and even during extreme weather events.
Climate resilience in general is the ability to recover from, or to mitigate vulnerability to, climate-related shocks such as floods and droughts. Climate resilient development has become the new paradigm for sustainable development. This concept thus influences theory and practice across all sectors globally. This is particularly true in the water sector, since water security is closely connected to climate change. On every continent, governments are now adopting policies for climate resilient economies. International frameworks such as the Paris Agreement and the Sustainable Development Goals are drivers for such initiatives. Several activities can improve water security and increase resilience to climate risks: Carrying out a detailed analysis of climate risk to make climate information relevant to specific users; developing metrics for monitoring climate resilience in water systems (this will help to track progress and guide investments for water security); and using new institutional models that improve water security. Climate resilient policies can be useful for allocating water, especially when regional water availability may change in future. This requires a good understanding of the current and future hydroclimatic situation. For example, a more accurate prediction of future changes in climate variability leads to a better response to their possible impacts. To build climate resilience into water systems, people need to have access to climate information that is appropriate for their local context. Climate information products are useful if they cover a wide range of temporal and spatial scales, and provide information on regional water-related climate risks. For example, government staff need easy access to climate information to achieve better water management. Four important activities to achieve climate resilient WASH services include: First, a risk analysis is performed to look at possible implications of extreme weather events as well as preventive actions. Such preventive actions can include for example elevating the infrastructure to be above expected flood levels. Secondly, managers assess the scope for reducing greenhouse gas emissions and put in place suitable options, e.g. using more renewable energy sources. Thirdly, the water utilities ensure that water sources and sanitation services are reliable at all times during the year, also during times of droughts and floods. Finally, the management and service delivery models are strengthened so that they can withstand a crisis. To put climate resilience into practice and to engage better with politicians, the following guide questions are useful: "resilience of what, to what, for whom, over what time frame, by whom and at what scale?". For example, "resilience of what?" means thinking beyond infrastructure but to also include resilience of water resources, local institutions and water users. Another example is that "resilience for whom?" speaks about reducing vulnerability and preventing negative developments: Some top-down interventions that work around power and politics may undermine indigenous knowledge and compromise community resilience. Adaptive capacity for climate resilience Adaptive capacity in water management systems can help to absorb some of the impacts of climate-related events and increase climate resilience. Stakeholders at various scales, i.e. from small urban utilities to national governments, need to have access to reliable information which details regional climate and climate change. 
For example, context-specific climate tools can help national policy makers and sub-national practitioners to make informed decisions to improve climate resilience. A global research program called REACH (led by the University of Oxford and funded by the UK Government's Foreign, Commonwealth & Development Office) is developing and using such climate tools for Kenya, Ethiopia and Bangladesh from 2015 to 2024. Approaches for planning and implementation National WASH plans and monitoring UN-Water carries out the Global Analysis and Assessment of Sanitation and Drinking-Water (GLAAS) initiative. This work examines the "extent to which countries develop and implement national policies and plans for WASH, conduct regular monitoring, regulate and take corrective action as needed, and coordinate these parallel processes with sufficient financial resources and support from strong national institutions." Many countries' WASH plans are not supported by the necessary financial and human resources. This hinders their implementation and intended outcomes for WASH service delivery. As of 2022, it is becoming more common for countries to include "climate change preparedness approaches" in their national WASH plans. Preparedness in this context means working on mitigation, adaptation and resilience of WASH systems. Still, most national policies on WASH services do not set out how to address climate risks and how to increase the resilience of infrastructure and management. Women's empowerment There has been a growing understanding of the role of gender in development in recent decades (often called gender mainstreaming). Women's empowerment plays an important role in reducing gender disparities and related adverse outcomes across all sectors, including the WASH sector. Women's empowerment is particularly crucial in WASH, as prevalent social norms assign the majority of water collection roles to women in many developing countries. Empowerment is largely described in the literature as both a process by which WASH services can be improved and a result of improved WASH services. The Empowerment in WASH Index (EWI) was developed in 2019 to guide WASH practitioners in measuring and monitoring gender outcomes, empowerment, and inclusivity in WASH-related interventions. National indices and tools also exist to capture changes in gender disparities at the national level: the Gender Empowerment Measure (GEM), the Gender Inequality Index (GII), and the Gender Development Index (GDI). A scoping review of the literature found five key interrelated dimensions of empowerment in the WASH sector: Access to information (knowledge sharing, awareness creation, and information dissemination), Participation (community engagement, partnerships, and involvement in the design and governance of WASH projects), Capacity building (leveraging of human capital, organizational resources, and social capital to solve collective problems), Leadership and accountability, and Decision-making and inclusiveness. A qualitative study in Asutifi North District in Ghana conceptualized empowerment in terms of four major themes: availability of resources, WASH information, social and cultural structures, and agency (the ability to define and act on individual or shared goals, and to put them into effect). The Dublin Statement on Water and Sustainable Development in 1992 included "Women play a central part in the provision, management and safeguarding of water" as one of four principles.
In 1996, the World Bank Group published a Toolkit on Gender in Water and Sanitation. Gender-sensitive approaches to water and sanitation have proven to be cost effective. History The history of water supply and sanitation is the topic of a separate article. The abbreviation WASH was used from the year 1988 onwards as an acronym for the Water and Sanitation for Health Project of the United States Agency for International Development. At that time, the letter "H" stood for health, not hygiene. Similarly, in Zambia the term WASHE was used in a report in 1987 and stood for Water Sanitation Health Education. An even older USAID WASH project report dates back to as early as 1981. From about 2001 onwards, international organizations active in the area of water supply and sanitation advocacy, such as the Water Supply and Sanitation Collaborative Council and the International Water and Sanitation Centre (IRC) in the Netherlands, began to use WASH as an umbrella term for water, sanitation and hygiene. WASH has since then been broadly adopted as a handy acronym for water, sanitation and hygiene in the international development context. The term WatSan was also used for a while, especially in the emergency response sector such as with the IFRC and UNHCR, but has not proven as popular as WASH. Society and culture Global goals Since 1990, the Joint Monitoring Program for Water Supply and Sanitation (JMP) of WHO and UNICEF has regularly produced estimates of global WASH progress. The JMP was already responsible for monitoring the UN's Millennium Development Goal (MDG) Target 7.C, which aimed to "halve, by 2015, the proportion of the population without sustainable access to safe drinking water and basic sanitation". This was replaced in 2015 by Sustainable Development Goal 6 (SDG 6), which is to "ensure availability and sustainable management of water and sanitation for all" by 2030. To establish a reference point from which progress toward achieving the SDGs could be monitored, the JMP produced "Progress on Drinking Water, Sanitation and Hygiene: 2017 Update and SDG Baselines". Expanding WASH coverage and monitoring in non-household settings such as schools, healthcare facilities, and workplaces is included in Sustainable Development Goal 6. WaterAid International is a non-governmental organization (NGO) that works on improving the availability of safe drinking water in some of the world's poorest countries. Sanitation and Water for All is a partnership that brings together national governments, donors, UN agencies, NGOs and other development partners. They work to improve sustainable access to sanitation and water supply. In 2014, 77 countries had already met the MDG sanitation target, 29 were on track, and 79 were not on track. Awards Important awards for individuals or organizations working on WASH include the Stockholm Water Prize, awarded since 1991, and the Sarphati Sanitation Awards, awarded since 2013 for sanitation entrepreneurship. United Nations organs UNICEF - UNICEF's declared strategy is "to achieve universal and equitable access to safe and affordable drinking water for all". UNICEF includes WASH initiatives in its work with schools in over 30 countries. UN-Water - an interagency mechanism which "coordinates the efforts of UN entities and international organizations working on water and sanitation issues". Awareness raising through observance days The United Nations' International Year of Sanitation in 2008 helped to increase attention to the funding of sanitation in the WASH programs of many donors.
For example, the Bill and Melinda Gates Foundation has increased their funding for sanitation projects since 2009, with a strong focus on reuse of excreta. Awareness raising for the importance of WASH takes place through several United Nations international observance days, namely World Water Day, Menstrual Hygiene Day, World Toilet Day and Global Handwashing Day. By country and region
Biology and health sciences
Fields of medicine
Health
30778041
https://en.wikipedia.org/wiki/Particle
Particle
In the physical sciences, a particle (or corpuscule in older texts) is a small localized object which can be described by several physical or chemical properties, such as volume, density, or mass. They vary greatly in size or quantity, from subatomic particles like the electron, to microscopic particles like atoms and molecules, to macroscopic particles like powders and other granular materials. Particles can also be used to create scientific models of even larger objects depending on their density, such as humans moving in a crowd or celestial bodies in motion. The term particle is rather general in meaning, and is refined as needed by various scientific fields. Anything that is composed of particles may be referred to as being particulate. However, the noun particulate is most frequently used to refer to pollutants in the Earth's atmosphere, which are a suspension of unconnected particles, rather than a connected particle aggregation. Conceptual properties The concept of particles is especially useful when modelling nature, as the full treatment of many phenomena can be complex and also involve difficult computation. It can be used to make simplifying assumptions concerning the processes involved. Francis Sears and Mark Zemansky, in University Physics, give the example of calculating the landing location and speed of a baseball thrown in the air. They gradually strip the baseball of most of its properties, by first idealizing it as a rigid smooth sphere, then by neglecting rotation, buoyancy and friction, ultimately reducing the problem to the ballistics of a classical point particle. The treatment of large numbers of particles is the realm of statistical physics. Size The term "particle" is usually applied differently to three classes of sizes. The term macroscopic particle, usually refers to particles much larger than atoms and molecules. These are usually abstracted as point-like particles, even though they have volumes, shapes, structures, etc. Examples of macroscopic particles would include powder, dust, sand, pieces of debris during a car accident, or even objects as big as the stars of a galaxy. Another type, microscopic particles usually refers to particles of sizes ranging from atoms to molecules, such as carbon dioxide, nanoparticles, and colloidal particles. These particles are studied in chemistry, as well as atomic and molecular physics. The smallest particles are the subatomic particles, which refer to particles smaller than atoms. These would include particles such as the constituents of atoms – protons, neutrons, and electrons – as well as other types of particles which can only be produced in particle accelerators or cosmic rays. These particles are studied in particle physics. Because of their extremely small size, the study of microscopic and subatomic particles falls in the realm of quantum mechanics. They will exhibit phenomena demonstrated in the particle in a box model, including wave–particle duality, and whether particles can be considered distinct or identical is an important question in many situations. Composition Particles can also be classified according to composition. Composite particles refer to particles that have composition – that is particles which are made of other particles. For example, a carbon-14 atom is made of six protons, eight neutrons, and six electrons. By contrast, elementary particles (also called fundamental particles) refer to particles that are not made of other particles. 
According to our current understanding of the world, only a very small number of these exist, such as leptons, quarks, and gluons. However it is possible that some of these might be composite particles after all, and merely appear to be elementary for the moment. While composite particles can very often be considered point-like, elementary particles are truly punctual. Stability Both elementary (such as muons) and composite particles (such as uranium nuclei), are known to undergo particle decay. Those that do not are called stable particles, such as the electron or a helium-4 nucleus. The lifetime of stable particles can be either infinite or large enough to hinder attempts to observe such decays. In the latter case, those particles are called "observationally stable". In general, a particle decays from a high-energy state to a lower-energy state by emitting some form of radiation, such as the emission of photons. N-body simulation In computational physics, N-body simulations (also called N-particle simulations) are simulations of dynamical systems of particles under the influence of certain conditions, such as being subject to gravity. These simulations are common in cosmology and computational fluid dynamics. N refers to the number of particles considered. As simulations with higher N are more computationally intensive, systems with large numbers of actual particles will often be approximated to a smaller number of particles, and simulation algorithms need to be optimized through various methods. Distribution of particles Colloidal particles are the components of a colloid. A colloid is a substance microscopically dispersed evenly throughout another substance. Such colloidal system can be solid, liquid, or gaseous; as well as continuous or dispersed. The dispersed-phase particles have a diameter of between approximately 5 and 200 nanometers. Soluble particles smaller than this will form a solution as opposed to a colloid. Colloidal systems (also called colloidal solutions or colloidal suspensions) are the subject of interface and colloid science. Suspended solids may be held in a liquid, while solid or liquid particles suspended in a gas together form an aerosol. Particles may also be suspended in the form of atmospheric particulate matter, which may constitute air pollution. Larger particles can similarly form marine debris or space debris. A conglomeration of discrete solid, macroscopic particles may be described as a granular material.
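The baseball example mentioned in the section on conceptual properties can be made concrete in a few lines. The sketch below assumes an arbitrary launch speed and angle and, like the idealization described above, neglects drag, spin and buoyancy, treating the ball as a classical point particle.

```python
import math

# Point-particle ballistics: a "baseball" reduced to a classical point mass.
# Launch conditions are assumed values chosen only for illustration.
v0 = 30.0                  # launch speed, m/s
angle = math.radians(40)   # launch angle above the horizontal
g = 9.81                   # gravitational acceleration, m/s^2

vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
t_flight = 2 * vy / g              # time until return to launch height
x_range = vx * t_flight            # horizontal landing distance
v_land = math.hypot(vx, vy)        # landing speed equals launch speed (no drag)

print(f"Range: {x_range:.1f} m, flight time: {t_flight:.1f} s, "
      f"landing speed: {v_land:.1f} m/s")
```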
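Likewise, the N-body simulation idea described above can be illustrated with a minimal direct-summation sketch. It is not production code: real cosmological or fluid-dynamics simulations use far more sophisticated integrators and force approximations, whereas this version uses plain Euler steps, arbitrary units (G = 1), and a softening length chosen here purely for illustration.

```python
import numpy as np

# Minimal N-body gravity sketch: direct summation of pairwise accelerations
# plus simple Euler time-stepping. A softening length avoids the singularity
# when two particles come very close together.
def step(pos, vel, mass, dt, soft=0.01):
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        diff = pos - pos[i]                               # vectors from particle i to all others
        dist3 = (np.sum(diff**2, axis=1) + soft**2) ** 1.5
        dist3[i] = np.inf                                 # exclude self-interaction
        acc[i] = np.sum(mass[:, None] * diff / dist3[:, None], axis=0)
    return pos + vel * dt, vel + acc * dt

rng = np.random.default_rng(0)
pos = rng.standard_normal((50, 3))    # 50 particles scattered in 3D
vel = np.zeros((50, 3))
mass = np.ones(50)
for _ in range(100):                  # advance the system by 100 small steps
    pos, vel = step(pos, vel, mass, dt=0.001)
```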
Physical sciences
Particle physics: General
null
30780223
https://en.wikipedia.org/wiki/Coptotermes%20gestroi
Coptotermes gestroi
Coptotermes gestroi, commonly known as the Asian subterranean termite, is a small species of termite that lives underground. Both this species and the Formosan subterranean termite (Coptotermes formosanus) are destructive pests native to Asia, but have spread to other parts of the world, including the United States. In Asia, this species is known as the Philippine milk termite. The termite species Coptotermes havilandi was determined by Kirton and Brown in 2003 to be identical to Coptotermes gestroi, so following the principle of priority, the older name is now used. Distribution C. gestroi is endemic to Southeast Asia, but has spread to many other parts of the world over the course of the last century. It reached the Marquesas Islands in 1932, Mauritius in 1936, and Réunion in 1957. It reached Barbados in 1937 and spread to many islands in the West Indies. It also occurs in southern Mexico. It was discovered in Fiji in 2009. It was found in a single house in Hawaii in 1963 and was next detected there in 1999 and again in 2000, on the island of Oahu. The species is the subject of a research project at the University of Hawaii. In 1996, a colony was found to be infesting a church and store in Miami, Florida, and another infestation was discovered in 1999 in Key West. Further discoveries were made in 2002 and 2006, and the species appears to have become established in Broward and Miami-Dade Counties. It has also been found on some boats moored off the coast of Florida, and it is thought that the termite may have arrived in Florida via this means, with sexually mature adults reaching the mainland after nuptial flights. In the West Indies, it has become established in some natural woodland habitats, but in Florida, it seems to be restricted to manmade structures, trees growing close to them, and boats. In the mainland United States, this species is likely to remain restricted to southern Florida because it is a tropical species and can only flourish with sufficient warmth. Chouvenc and Helmick (2015) found that C. gestroi readily hybridizes with another invasive termite in Florida, C. formosanus. Description The body of the worker termite is small, white, and translucent, as are the limbs. The soldier is larger and also white, but the ovoid head, the forward-pointing mandibles, the prothorax, and the front segments of the abdomen are dark brown. Two small pale spots are on the head adjacent to the antennae. On the forehead is an opening called a fontanelle which can extrude a white defensive secretion. In appearance, C. gestroi is very similar to C. formosanus, but they can be differentiated under the microscope, with the number of hairs on the head of the soldier differing in the two species. Castes Life history Like other species of termites, an Asian subterranean termite colony contains three primary castes: the workers, soldiers, and reproductives. The workers are responsible for feeding the colony and caring for the young, and the soldiers are responsible for its defence. The king mates with the queen, whose chief function over a life of many years is the continuous laying of eggs. Her abdomen increases in size enormously in comparison to that of the king. The workers feed her, as she is unable to feed herself. In a mature colony, some eggs develop into winged reproductives, known as alates. They emerge above ground and form a swarm containing many thousands of individuals. This usually happens in the evening or at night in the spring.
When a swarm is found inside a building, this may be the first sign to the owners of the presence of termites in the structure. On returning to the ground, the alates shed their wings. They are unlikely to find a suitable place to start a colony inside a building, but in the open, each female looks for an appropriate site with damp soil and moist timber. In a suitable crevice, a female and a male form a nursery chamber and 15 to 30 eggs are laid. These are reared by the king and queen and a second batch of eggs is laid a few weeks later. Workers from the first batch care for these. It may take several years for the colony to build up to sufficient numbers for winged reproductives to be formed. In a research study in Thailand, the foraging population of an average colony was 1.13 to 2.75 million individuals. Damage These termites are voracious feeders and consume wood, cardboard, and paper and sometimes even fabric. They feed on all sorts of cellulose-containing materials and drill holes in such materials as rubber, plastic, and styrofoam in their search for food. They also attack living trees by consuming the heartwood which weakens the trees and can bring them down in a storm. They live underground and enter buildings through cracks, expansion joints, and utility conduits. They sometimes form foraging tubes along the surface of the ground and the outside surfaces of structures. They eat structural timbers from the inside outwards, leaving a thin film of surface wood which may display a blistered appearance. In Singapore and Malaysia, this species is responsible for 80% to 90% of the damage caused to manmade structures by insects and it is the commonest species of termite found in built-up areas.
Biology and health sciences
Cockroaches & Termites (Blattodea)
Animals
41560816
https://en.wikipedia.org/wiki/Acid%20anhydride
Acid anhydride
An acid anhydride is a type of chemical compound derived by the removal of water molecules from an acid. In organic chemistry, organic acid anhydrides contain the functional group -C(=O)-O-C(=O)-. Organic acid anhydrides often form when one equivalent of water is removed from two equivalents of an organic acid in a dehydration reaction. In inorganic chemistry, an acid anhydride refers to an acidic oxide, an oxide that reacts with water to form an oxyacid (an inorganic acid that contains oxygen or carbonic acid), or with a base to form a salt.
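For example (an illustrative case rather than one drawn from the text above), two molecules of acetic acid can lose one molecule of water to give acetic anhydride: 2 CH3COOH → (CH3CO)2O + H2O.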
Physical sciences
Concepts
Chemistry
41561062
https://en.wikipedia.org/wiki/Base%20anhydride
Base anhydride
A base anhydride is an oxide of a chemical element from group 1 or 2 (the alkali metals and alkaline earth metals, respectively). They are obtained by removing water from the corresponding hydroxide base. If water is added to a base anhydride, a corresponding hydroxide salt can be re-formed. Base anhydrides are not Brønsted–Lowry bases because they are not proton acceptors. However, they are Lewis bases, because they will share an electron pair with some Lewis acids, most notably acidic oxides. They are potent alkalis and will produce alkali burns on skin, because their affinity for water (that is, their affinity for being slaked) makes them react with body water. Examples Quicklime (calcium oxide) is a base anhydride. It reacts with water to become hydrated lime (calcium hydroxide), which is a strong base, chemically akin to lye. This reaction is exothermic: CaO + H2O → Ca(OH)2 (ΔHr = −63.7 kJ/mol of CaO). Sodium oxide reacts readily and irreversibly with water to give sodium hydroxide: Na2O + H2O → 2 NaOH
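As a rough sense of scale for the enthalpy quoted above (the 1.00 kg sample size is an assumption chosen purely for illustration, taking the molar mass of CaO as 56.08 g/mol): slaking 1.00 kg of quicklime corresponds to 1000 g ÷ 56.08 g/mol ≈ 17.8 mol of CaO, which releases roughly 17.8 mol × 63.7 kJ/mol ≈ 1.1 × 10^3 kJ of heat.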
Physical sciences
Concepts
Chemistry
38807433
https://en.wikipedia.org/wiki/Deep-focus%20earthquake
Deep-focus earthquake
A deep-focus earthquake in seismology (also called a plutonic earthquake) is an earthquake with a hypocenter depth exceeding 300 km. They occur almost exclusively at convergent boundaries in association with subducted oceanic lithosphere. They occur along a dipping tabular zone beneath the subduction zone known as the Wadati–Benioff zone. Discovery Preliminary evidence for the existence of deep-focus earthquakes was first brought to the attention of the scientific community in 1922 by Herbert Hall Turner. In 1928, Kiyoo Wadati proved the existence of earthquakes occurring well beneath the lithosphere, dispelling the notion that earthquakes occur only with shallow focal depths. Seismic characteristics Deep-focus earthquakes give rise to minimal surface waves. Their focal depth causes the earthquakes to be less likely to produce seismic wave motion with energy concentrated at the surface. The path of deep-focus earthquake seismic waves from focus to recording station goes through the heterogeneous upper mantle and highly variable crust only once. Therefore, the body waves undergo less attenuation and reverberation than seismic waves from shallow earthquakes, resulting in sharp body wave peaks. Focal mechanisms The pattern of energy radiation of an earthquake is represented by the moment tensor solution, which is graphically represented by beachball diagrams. An explosive or implosive mechanism produces an isotropic seismic source. Slip on a planar fault surface results in a double-couple source. Uniform outward motion in a single plane due to normal shortening is known as a compensated linear vector dipole source. Deep-focus earthquakes have been shown to contain a combination of these sources. The focal mechanisms of deep-focus earthquakes depend on their positions in subducting tectonic plates. At depths greater than 400 km, down-dip compression dominates, while at depths of 250–300 km (also corresponding to a minimum in earthquake numbers vs. depth), the stress regime is more ambiguous but closer to down-dip tension. Physical process Shallow-focus earthquakes are the result of the sudden release of strain energy built up over time in rock by brittle fracture and frictional slip over planar surfaces. However, the physical mechanism of deep focus earthquakes is poorly understood. Subducted lithosphere subject to the pressure and temperature regime at depths greater than 300 km should not exhibit brittle behavior, but should rather respond to stress by plastic deformation. Several physical mechanisms have been proposed for the nucleation and propagation of deep-focus earthquakes; however, the exact process remains an outstanding problem in the field of deep-earth seismology. The following four subsections outline proposals which could explain the physical mechanism allowing deep focus earthquakes to occur. With the exception of solid-solid phase transitions, the proposed theories for the focal mechanism of deep earthquakes hold equal footing in current scientific literature. Solid-solid phase transitions The earliest proposed mechanism for the generation of deep-focus earthquakes is an implosion due to a phase transition of material to a higher-density, lower-volume phase. The olivine-spinel phase transition is thought to occur at a depth of 410 km in the interior of the earth. This hypothesis proposes that metastable olivine in oceanic lithosphere subducted to depths greater than 410 km undergoes a sudden phase transition to spinel structure. 
The increase in density due to the reaction would cause an implosion giving rise to the earthquake. This mechanism has been largely discredited due to the lack of a significant isotropic signature in the moment tensor solution of deep-focus earthquakes. Dehydration embrittlement Dehydration reactions of mineral phases with high water content would increase the pore pressure in a subducted oceanic lithosphere. The increase in pore pressure is attributed to the release of fluids in-situ within the rock, thus raising its overall pressure. This effect reduces the effective normal stress in the slab and allows slip to occur on pre-existing fault planes at significantly greater depths than would normally be possible. Several workers suggest that this mechanism does not play a significant role in seismic activity beyond 350 km depth due to the fact that most dehydration reactions will have reached completion by a pressure corresponding to depths of 150–300 km (5-10 GPa). Transformational faulting or anticrack faulting Transformational faulting, also known as anticrack faulting, is the result of the phase transition of a mineral to a higher-density phase occurring in response to shear stress in a fine-grained shear zone. The transformation occurs along the plane of maximal shear stress. Rapid shearing can then occur along these planes of weakness, giving rise to an earthquake in a mechanism similar to a shallow-focus earthquake. Metastable olivine subducted past the olivine-wadsleyite transition at 320–410 km depth (depending on temperature) is a potential candidate for such instabilities. Arguments against this hypothesis include the requirements that the faulting region should be very cold, and contain very little mineral-bound hydroxyl. Higher temperatures or higher hydroxyl contents preclude the metastable preservation of olivine to the depths of the deepest earthquakes. Shear instability / thermal runaway A shear instability arises when heat is produced by plastic deformation faster than it can be conducted away. The result is thermal runaway, a positive feedback loop of heating, material weakening, and strain localisation within the shear zone. Continued weakening may result in partial melting along zones of maximal shear stress. Plastic shear instabilities leading to earthquakes have not been documented in nature, nor have they been observed in natural materials in the laboratory. Their relevance to deep earthquakes therefore lies in mathematical models which use simplified material properties and rheologies to simulate natural conditions. Deep-focus earthquake zones Major zones Eastern Asia / Western Pacific On the border of the Pacific plate and the Okhotsk and Philippine Sea plates is one of the most active deep-focus earthquake regions in the world, creating many large earthquakes including the 8.3 2013 Okhotsk Sea earthquake. As with many places, earthquakes in this region are caused by internal stresses on the subducted Pacific plate as it is pushed deeper into the mantle. Philippines A subduction zone makes up most of the border of Philippine Sea plate and Sunda plate, the fault being partially responsible for the uplift of the Philippines. The deepest sections of the Philippine Sea plate cause earthquakes as deep as below the surface. Notable deep-focus earthquakes in this region include a 7.7 earthquake in 1972 and the 7.6, 7.5, and 7.3 2010 Mindanao earthquakes. 
Indonesia The Australian plate subducts under the Sunda plate, creating uplift over much of southern Indonesia, as well as earthquakes at depths of up to . Notable deep-focus earthquakes in this region include a 7.9 earthquake in 1996 and a 7.5 earthquake in 2007. Papua New Guinea / Fiji / New Zealand By far the most active deep focus faulting zone in the world is that caused by the Pacific plate subducting under the Australian plate, Tonga plate, and Kermadec plate. Earthquakes have been recorded at depths of over , the deepest in the planet. The large area of subduction results in a broad swath of deep-focus earthquakes centered from Papua New Guinea to Fiji to New Zealand, although the angle of the plates' collision causes the area between Fiji and New Zealand to be the most active, with earthquakes of 4.0 or above occurring on an almost daily basis. Notable deep-focus earthquakes in this region include a 8.2 and 7.9 earthquake in 2018, and a 7.8 earthquake in 1919. Andes The subduction of the Nazca plate under the South American plate, in addition to creating the Andes mountain range, has also created a number of deep faults under the surfaces of Colombia, Peru, Brazil, Bolivia, Argentina, and even as far east as Paraguay. Earthquakes frequently occur in the region at depths of up to beneath the surface. Several large earthquakes have taken place here, including the 8.2 1994 Bolivia earthquake (631 km deep), the 8.0 1970 Colombia earthquake (645 km deep), and 7.9 1922 Peru earthquake (475 km deep). Minor zones Granada, Spain Roughly under the city Granada in southern Spain, several large earthquakes have been recorded in modern history, notably including a 7.8 earthquake in 1954, and a 6.3 earthquake in 2010. The exact cause for the earthquakes remains unknown. Tyrrhenian Sea The Tyrrhenian Sea west of Italy is host to a large number of deep-focus earthquakes as deep as below the surface. However, very few earthquakes occur in the region less than deep, the majority originating from a depth of around . Due to the lack of shallow earthquakes, the faulting is believed to originate from an ancient subduction zone that began subducting less than 15 million years ago, and largely finished around 10 million years ago, no longer visible on the surface. Due to the calculated subduction rate, the cause for subduction was likely to be internal stressing on the Eurasian plate, rather than due to the collision of the African and Eurasian plates, the cause of modern-day subduction for the nearby Aegean Sea and Anatolian microplates. Afghanistan In northeastern Afghanistan, a number of medium-intensity deep focus earthquakes of depths of up to occasionally occur. They are caused by the collision and subduction of the Indian plate under the Eurasian plate, the deepest earthquakes centered on the furthest-subducted sections of the plate. South Sandwich Islands The South Sandwich Islands between South America and Antarctica are host to a number of earthquakes up to in depth. They are caused by the subduction of the South American plate under the South Sandwich plate. Notable deep-focus earthquakes The strongest deep-focus earthquake in seismic record was the magnitude 8.3 Okhotsk Sea earthquake that occurred at a depth of in 2013. The deepest earthquake ever recorded was a small 4.2 earthquake in Vanuatu at a depth of in 2004. However, although unconfirmed, an aftershock of the 2015 Ogasawara earthquake was found to have occurred at a depth of .
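As a minimal illustration of the depth criterion stated at the start of this article, the sketch below classifies an event by its hypocentral depth; the function name and the 35 km comparison value are assumptions introduced for the example, while the 631 km figure is the 1994 Bolivia event mentioned above.

```python
def is_deep_focus(hypocenter_depth_km: float) -> bool:
    """Deep-focus (plutonic) earthquakes are defined by a hypocenter
    depth exceeding 300 km."""
    return hypocenter_depth_km > 300.0

print(is_deep_focus(631.0))  # 1994 Bolivia earthquake -> True
print(is_deep_focus(35.0))   # a typical shallow crustal event -> False
```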
Physical sciences
Seismology
Earth science
29287934
https://en.wikipedia.org/wiki/Hydrophile
Hydrophile
A hydrophile is a molecule or other molecular entity that is attracted to water molecules and tends to be dissolved by water. In contrast, hydrophobes are not attracted to water and may seem to be repelled by it. Hygroscopics are attracted to water, but are not dissolved by water. Molecules A hydrophilic molecule or portion of a molecule is one whose interactions with water and other polar substances are more thermodynamically favorable than their interactions with oil or other hydrophobic solvents. They are typically charge-polarized and capable of hydrogen bonding. This makes these molecules soluble not only in water but also in other polar solvents. Hydrophilic molecules (and portions of molecules) can be contrasted with hydrophobic molecules (and portions of molecules). In some cases, both hydrophilic and hydrophobic properties occur in a single molecule. Examples of these amphiphilic molecules are the lipids that make up the cell membrane. Another example is soap, which has a hydrophilic head and a hydrophobic tail, allowing it to dissolve in both water and oil. Hydrophilic and hydrophobic molecules are also known as polar molecules and nonpolar molecules, respectively. Some hydrophilic substances do not dissolve. This type of mixture is called a colloid. An approximate rule of thumb for the hydrophilicity of organic compounds is that the solubility of a molecule in water is more than 1 mass % if there is at least one neutral hydrophile group per 5 carbons, or at least one electrically charged hydrophile group per 7 carbons. Hydrophilic substances (e.g. salts) can seem to attract water out of the air. Sugar is also hydrophilic, and like salt is sometimes used to draw water out of foods. Sugar sprinkled on cut fruit will "draw out the water" through hydrophilia, making the fruit mushy and wet, as in a common strawberry compote recipe. Chemicals Liquid hydrophilic chemicals complexed with solid chemicals can be used to optimize the solubility of hydrophobic chemicals. Liquid chemicals Examples of hydrophilic liquids include ammonia, alcohols, some amides such as urea and some carboxylic acids such as acetic acid. Alcohols Hydroxyl groups (-OH), found in alcohols, are polar and therefore hydrophilic (water-loving), but their carbon chain portion is non-polar and therefore hydrophobic. As the carbon chain becomes longer, the molecule becomes increasingly nonpolar overall and therefore less soluble in polar water. Methanol has the shortest carbon chain of all alcohols (one carbon atom), followed by ethanol (two carbon atoms) and 1-propanol along with its isomer 2-propanol; all of these are miscible with water. Tert-butyl alcohol, with four carbon atoms, is the only one among its isomers to be miscible with water. Solid chemicals Cyclodextrins Cyclodextrins are used to make pharmaceutical solutions by capturing hydrophobic molecules as guests in host–guest inclusion complexes. Because inclusion compounds of cyclodextrins with hydrophobic molecules are able to penetrate body tissues, they can be used to release biologically active compounds under specific conditions. For example, when testosterone was complexed with hydroxypropyl-beta-cyclodextrin (HPBCD), 95% absorption of testosterone was achieved in 20 minutes via the sublingual route while HPBCD itself was not absorbed, whereas hydrophobic testosterone alone is usually absorbed at less than 40% via the sublingual route. Membrane filtration Hydrophilic membrane filtration is used in several industries to filter various liquids.
These hydrophilic filters are used in the medical, industrial, and biochemical fields to filter elements such as bacteria, viruses, proteins, particulates, drugs, and other contaminants. Common hydrophilic molecules include colloids, cotton, and cellulose (which cotton consists of). Unlike other membranes, hydrophilic membranes do not require pre-wetting: they can filter liquids in their dry state. Although most are used in low-heat filtration processes, many new hydrophilic membrane fabrics are used to filter hot liquids and fluids.
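Returning to the approximate solubility rule of thumb quoted earlier in this article, the sketch below shows one way it could be expressed in code; the function name and the example inputs are illustrative assumptions, not part of the original text.

```python
def likely_water_soluble(n_carbons, n_neutral_hydrophile_groups=0,
                         n_charged_hydrophile_groups=0):
    """Heuristic: water solubility above 1 mass % is expected if there is at
    least one neutral hydrophilic group per 5 carbons, or at least one
    electrically charged hydrophilic group per 7 carbons."""
    if n_carbons == 0:
        return True
    return (n_neutral_hydrophile_groups >= n_carbons / 5
            or n_charged_hydrophile_groups >= n_carbons / 7)

# Ethanol: 2 carbons, one -OH group -> the heuristic predicts good solubility.
print(likely_water_soluble(2, n_neutral_hydrophile_groups=1))   # True
# 1-Octanol: 8 carbons, one -OH group -> the heuristic predicts poor solubility.
print(likely_water_soluble(8, n_neutral_hydrophile_groups=1))   # False
```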
Physical sciences
Supramolecular chemistry
Chemistry
29289333
https://en.wikipedia.org/wiki/Foreskin
Foreskin
In male human anatomy, the foreskin, also known as the prepuce, is the double-layered fold of skin, mucosal and muscular tissue at the distal end of the human penis that covers the glans and the urinary meatus. The foreskin is attached to the glans by an elastic band of tissue, known as the frenulum. The outer skin of the foreskin meets with the inner preputial mucosa at the area of the mucocutaneous junction. The foreskin is mobile, fairly stretchable and sustains the glans in a moist environment. Except for humans, a similar structure known as a penile sheath appears in the male sexual organs of all primates and the vast majority of mammals. In humans, foreskin length varies widely and coverage of the glans in a flaccid and erect state can also vary. The foreskin is fused to the glans at birth and is generally not retractable in infancy and early childhood. Inability to retract the foreskin in childhood should not be considered a problem unless there are other symptoms. Retraction of the foreskin is not recommended until it loosens from the glans before or during puberty. In adults, it is typically retractable over the glans, given normal development. The male prepuce is anatomically homologous to the clitoral hood in females. In some cases, the foreskin may become subject to a pathological condition. Structure External The outside of the foreskin is a continuation of the shaft skin of the penis and is covered by a keratinized stratified squamous epithelium. The inner foreskin is a continuation of the epithelium that covers the glans and is made up of glabrous squamous mucous membrane, like the inside of the eyelid or the mouth. The mucosal aspect of the prepuce has a great capacity for self-repair. The area of the outer foreskin measures 7–100 cm2, and the inner foreskin measures 18–68 cm2. The mucocutaneous zone occurs where the outer and inner foreskin meet. The foreskin is free to move after it separates from the glans, which usually occurs before or during puberty. The inner foreskin is attached to the glans by the frenulum, a highly vascularized tissue of the penis. The World Health Organization states that "the frenulum forms the interface between the outer and inner foreskin layers, and when the penis is not erect, it tightens to narrow the foreskin opening." Subcutaneous The human foreskin is a laminar structure made up of outer skin, mucosal epithelium, lamina propria, dartos fascia and dermis. The superficial dartos fascia, formerly called the peripenic muscle, is one of the two sheaths of smooth muscle tissue found below the penile skin, along with the underlying Buck's fascia or deep fascia of the penis. The dartos fascia extends within the skin of the prepuce and contains an abundance of elastic fibers. These fibers form a whorl at the tip of the foreskin, known as the preputial orifice, which is narrow during infancy and childhood. The dartos fascia is sensitive to temperature and reacts to temperature changes by expanding and contracting. The fascia is only loosely connected with the underlying tissue, so that it provides the mobility and elasticity of the penile skin. Langerhans cells are immature dendritic cells that are found in all areas of the penile epithelium, but are most superficial in the inner surface of the foreskin. As a continuation of the human shaft skin, the prepuce receives somatosensory innervation from the bilateral dorsal nerve of the penis and branches of the perineal nerve, and autonomic innervation from the pelvic plexus.
The somatosensory receptors that are found in the prepuce are both nociceptors and mechanoreceptors, with a predominance of Meissner's corpuscles. Blood supply to the prepuce is provided by the preputial artery, a division of the axial and dorsal artery of the penis. The axial and dorsal arteries that run within the penile skin unite through perforating branches and give off the preputial arteries before they reach the corona of the glans. The preputial vein, an extension of the superficial dorsal vein, receives blood from the prepuce and connects to the larger dorsal veins of the penis that drain the rest of the penile shaft. Development Gestation The penis develops from a primordial phallic structure that forms in the embryo during the early weeks of pregnancy, known as the genital tubercle. Initially undifferentiated, the tubercle develops into a penis depending on the exposure to male hormones secreted by the testicles. The differentiation of the external sexual organs will be evident between twelve and sixteen weeks of gestation. Preputial development is initiated at around eleven weeks or earlier and continues up to eighteen weeks. Historically, the theories regarding the stages of preputial development during gestation fall into two main ideas. The earliest report by Schweigger-Seidel (1866) and later Hunter (1935) suggested the formation of the prepuce out of dorsal skin and its progressive distal extension to completely cover and eventually fuse with the epithelium of the glans. Glenister (1956) expanded the theory, suggesting that the preputial fold results as an ingrowth of the cellular lamina, which rolls outwards over the glans, but with the resultant preputial lamina also expanding backwards to form an ingrowing fold at the coronal sulcus. By eleven and twelve weeks of gestation, the process of preputial formation is evident as a thickening of the epidermis that separates from the penis, creating a raised fold, known as the preputial fold. On the underside of this structure forms the preputial lamina, which expands dorsolaterally over the base of the developing glans. At thirteen weeks, the prepuce has not yet extended to the distal tip of the glans, covering only a part of its surface. By sixteen weeks, the bilateral preputial folds cover most of the glans and the ventral sides of the prepuce fuse in the midline. The penile raphe, the continuation of the perineal raphe in human males, occurs on the ventral side of the penis as a manifestation of the fusion of the urethral and preputial folds. The dorsal nerve of the penis, which is present as early as nine weeks of gestation, completely expands through branches to the distal end of the glans and prepuce by sixteen weeks. At nineteen weeks, foreskin development is complete. Towards the end of the second trimester, the glans and the prepuce have completely fused together by the preputial lamina, sometimes referred to as the balanopreputial lamina. At birth, this shared membrane is physiologically adherent to the glans, preventing retraction in infancy and early childhood. The phenomenon of non-retractile foreskin in children naturally starts to resolve at varying ages: in childhood, preadolescence, or puberty. Retraction During the first years of life, the inner foreskin is fused to the glans, making them hard to separate manually. At that time, forced retraction can cause pain or microtearing and is thus not recommended.
The two surfaces may begin to separate from early childhood, but complete separation and retraction is a process that normally occurs over time. The phenomenon of non-retractile or tight foreskin in childhood, sometimes referred to as physiologic phimosis, may completely resolve before, during or even after puberty. When the foreskin starts to become retractile, a pediatrician can recommend careful retraction at home and rinsing with water during a bath. Mild soap may be used, but can be avoided if it causes irritation. If full retraction is hard to achieve, the child may only wash the exposed area of the glans. Since there is no specific age when non-retractile foreskin begins to resolve, the time of foreskin retraction can vary considerably among children. During puberty, as the male begins to sexually mature, foreskin retractability gradually increases, allowing more comfortable exposure of the glans when needed. Gentle washing under the foreskin when showering and maintaining good genital hygiene is sufficient to prevent smegma buildup. Smegma is an oily secretion in the genitals of both sexes that maintains the moist texture of the mucosal surfaces and prevents friction. In boys, it helps resolve the natural adhesion of the glans and inner prepuce. By the end of puberty, most boys have a fully retractable foreskin. Variability In children, the foreskin usually covers the glans completely, but in adults it may not. During erection, the degree of automatic foreskin retraction varies considerably; in some adults, when the foreskin is longer than the erect penis, it will not spontaneously retract upon erection. In this case, the foreskin remains covering all or some of the glans until retracted manually or by sexual activity. The foreskin can be classified as long, when the preputial orifice extends beyond the glans, medium, when the preputial orifice is located around the meatus, and short, when most of the glans is exposed. The variation of long foreskin was regarded by Chengzu (2011) as 'prepuce redundant'. Frequent retraction and washing under the foreskin are suggested for all adults, particularly for those with a long or 'redundant' foreskin. Some males, according to Xianze (2012), may be reluctant for their glans to be exposed because of discomfort when it chafes against clothing, although the discomfort on the glans was reported to diminish within one week of continuous exposure. Guochang (2010) states that for those whose foreskins are too tight to retract or have some adhesions, forcible retraction should be avoided since it may cause injury. Evolution and function The foreskin is part of the human phylogenetic heritage and is present in the vast majority of mammals. Non-human primates, such as chimpanzees, have prepuces that partially or completely cover the glans penis. In primates, the foreskin is present in the genitalia of both sexes and likely has been present for millions of years of evolution. The World Health Organization (WHO) stated in 2007 that there was "debate about the role of the foreskin, with possible functions including keeping the glans moist, protecting the developing penis in utero, or enhancing sexual pleasure due to the presence of nerve receptors". In 2009, the World Health Organization called it a "myth" that circumcision has an effect on sexual pleasure. This view is echoed by other major medical organizations. The foreskin contains Meissner's corpuscles, which are one of a group of nerve endings involved in fine-touch sensitivity.
Compared to other hairless skin areas on the body, the Meissner's index was highest in the fingertip (0.96) and lowest in the foreskin (0.28), which suggested that the foreskin is the least sensitive hairless tissue of the body. The foreskin helps to provide sufficient skin during an erection. In infants, it protects the glans from ammonia and feces in diapers, which reduces the incidence of meatal stenosis. The foreskin also helps protect the glans from abrasions and trauma throughout life. In modern times, there is controversy regarding whether the foreskin is a vital or vestigial structure. In 1949, British physician Douglas Gairdner noted that the foreskin plays an important protective role in newborns. He wrote, "It is often stated that the prepuce is a vestigial structure devoid of function... However, it seems to be no accident that during the years when the child is incontinent the glans is completely clothed by the prepuce, for, deprived of this protection, the glans becomes susceptible to injury from contact with sodden clothes or napkin". During the physical act of sex, the foreskin reduces friction, which can reduce the need for additional sources of lubrication. The College of Physicians and Surgeons of British Columbia has written that the foreskin is "composed of an outer skin and an inner mucosa that is rich in specialized sensory nerve endings and erogenous tissue". In a March 2017 article in Global Health: Science and Practice, Morris and Krieger wrote, "The variability in foreskin size is consistent with the foreskin being a vestigial structure". Clinical significance The foreskin can be involved in balanitis, phimosis, sexually transmitted infection and penile cancer. The American Academy of Pediatrics' now expired 2012 technical report on circumcision found that the foreskin can harbor micro-organisms that may increase the risk of urinary tract infections in some infants and contribute to the transmission of some sexually transmitted infections in adults. In some cases of recurrent pathologies, excessive soap washing may irritate the mucosa; therefore, washing of the area should be done gently. Frenulum breve is a frenulum that is insufficiently long to allow the foreskin to fully retract, which may lead to discomfort during intercourse. Phimosis is a condition where the foreskin of an adult cannot be retracted properly. Phimosis can be treated by using topical steroid ointments and using lubricants during sex; for severe cases, circumcision may be necessary. Posthitis is an inflammation of the foreskin. A condition called paraphimosis may occur if a tight foreskin becomes trapped behind the glans and swells as a restrictive ring. This can cut off the blood supply, resulting in ischemia of the glans penis. Lichen sclerosus is a chronic, inflammatory skin condition that most commonly occurs in adult women, although it may also be seen in men and children. Topical clobetasol propionate and mometasone furoate were proven effective in treating genital lichen sclerosus. Some birth defects of the foreskin can occur; all of them are rare. In aposthia there is no foreskin at birth, in microposthia the foreskin does not cover the glans, and in macroposthia, also called congenital megaprepuce, the foreskin extends well past the end of the glans.
It has been found that larger foreskins place men who are not circumcised at an increased risk of HIV infection, most likely due to the larger surface area of inner foreskin and the high concentration of Langerhans cells. Society and culture Modifications Circumcision is the removal of the foreskin, either partially or completely. It is most commonly performed as an elective procedure for prophylactic, cultural, or religious reasons. Circumcision may also be performed on children or adults to treat phimosis, balanitis, and other pathologies. The ethics of circumcision in children is a source of controversy. Some men use weights or other devices to stretch the skin of the penis to regrow a foreskin; the resulting tissue does cover the glans but does not fully replicate the features of a foreskin. Other cultural or aesthetic practices include genital piercings involving the foreskin and slitting the foreskin. Preputioplasty is the most common foreskin reconstruction technique, most often done when a boy is born with a foreskin that is too small; a similar procedure is performed to relieve a tight foreskin without resorting to circumcision. Foreskin-based products Foreskins obtained from circumcision procedures are frequently used by biochemical and micro-anatomical researchers to study the structure and proteins of human skin. In particular, foreskins obtained from newborns have been found to be useful in the manufacturing of more human skin. Foreskins of babies are also used for skin graft tissue, and for β-interferon-based drugs. Foreskin-derived fibroblasts have been used in biomedical research and cosmetic applications. History The foreskin was considered a sign of beauty, civility, and masculinity throughout the Greco-Roman world. In ancient Greece, foreskins were valued, especially those that were longer. The earliest known illustrative depiction of the foreskin dates back to Egyptian kingdoms. The foreskin has also been depicted in art from different historical ages.
Biology and health sciences
Reproductive system
Biology
24670148
https://en.wikipedia.org/wiki/Herschel%20graph
Herschel graph
In graph theory, a branch of mathematics, the Herschel graph is a bipartite undirected graph with 11 vertices and 18 edges. It is a polyhedral graph (the graph of a convex polyhedron), and is the smallest polyhedral graph that does not have a Hamiltonian cycle, a cycle passing through all its vertices. It is named after British astronomer Alexander Stewart Herschel, because of Herschel's studies of Hamiltonian cycles in polyhedral graphs (but not of this graph). Definition and properties The Herschel graph has three vertices of degree four (the three blue vertices aligned vertically in the center of the illustration) and eight vertices of degree three. Each two distinct degree-four vertices share two degree-three neighbors, forming a four-vertex cycle with these shared neighbors. There are three of these cycles, passing through six of the eight degree-three vertices (red in the illustration). Two more degree-three vertices (blue) do not participate in these four-vertex cycles; instead, each is adjacent to three of the six red vertices. The Herschel graph is a polyhedral graph; this means that it is a planar graph, one that can be drawn in the plane with none of its edges crossing, and that it is 3-vertex-connected: the removal of any two of its vertices leaves a connected subgraph. It is a bipartite graph: when it is colored with five blue and six red vertices, as illustrated, each edge has one red endpoint and one blue endpoint. It has order-6 dihedral symmetry, for a total of 12 members of its automorphism group. The degree-four vertices can be permuted arbitrarily, giving six permutations, and in addition, for each permutation of the degree-four vertices, there is a symmetry that keeps these vertices fixed and exchanges pairs of degree-three vertices. Polyhedron By Steinitz's theorem, every graph that is planar and 3-vertex-connected has a convex polyhedron with the graph as its skeleton. Because the Herschel graph has these properties, it can be represented in this way by a convex polyhedron, an enneahedron having nine quadrilaterals as its faces. This polyhedron can be chosen so that each graph automorphism corresponds to a symmetry of the polyhedron, in which case three of the faces will be rhombi or squares, and the other six will be kites. The dual polyhedron is a rectified triangular prism, which can be formed as the convex hull of the midpoints of the edges of a triangular prism. When constructed in this way, it has three square faces on the same planes as the square faces of the prism, two equilateral triangle faces on the planes of the triangular ends of the prism, and six more isosceles triangle faces. This polyhedron has the property that its faces cannot be numbered in such a way that consecutive numbers appear on adjacent faces, and such that the first and last numbers are also on adjacent faces, because such a numbering would necessarily correspond to a Hamiltonian cycle in the Herschel graph. Polyhedral face numberings of this type are used as "spindown life counters" in the game Magic: The Gathering, to track player lives, by turning the polyhedron to an adjacent face whenever a life is lost. A card in the game, the Lich, allows players to return from a nearly-lost state with a single life to their initial number of lives. Because the dual polyhedron for the Herschel graph cannot be numbered in such a way that this step connects adjacent faces, the canonical polyhedron realization of this dual polyhedron has been named "the Lich's nemesis".
Hamiltonicity As a bipartite graph that has an odd number of vertices, the Herschel graph does not contain a Hamiltonian cycle (a cycle of edges that passes through each vertex exactly once). For, in any bipartite graph, any cycle must alternate between the vertices on either side of the bipartition, and therefore must contain equal numbers of both types of vertex and must have an even length. Thus, a cycle passing once through each of the eleven vertices cannot exist in the Herschel graph. A graph is called Hamiltonian whenever it contains a Hamiltonian cycle, so the Herschel graph is not Hamiltonian. It has the smallest number of vertices, the smallest number of edges, and the smallest number of faces of any non-Hamiltonian polyhedral graph. There exist other polyhedral graphs with 11 vertices and no Hamiltonian cycles (notably the Goldner–Harary graph) but none with fewer edges. All but three of the vertices of the Herschel graph have degree three. A graph is called cubic or 3-regular when all of its vertices have degree three. P. G. Tait conjectured that a polyhedral 3-regular graph must be Hamiltonian; this was disproved when W. T. Tutte provided a counterexample, the Tutte graph, which is much larger than the Herschel graph. A refinement of Tait's conjecture, Barnette's conjecture that every bipartite 3-regular polyhedral graph is Hamiltonian, remains open. Every maximal planar graph that does not have a Hamiltonian cycle has a Herschel graph as a minor. The Herschel graph is conjectured to be one of three minor-minimal non-Hamiltonian 3-vertex-connected graphs. The other two are the complete bipartite graph and a graph formed by splitting both the Herschel graph and into two symmetric halves by three-vertex separators and then combining one half from each graph. The Herschel graph also provides an example of a polyhedral graph for which the medial graph has no Hamiltonian decomposition into two edge-disjoint Hamiltonian cycles. The medial graph of the Herschel graph is a 4-regular graph with 18 vertices, one for each edge of the Herschel graph; two vertices are adjacent in the medial graph whenever the corresponding edges of the Herschel graph are consecutive on one of its faces. It is 4-vertex-connected and essentially 6-edge-connected. Here, a graph is -vertex-connected or -edge-connected if the removal of fewer than vertices or edges (respectively) cannot disconnected it. Planar graphs cannot be 6-edge-connected, because they always have a vertex of degree at most five, and removing the neighboring edges disconnects the graph. The "essentially 6-edge-connected" terminology means that this trivial way of disconnecting the graph is ignored, and it is impossible to disconnect the graph into two subgraphs that each have at least two vertices by removing five or fewer edges. History The Herschel graph is named after Alexander Stewart Herschel, a British astronomer, who wrote an early paper concerning William Rowan Hamilton's icosian game. This is a puzzle involving finding Hamiltonian cycles on a polyhedron, usually the regular dodecahedron. The Herschel graph describes the smallest convex polyhedron that can be used in place of the dodecahedron to give a game that has no solution. Herschel's paper described solutions for the Icosian game only on the graphs of the regular tetrahedron and regular icosahedron; it did not describe the Herschel graph. The name "the Herschel graph" makes an early appearance in a graph theory textbook by John Adrian Bondy and U. S. R. 
Murty, published in 1976. The graph itself was described earlier, for instance by H. S. M. Coxeter.
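The structural description given earlier in this article is enough to rebuild the graph and check several of its stated properties programmatically. The following is a minimal sketch using the networkx library; the vertex labels (a, b, c for the degree-four vertices, r1–r6 for their shared degree-three neighbours, u and v for the remaining two) and the construction itself are assumptions derived from that description rather than a published edge list.

```python
import networkx as nx

# Each pair of the degree-four vertices a, b, c shares two degree-three
# neighbours (r1..r6); the remaining degree-three vertices u and v are each
# adjacent to one shared neighbour from every pair.
edges = [
    ("a", "r1"), ("a", "r2"), ("a", "r3"), ("a", "r4"),
    ("b", "r1"), ("b", "r2"), ("b", "r5"), ("b", "r6"),
    ("c", "r3"), ("c", "r4"), ("c", "r5"), ("c", "r6"),
    ("u", "r1"), ("u", "r3"), ("u", "r5"),
    ("v", "r2"), ("v", "r4"), ("v", "r6"),
]
G = nx.Graph(edges)

print(G.number_of_nodes(), G.number_of_edges())  # expected: 11 18
print(nx.is_bipartite(G))                        # expected: True
print(nx.check_planarity(G)[0])                  # expected: True
print(nx.node_connectivity(G))                   # expected: 3

def has_hamiltonian_cycle(graph):
    """Brute-force backtracking search for a Hamiltonian cycle."""
    nodes = list(graph.nodes)
    start = nodes[0]

    def extend(path, visited):
        if len(path) == len(nodes):
            return start in graph[path[-1]]      # can the cycle be closed?
        return any(
            extend(path + [nbr], visited | {nbr})
            for nbr in graph[path[-1]] if nbr not in visited
        )

    return extend([start], {start})

print(has_hamiltonian_cycle(G))  # expected: False (bipartite with odd order)
```

The final check simply confirms the parity argument above by exhaustive search.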
Mathematics
Graph theory
null
5339220
https://en.wikipedia.org/wiki/Automobile%20repair%20shop
Automobile repair shop
An automobile repair shop (also known regionally as a garage or a workshop) is an establishment where automobiles are repaired by auto mechanics and technicians. The customer interface is typically a service advisor, traditionally called a service writer. Types Automotive garages and repair shops can be divided into the following categories: Service station First appearing in the early 1900s, many filling stations offered vehicle repair services as part of their full service operation. This once popular trend has declined significantly over the years as many locations found it more profitable to exchange vehicle service bays for grocery aisles, which ultimately led to the emergence of the quick oil change industry. Lubrication/safety shop Commonly referred to as a quick lube or express service shop, this type of facility specializes in preventive maintenance and safety inspections rather than repairs. Product sales are typically limited to automotive fluids, belts and hoses. With a focus on basic procedures, labor is often performed by entry-level technicians, which simplifies the business overhead, resulting in a less expensive service as compared to a traditional automotive workshop. New car dealership In the United States, new car dealerships have service departments that are certified by their respective OEM (Original Equipment Manufacturer) to perform warranty and recall repairs. Customer-pay repairs can also be completed; however, most service departments tend to only work on the vehicle brand of which they are a dealer. Dealership technicians must complete additional training provided by the OEM, and in doing so become specialized and certified for that particular vehicle make. Independent auto repair shop Independent auto repair shops are businesses that are independently owned and operated. In states that require a smog or emissions test, independent auto repair shops often offer these tests as well. These may also include regional or national chains and franchises. It is rather common for a dealership technician to start this type of competing business after leaving the employment of a new car dealership. Independent automobile repair shops in the US may also achieve OEM certification through manufacturer sponsored programs. European Union law (The EC Block Exemption Regulation 1400/2002 (October 2003)) permits motorists more flexibility in selecting where their car is serviced. Maintenance and service work does not have to be done by the dealership, provided that the independent garage uses Original Equipment 'Matching Quality' parts and follows the manufacturer's service schedules. The Block Exemption Regulation (BER) covers service and maintenance during the warranty period and prohibits vehicle manufacturers' warranties from including restrictive conditions. Fleet shop A shop that is dedicated to repairing and maintaining a particular group of vehicles is called a fleet shop. Common examples of a fleet include taxi cabs, police cars, mail trucks and rental vehicles. Similar to a lubrication/safety shop, a fleet shop focuses primarily on preventative maintenance and safety inspections, and will often outsource larger or more complex repairs to another repair facility. Engine machine shop Shops that specialize in cylinder head and cylinder block machining are called engine machine shops. These facilities utilize large electromechanical machines that are not found in the average automotive repair shop.
In the US, engine machining is typically performed by an ASE-certified machinist in order to correct worn or damaged engine components as an alternative to component replacement. Performance engine building is another popular service frequently offered by this type of workshop. Tire and wheel shop Some repair shops specialize in tires and wheels. These businesses usually have a large inventory of tires and aftermarket wheels, some of which may be on display while others require special ordering. In addition to parts, common labor services include tire rotation, balancing and repair as well as wheel alignment, which can prevent premature tire wear. In the Philippines, roadside tire repair shops are called vulcanizing shops in Philippine English. They specialize in quickly and cheaply repairing flat tires by patching punctures with a rubber compound patch. Muffler shop A muffler shop, also called an exhaust shop, is a business model that concentrates solely on the engine exhaust system. These facilities utilize large tubing benders, which allow a technician to fabricate a new exhaust system out of otherwise straight lengths of pipe. Welding is often necessary in this line of work. Auto body Automotive repair shops that specialize in bodywork repair are known as body shops. Auto body technicians can perform paintwork repairs to scratches, scuffs and dents, as well as repairs to the bodies of vehicles damaged by collisions. Many body shops now offer paintless dent repair and auto glass replacement. Automotive repair shops that specialize in auto glass repair are known as auto glass repair shops. They offer auto glass repairs to chips, cracks and shattered glass. The types of glass they repair include windshields, car windows, quarter glass and rear windows. This type of damage is often caused by hail, stones, wild animals, fallen trees, automobile theft and vandalism. Mobile mechanics Mobile mechanics provide doorstep repair services and home delivery of new and used auto parts for late model and classic cars whose parts are not widely available in the market. In countries such as the UK, the mobile car body repair sector has experienced high growth by way of mobile SMART Repair companies providing mobile car body repair services, such as Bumper Repairs, auto body repair, paintless dent repair and paintwork defect repairs to private and commercial consumers, typically within the industry framework of refinishing vehicle damage on a localised basis, where the area of damage being repaired is not in excess of an A4 sheet of paper.
Technology
Concepts of ground transport
null
5340299
https://en.wikipedia.org/wiki/Glass%20knifefish
Glass knifefish
Glass knifefishes are fishes in the family Sternopygidae in the order Gymnotiformes. Species are also known as rattail knifefishes. These fishes inhabit freshwater streams and rivers in Panama and South America. Many species are specialized for life in the deep (more than ) swiftly moving waters of large river channels, like that of the Amazon and its major tributaries where they have been observed swimming vertically. Sternopygus species inhabit both streams and rivers. Many species are highly compressed laterally and translucent in life. These fish have villiform (brush-like) teeth on the upper and lower jaws. The snout is relatively short. The eyes are relatively large, with a diameter equal to or greater than the distance between nares. The anal fin originates at the isthmus (the strip of flesh on the ventral surface between the gill covers). The maximum length is in Sternopygus macrurus. Eigenmannia vicentespelaea is the only cave-dwelling gymnotiform. Humboldtichthys kirschbaumi (formerly genus Ellisella) from Upper Miocene of Bolivia is the only fossil gymnotiform. These fish have a tone-like electric organ discharge (EOD) that occurs monophasically. Some of these species are aquarium fishes. Genera There are 30 living species of glass knifefish, grouped into six genera: Archolaemus Distocyclus Eigenmannia †Humboldtichthys (fossil, Upper Miocene) Japigny Rhabdolichops Sternopygus
Biology and health sciences
Gymnotiformes
Animals
5340351
https://en.wikipedia.org/wiki/Crystal%20engineering
Crystal engineering
Crystal engineering studies the design and synthesis of solid-state structures with desired properties through deliberate control of intermolecular interactions. It is an interdisciplinary academic field, bridging solid-state and supramolecular chemistry. The main engineering strategies currently in use are hydrogen bonding, halogen bonding, and coordination bonding. These may be understood with key concepts such as the supramolecular synthon and the secondary building unit. History of term The term 'crystal engineering' was first used in 1955 by R. Pepinsky, but the starting point is often credited to Gerhard Schmidt in connection with photodimerization reactions in crystalline cinnamic acids. Since this initial use, the meaning of the term has broadened considerably to include many aspects of solid state supramolecular chemistry. A useful modern definition is that provided by Gautam Desiraju, who in 1988 defined crystal engineering as "the understanding of intermolecular interactions in the context of crystal packing and the utilization of such understanding in the design of new solids with desired physical and chemical properties." Since many of the bulk properties of molecular materials are dictated by the manner in which the molecules are ordered in the solid state, it is clear that an ability to control this ordering would afford control over these properties. Non-covalent control of structure Crystal engineering relies on noncovalent bonding to achieve the organization of molecules and ions in the solid state. Much of the initial work on purely organic systems focused on the use of hydrogen bonds, although coordination and halogen bonds provide additional control in crystal design. Molecular self-assembly is at the heart of crystal engineering, and it typically involves an interaction between complementary hydrogen bonding faces or a metal and a ligand. "Supramolecular synthons" are building blocks that are common to many structures and hence can be used to order specific groups in the solid state. Design of multi-component crystals The intentional synthesis of cocrystals is most often achieved with strong heteromolecular interactions. The main relevance of multi-component crystals lies in the design of pharmaceutical cocrystals. Pharmaceutical cocrystals are generally composed of one API (Active Pharmaceutical Ingredient) with other molecular substances that are considered safe according to the guidelines provided by WHO (World Health Organization). Various properties (such as solubility, bioavailability, permeability) of an API can be modulated through the formation of pharmaceutical cocrystals. In two dimensions The study of 2D architectures (i.e., molecularly thick architectures) is a branch of crystal engineering. The formation of such architectures (often referred to as molecular self-assembly, depending on the deposition process) relies on the use of solid interfaces to create adsorbed monolayers. Such monolayers may feature spatial crystallinity. However, the dynamic and wide range of monolayer morphologies, ranging from amorphous to network structures, has made '(2D) supramolecular engineering' the more accurate term. Specifically, supramolecular engineering refers to "(The) design (of) molecular units in such way that a predictable structure is obtained" or "the design, synthesis and self-assembly of well defined molecular modules into tailor-made supramolecular architectures". Scanning probe microscopy techniques enable visualization of two-dimensional assemblies.
Polymorphism Polymorphism, the phenomenon wherein the same chemical compound exists in more than one crystal form, is relevant commercially because polymorphic forms of drugs may be entitled to independent patent protection. The importance of crystal engineering to the pharmaceutical industry is expected to grow exponentially. Polymorphism arises due to the competition between kinetic and thermodynamic factors during crystallization. While long-range strong intermolecular interactions dictate the formation of kinetic crystals, the close packing of molecules generally drives the thermodynamic outcome. Understanding this dichotomy between the kinetics and thermodynamics constitutes the focus of research related to polymorphism. In organic molecules, three types of polymorphism are mainly observed. Packing polymorphism arises when molecules pack in different ways to give different structures. Conformational polymorphism, on the other hand, is mostly seen in flexible molecules where molecules have multiple conformational possibilities within a small energy window. As a result, multiple crystal structures can be obtained with the same molecule but in different conformations. The rarest form of polymorphism arises from differences in the primary synthon, and this type of polymorphism is called synthon polymorphism. Crystal structure prediction Crystal structure prediction (CSP) is a computational approach to generate energetically feasible crystal structures (with corresponding space group and positional parameters) from a given molecular structure. The CSP exercise is considered most challenging as "experimental" crystal structures are very often kinetic structures and therefore are very difficult to predict. In this regard, many protocols have been proposed and are tested through several blind tests organized by the CCDC since 2002. A major advance in CSP came in 2007, when a hybrid method based on tailor-made force fields and density functional theory (DFT) was introduced. In the first step, this method employs tailor-made force fields to decide upon the ranking of the structures, followed by a dispersion-corrected DFT method to calculate the lattice energies precisely. Apart from the ability to predict crystal structures, CSP also gives computed energy landscapes of crystal structures in which many structures lie within a narrow energy window. Such computed landscapes lend insight into the study of polymorphism and the design of new structures, and also help to design crystallization experiments. Property design The design of crystal structures with desired properties is the ultimate goal of crystal engineering. Crystal engineering principles have been applied to the design of non-linear optical materials, especially those with second harmonic generation (SHG) properties. Using supramolecular synthons, supramolecular gels have been designed. Mechanical properties of crystalline materials Designing a crystalline material with targeted properties requires an understanding of the material's molecular and crystal features in relation to its mechanical properties. Four mechanical properties are of interest for crystalline materials: plasticity, elasticity, brittleness, and shear strength. Intermolecular interactions Manipulation of the intermolecular interaction network is a means for controlling bulk properties. During crystallization, intermolecular interactions form according to an electrostatic hierarchy. Strong hydrogen bonds are the primary director for crystal organization.
Crystal architecture Typically, the strongest intermolecular interactions form the molecular layers or columns and the weakest intermolecular interactions form the slip plane. For example, long chains or layers of acetaminophen molecules form due to the hydrogen bond donors and acceptors that flank the benzene ring. The weaker interactions between the chains or layers of acetaminophen require less energy to break than the hydrogen bonds. As a result, a slip plane is formed. A supramolecular synthon is a pair of molecules that form relatively strong intermolecular interactions in the early phases of crystallization; these molecule pairs are the basic structural motif found in a crystal lattice. Defects or imperfections Lattice defects, such as point defects, tilt boundaries, or dislocations, create imperfections in crystal architecture and topology. Any disruption to the crystal structure alters the mechanism or degree of molecular movement, thereby changing the mechanical properties of the material. Examples of point imperfections include vacancies, substitutional impurities, interstitial impurities, Frenkel defects, and Schottky defects. Examples of line imperfections include edge and screw dislocations. Assessing crystal structure Crystallographic methods, such as X-ray diffraction, are used to elucidate the crystal structure of a material by quantifying distances between atoms. The X-ray diffraction technique relies on a particular crystal structure creating a unique pattern after X-rays are diffracted through the crystal lattice. Microscopic methods, such as optical, electron, field ion, and scanning tunneling microscopy, can be used to visualize the microstructure, imperfections, or dislocations of a material. Ultimately, these methods elaborate on the growth and assembly of crystallites during crystallization, which can be used to rationalize the movement of crystallites in response to an applied load. Calorimetric methods, such as differential scanning calorimetry, are used to induce phase transitions in order to quantify the associated changes in enthalpy, entropy, and Gibbs free energy. The melting and fusion phase transitions are dependent on the lattice energy of the crystalline material, which can be used to determine the percent crystallinity of the sample. Raman spectroscopy is a method that uses light scattering to interact with bonds in a sample. This technique provides information about chemical bonds, intermolecular interactions, and crystallinity. Assessing mechanical properties Nanoindentation is a standard and widely accepted method for measuring mechanical properties within the crystal engineering field. The method quantifies hardness, elasticity, packing anisotropy, and polymorphism of a crystalline material. Hirshfeld surfaces are visual models of electron density at a specific isosurface that aid in visualizing and quantifying intermolecular interactions. An advantage to using Hirshfeld surfaces in crystal engineering is that these surface maps are embedded with information about a molecule and its neighbors. The insight into molecular neighbors can be applied to the assessment or prediction of molecular properties. An emerging method for topography and slip plane analysis uses energy frameworks, which are models of crystal packing that depict interaction energies as pillars or beams.
Physical sciences
Crystallography
Physics
43062570
https://en.wikipedia.org/wiki/Mesotherm
Mesotherm
A mesotherm (from Greek μέσος mesos "intermediate" and thermē "heat") is a type of animal with a thermoregulatory strategy intermediate to cold-blooded ectotherms and warm-blooded endotherms. Definition Mesotherms have two basic characteristics: Elevation of body temperature via metabolic production of heat. Weak or absent metabolic control of a particular body temperature. The first trait distinguishes mesotherms from ectotherms, the second from endotherms. For instance, endotherms, when cold, will generally resort to shivering or metabolizing brown fat to maintain a constant body temperature, leading to higher metabolic rates. A mesotherm, however, will experience lower body temperatures and lower metabolic rates as ambient temperature drops. In addition, mesotherm body temperatures tend to rise as body size increases (a phenomenon known as gigantothermy), unlike endotherms. This reflects the lower surface area to volume ratio in large animals, which reduces rates of heat loss. While extant mesotherms are relatively rare, good examples include tuna, lamnid sharks (e.g., the great white shark), the leatherback sea turtle, some species of bee, naked mole rats, hyraxes, and echidnas. Historically, the same word was used by de Candolle to describe plants that require a moderate degree of heat for successful growth. In his scheme, a mesotherm plant grew in regions where the warmest month had a mean temperature greater than and the coldest month had a mean temperature of at least . Dinosaur thermoregulation The thermoregulatory status of dinosaurs has long been debated, and is still an active area of research. The term 'mesothermy' was originally coined to advocate for an intermediate status of non-avian dinosaur thermoregulation, between endotherms and ectotherms. A more technical definition was provided by Grady et al., who argued for dinosaur mesothermy on the basis of their intermediate growth rates and the empirical relationship between growth, metabolism and thermoregulation in extant vertebrates. This viewpoint was challenged by D'Emic, who argued that because growth rates are sensitive to seasonal variation in resources, dinosaur maximum growth rates were underestimated by Grady et al. Adjusting dinosaur rates upwards by a factor of two, D'Emic found dinosaurs to grow similarly to mammals, and concluded that they were likely endothermic. However, sensitivity to seasonal variation in resources should be true for all vertebrates: if all vertebrate taxa were similarly adjusted, the relative differences in rates do not change. Dinosaurs remain intermediate growers and good candidates for mesothermy. Nonetheless, the dinosaur mesothermy hypothesis requires further support to be confirmed. Fossil oxygen isotopes, which can reveal an organism's body temperature, should be particularly informative. Recently, a study of theropod and sauropod isotopes offered some support for dinosaur mesothermy. Feathered theropods are probably the best candidates for dinosaur endothermy, yet the examined theropods had relatively low body temperatures. Large sauropods had higher body temperatures, which may be reflective of mesothermic gigantothermy. Future isotopic analysis of small, juvenile dinosaurs will better resolve this question.
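The gigantothermy argument above rests on simple geometric scaling: surface area grows with the square of linear size while volume grows with the cube, so larger bodies lose relatively less heat. A minimal illustration for idealized spherical bodies (the radii are arbitrary example values):

```python
import math

def surface_to_volume_ratio(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere of the given radius (equals 3/r)."""
    area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return area / volume

# Arbitrary example radii: a small reptile-sized body vs. a sauropod-sized body.
for r in (0.05, 0.5, 2.0):
    print(f"radius {r:>4} m  ->  SA:V = {surface_to_volume_ratio(r):6.1f} per metre")
```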
Biology and health sciences
Basics
Biology
3976493
https://en.wikipedia.org/wiki/Vanadate
Vanadate
In chemistry, a vanadate is an anionic coordination complex of vanadium. Often vanadate refers to oxoanions of vanadium, most of which exist in its highest oxidation state of +5. The complexes and are referred to as hexacyanovanadate(III) and nonachlorodivanadate(III), respectively. A simple vanadate ion is the tetrahedral orthovanadate anion, (which is also called vanadate(V)), which is present in e.g. sodium orthovanadate and in solutions of in strong base (pH > 13). Conventionally this ion is represented with a single double bond, however this is a resonance form as the ion is a regular tetrahedron with four equivalent oxygen atoms. Additionally a range of polyoxovanadate ions exist which include discrete ions and "infinite" polymeric ions. There are also vanadates, such as rhodium vanadate, , which has a statistical rutile structure where the and ions randomly occupy the positions in the rutile lattice, that do not contain a lattice of cations and balancing vanadate anions but are mixed oxides. In chemical nomenclature when vanadate forms part of the name, it indicates that the compound contains an anion with a central vanadium atom, e.g. ammonium hexafluorovanadate is a common name for the compound with the IUPAC name of ammonium hexafluoridovanadate(III). Examples of oxovanadate ions Some examples of discrete ions are "orthovanadate", tetrahedral. "pyrovanadate", corner-shared tetrahedra, similar to the dichromate ion , cyclic with corner-shared tetrahedra , cyclic with corner-shared tetrahedra , corner shared tetrahedra , ring. "decavanadate", edge- and corner-shared octahedra , fused octahedra Some examples of polymeric "infinite" ions are in e.g. , sodium metavanadate in In these ions vanadium exhibits tetrahedral, square pyramidal and octahedral coordination. In this respect vanadium shows similarities to tungstate and molybdate, whereas chromium however has a more limited range of ions. Aqueous solutions Dissolution of vanadium pentoxide in strongly basic aqueous solution gives the colourless ion. On acidification, this solution's colour gradually darkens through orange to red at around pH 7. Brown hydrated V2O5 precipitates around pH 2, redissolving to form a light yellow solution containing the ion. The number and identity of the oxyanions that exist between pH 13 and 2 depend on pH as well as concentration. For example, protonation of vanadate initiates a series of condensations to produce polyoxovanadate ions: pH 9–12: , pH 4–9: , , pH 2–4: , Pharmacological properties Vanadate is a potent inhibitor of certain plasma membrane ATPases, such as Na+/K+-ATPase and Ca2+-ATPase (PMCA). Acting as a transition-state analog of phosphate, vanadate undergoes nucleophillic attack by water during phosphoryl transfer, essentially "trapping" P-type ATPases in their phosphorylated E2 state. It also inhibits skeletal muscle actomyosin MgATPase activity and calcium activated force generation by actomyosin in the intact skeletal muscle contractile apparatus. However, it does not inhibit other ATPases, such as SERCA (sarco/endoplasmic reticulum Ca2+-ATPase) or mitochondrial ATPase.
Physical sciences
Metallic oxyanions
Chemistry
3978897
https://en.wikipedia.org/wiki/Soil%20texture
Soil texture
Soil texture is a classification instrument used both in the field and laboratory to determine soil classes based on their physical texture. Soil texture can be determined using qualitative methods such as texture by feel, and quantitative methods such as the hydrometer method based on Stokes' law. Soil texture has agricultural applications such as determining crop suitability and predicting the response of the soil to environmental and management conditions such as drought or calcium (lime) requirements. Soil texture focuses on the particles that are less than two millimeters in diameter, which include sand, silt, and clay. The USDA soil taxonomy and WRB soil classification systems use 12 textural classes whereas the UK-ADAS system uses 11. These classifications are based on the percentages of sand, silt, and clay in the soil. History The first classification, the International system, was proposed by Albert Atterberg in 1905 and was based on his studies in southern Sweden. Atterberg chose 20 μm for the upper limit of the silt fraction because particles smaller than that size were not visible to the naked eye, the suspension could be coagulated by salts, capillary rise within 24 hours was most rapid in this fraction, and the pores between compacted particles were so small as to prevent the entry of root hairs. Commission One of the International Society of Soil Science (ISSS) recommended its use at the first International Congress of Soil Science in Washington in 1927. Australia adopted this system, and its equal logarithmic intervals are an attractive feature worth maintaining. The United States Department of Agriculture (USDA) adopted its own system in 1938, and the Food and Agriculture Organization (FAO) used the USDA system in the FAO-UNESCO world soil map and recommended its use. Classification In the United States, twelve major soil texture classifications are defined by the United States Department of Agriculture. The twelve classifications are sand, loamy sand, sandy loam, loam, silt loam, silt, sandy clay loam, clay loam, silty clay loam, sandy clay, silty clay, and clay. Soil textures are classified by the fractions of each soil separate (sand, silt, and clay) present in a soil. Classifications are typically named for the primary constituent particle size or a combination of the most abundant particle sizes, e.g. "sandy clay" or "silty clay". A fourth term, loam, is used to describe a soil in which sand, silt, and clay contribute roughly equally to its properties, and it lends itself to the naming of even more classifications, e.g. "clay loam" or "silt loam". Determining soil texture is often aided with the use of a soil texture triangle plot. One side of the triangle represents percent sand, the second side represents percent clay, and the third side represents percent silt. If the percentages of sand, clay, and silt in the soil sample are known, then the triangle can be used to determine the soil texture classification, as sketched in the example below. For example, if a soil is 70 percent sand and 10 percent clay then the soil is classified as a sandy loam. The same method can be used starting on any side of the soil triangle. If the texture by feel method was used to determine the soil type, the triangle can also provide a rough estimate of the percentages of sand, silt, and clay in the soil. Chemical and physical properties of a soil are related to texture. Particle size and distribution will affect a soil's capacity for holding water and nutrients.
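A minimal sketch of the triangle lookup described above. The class boundaries here are simplified, approximate cut-offs chosen only to cover a few unambiguous regions of the USDA triangle (including the 70% sand / 10% clay sandy-loam example from the text); a full implementation would encode the complete boundary polygons of all twelve classes.

```python
def classify_texture(sand: float, clay: float) -> str:
    """Very simplified USDA-style texture lookup from percent sand and clay.

    Only a few clear-cut regions are handled; thresholds are approximate.
    Silt is taken as the remainder, so the three fractions sum to 100.
    """
    silt = 100.0 - sand - clay
    if not (0 <= sand <= 100 and 0 <= clay <= 100 and silt >= 0):
        raise ValueError("sand, clay and silt percentages must sum to 100")
    if sand >= 90 and clay < 10:
        return "sand"
    if clay >= 60:
        return "clay"
    if silt >= 85 and clay < 10:
        return "silt"
    if 52 <= sand <= 75 and 7 <= clay < 20:
        return "sandy loam"
    return "other (consult the full texture triangle)"

print(classify_texture(sand=70, clay=10))   # -> sandy loam (example from the text)
print(classify_texture(sand=20, clay=65))   # -> clay
```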
Fine-textured soils generally have a higher capacity for water retention, whereas sandy soils contain large pore spaces that allow leaching. Soil separates Soil separates are specific ranges of particle sizes. The smallest particles are clay particles and are classified as having diameters of less than 0.002 mm. Clay particles are plate-shaped instead of spherical, allowing for an increased specific surface area. The next smallest particles are silt particles and have diameters between 0.002 mm and 0.05 mm (in USDA soil taxonomy). The largest particles are sand particles and are larger than 0.05 mm in diameter. Furthermore, large sand particles can be described as coarse, intermediate as medium, and the smaller as fine. Other countries have their own particle size classifications. Methodology Texture by feel Hand analysis is a simple and effective means to rapidly assess and classify a soil's physical condition. Correctly executed, the procedure allows for rapid and frequent assessment of soil characteristics with little or no equipment. It is thus a useful tool for identifying spatial variation both within and between fields as well as identifying progressive changes and boundaries between soil map units (soil series). Texture by feel is a qualitative method, as it does not provide exact values of sand, silt, and clay. Although qualitative, the texture by feel flowchart can be an accurate way for a scientist or interested individual to analyze the relative proportions of sand, silt, and clay. The texture by feel method involves taking a small sample of soil and making a ribbon. A ribbon can be made by taking a ball of soil and pushing the soil between the thumb and forefinger and squeezing it upward into a ribbon. Allow the ribbon to emerge and extend over the forefinger until it breaks from its own weight. Measuring the length of the ribbon can help determine the amount of clay in the sample. After making a ribbon, excessively wet a small pinch of soil in the palm of the hand and rub it with the forefinger to determine the amount of sand in the sample. Soils that have a high percentage of sand, such as sandy loam or sandy clay, have a gritty texture. Soils that have a high percentage of silt, such as silty loam or silty clay, feel smooth. Soils that have a high percentage of clay, such as clay loam, have a sticky feel. Although the texture by feel method takes practice, it is a useful way to determine soil texture, especially in the field. The international soil classification system World Reference Base for Soil Resources (WRB) uses an alternative method to determine texture by feel, offering another flow chart. Sieving Sieving is a long-established but still widely used soil analysis technique. In sieving, a known weight of sample material is passed through a stack of progressively finer sieves. The amount collected on each sieve is weighed to determine the percentage weight in each size fraction. The method is used to determine the grain size distribution of soils that are greater than 75 μm in diameter, as sieving has a strong disadvantage at its lower measurement limit. For the finer fractions with a high content of clay and silt (below 60 μm), dispersion becomes challenging because of the high cohesiveness of the particles, the stickiness of the powder to the sieve, and electrostatic charges. Moreover, during sieving particles pass through the mesh opening with their smallest side, which means that plate-shaped clay and silt particles might be sieved through as well.
All of this generally leads to a massive underestimation of the fine fraction. In order to measure silt and clay (with a particle size below 60 μm), a second, independent sizing method (most often the hydrometer or pipette technique) is used on the sample taken from the bottom sieve. The particle size distribution obtained from sieve analysis should be combined with the data from a sedimentation analysis to establish a complete particle size distribution of the sample. Hydrometer method Sedimentation analysis (e.g. the pipette or hydrometer method) is commonly used in the soil industry and in geology to classify sediments. The hydrometer method was developed in 1927 and is still widely used today. The hydrometer method of determining soil texture is a quantitative measurement providing estimates of the percent sand, clay, and silt in the soil based on Stokes' law, which expresses the relationship between settling velocity and particle size. According to this law the particles settle because of their weight under gravity. However, there are two additional forces acting in the direction opposite to the particles' motion, which determine the equilibrium condition at which a particle falls at a constant velocity, called the terminal velocity. The hydrometer method requires the use of sodium hexametaphosphate, which acts as a dispersing agent to separate soil aggregates. The soil is mixed with the sodium hexametaphosphate solution on an orbital shaker overnight. The solution is transferred to one liter graduated cylinders and filled with water. The soil solution is mixed with a metal plunger to disperse the soil particles. The soil particles separate based on size and sink to the bottom. Sand particles sink to the bottom of the cylinder first. Silt particles sink to the bottom of the cylinder after the sand. Clay particles separate out above the silt layer. Measurements are taken using a soil hydrometer. A soil hydrometer measures the relative density of liquids (the density of a liquid compared to the density of water). The hydrometer is lowered into the cylinder containing the soil mixture at different times: forty-five seconds to measure sand content, one and a half hours to measure silt content, and between six and twenty-four hours (depending on the protocol used) to measure clay. The number on the hydrometer that is visible (above the soil solution) is recorded. A blank (containing only water and the dispersing agent) is used to calibrate the hydrometer, and the blank reading is subtracted from each of the sample readings. Because the early reading still reflects the silt and clay remaining in suspension, while the late reading reflects only the clay, the percentages are calculated as follows:
Percent sand = 100 − [(sand hydrometer reading − blank reading) / (dried mass of soil)] × 100
Percent clay = [(clay hydrometer reading − blank reading) / (dried mass of soil)] × 100
Percent silt = 100 − (percent sand + percent clay)
The Stokes' diameter determined via the sedimentation method is the diameter of a sphere having the same settling velocity and the same density as the particle. This is the reason why sedimentation analysis applies well when it can be assumed that particles are spherical, have similar densities, have negligible interactions and are small enough to ensure that the fluid flow stays laminar. Deviations from Stokes' equation are to be expected in the case of irregularly shaped particles, such as clay particles, which are mostly platy or tubular.
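A small sketch of the calculations just described, assuming the standard two-reading hydrometer relationships (sand from the early reading by difference, clay from the late reading, silt as the remainder) together with the Stokes settling-velocity expression; all numeric inputs are illustrative.

```python
def hydrometer_percentages(dry_mass_g, sand_reading_g, clay_reading_g, blank_g):
    """Percent sand, silt and clay from hydrometer readings (grams in suspension).

    The early (~45 s) reading reflects silt + clay still suspended, so sand is
    obtained by difference; the late reading reflects clay only.
    """
    sand = 100.0 - (sand_reading_g - blank_g) / dry_mass_g * 100.0
    clay = (clay_reading_g - blank_g) / dry_mass_g * 100.0
    silt = 100.0 - sand - clay
    return sand, silt, clay

def stokes_settling_velocity(diameter_m, particle_density=2650.0,
                             fluid_density=1000.0, viscosity=0.001, g=9.81):
    """Terminal settling velocity (m/s) of a sphere per Stokes' law:
    v = g * (rho_p - rho_f) * d^2 / (18 * mu)."""
    return g * (particle_density - fluid_density) * diameter_m ** 2 / (18.0 * viscosity)

# Illustrative example: 40 g of dry soil, readings of 28 g (45 s) and 10 g (late), blank 4 g.
print(hydrometer_percentages(40.0, 28.0, 10.0, 4.0))   # -> (40.0, 45.0, 15.0)
# Settling velocity of a 20 micrometre (silt-sized) quartz sphere in water:
print(f"{stokes_settling_velocity(20e-6):.2e} m/s")    # ~3.6e-04 m/s
```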
The stable settling position of such plate-shaped particles is with their maximum cross-sectional area perpendicular to the direction of motion. For this reason, the drag resistance of the particles increases and the settling velocity decreases. The calculated diameter increases with settling velocity, so a lower velocity yields a smaller calculated diameter and hence an overestimation of the fine size fraction. Sedimentation analysis is in any case limited for particles smaller than 0.2 μm, because such small particles undergo Brownian motion in the suspension and no longer settle according to Stokes' law. Sedimentation analysis can be operated continuously with a high degree of accuracy and repeatability. The particle size distribution of a soil containing a significant number of finer particles (silt and clay) cannot be determined by sieve analysis alone, so sedimentation analysis is used to determine the lower range of the particle size distribution. Laser diffraction Laser diffraction is a measurement technique for determining the particle size distribution of samples, either dispersed in a liquid or as a dry powder. The technique is based on light waves being diffracted when they encounter particles in a sample. The measured equivalent spherical diameter is the diameter of a sphere whose cross-section produces the same diffraction pattern as the investigated particle. The angle of diffraction depends on the particle size, hence the pattern of diffraction depends on the relative amounts of the different particle sizes present in the sample. This diffraction pattern is detected and analyzed by means of Mie and Fraunhofer diffraction models. The outcome of the measurement is a particle size distribution (PSD). Laser diffraction yields not only the particle size distribution and the corresponding volume-weighted D-values but also the percentage of particles in the main size classes used for soil classification. Compared to other techniques, laser diffraction is a fast and cost-effective method for measuring particle size and quickly analyzing soil samples. A major advantage is the built-in dispersion unit (e.g. dispersion by air pressure or by ultrasound) of laser diffraction instruments. Therefore, dry samples can be measured without the external sample preparation steps that are required for sieving and sedimentation analysis. Moreover, since the sample can be dispersed properly, there is no need to combine two different measurement techniques to obtain the full range of the particle size distribution, including the silt and clay content. Both the Fraunhofer and Mie laser diffraction theories assume that particles are spherically shaped. This results in a measurement error, since the small particles in soil samples, clay and silt in particular, are elongated and anisotropic. The particle diameter in the laser diffraction method is determined in relation to the particle's potential volume, which is calculated on the basis of an optical diffraction image at the edges of the particle cross-section. For clay particles, the measured dimension is the diameter of the plate's cross-section, which is treated in the calculations as the diameter of a sphere. Therefore, their dimensions are usually overestimated in comparison to those measured via sedimentation analysis. The error associated with the assumption of particle sphericity furthermore depends on the degree of anisotropy.
The optical properties of anisotropic particles, such as the refractive index and absorption index, change according to their orientation relative to the laser beam, which is itself variable. Therefore, at different particle orientations different cross-sections will be measured and different diffraction patterns produced. For clays with sizes close to the wavelength of the laser beam, Mie theory would be desirable. This requires precise knowledge of the complex refractive index of the particles' material, including their absorption coefficient. Because these parameters are often difficult to retrieve, especially the light absorption coefficients for various particles and soil grains, Fraunhofer theory, which only takes into account the light diffraction phenomena at the edge of the particles, is often recommended for natural soils. Additional methods There are several additional quantitative methods to determine soil texture. Some examples of these methods are the pipette method, x-ray sedimentation, the particulate organic matter (POM) method, and the rapid method. X-ray sedimentation The x-ray sedimentation technique is a hybrid technique which combines sedimentation and x-ray absorption. The particle size is calculated from the terminal settling velocities of particles by applying Stokes' law. The absorption of the x-radiation is used to determine the relative mass concentration for each size class by applying the Beer-Lambert-Bouguer law.
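The Beer-Lambert-Bouguer step mentioned above can be illustrated with a short calculation: the mass concentration of the suspension at the measurement depth is recovered from the attenuation of the X-ray beam. The attenuation coefficient and intensities below are illustrative placeholder values, not taken from any instrument.

```python
import math

def concentration_from_attenuation(i_transmitted, i_incident, mass_attenuation, path_length_m):
    """Mass concentration (kg/m^3) from Beer-Lambert-Bouguer attenuation:
    I = I0 * exp(-mu * c * l)  =>  c = ln(I0 / I) / (mu * l)

    mass_attenuation: mass attenuation coefficient mu in m^2/kg
    path_length_m:    beam path length l through the suspension in metres
    """
    return math.log(i_incident / i_transmitted) / (mass_attenuation * path_length_m)

# Illustrative values only: 80% transmission, mu = 0.5 m^2/kg, 1 cm path.
print(f"{concentration_from_attenuation(0.80, 1.00, 0.5, 0.01):.1f} kg/m^3")   # ~44.6 kg/m^3
```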
Physical sciences
Soil science
Earth science
21694580
https://en.wikipedia.org/wiki/Interstellar%20object
Interstellar object
An interstellar object is an astronomical object (such as an asteroid, a comet, or a rogue planet, but not a star or stellar remnant) in interstellar space that is not gravitationally bound to a star. This term can also be applied to an object that is on an interstellar trajectory but is temporarily passing close to a star, such as certain asteroids and comets (that is, exoasteroids and exocomets). In the latter case, the object may be called an interstellar interloper. The first interstellar objects discovered were rogue planets, planets ejected from their original stellar system (e.g., OTS 44 or Cha 110913−773444), though they are difficult to distinguish from sub-brown dwarfs, planet-mass objects that formed in interstellar space as stars do. The first interstellar object discovered traveling through the Solar System was 1I/ʻOumuamua in 2017. The second was 2I/Borisov in 2019. Both possess significant hyperbolic excess velocity, indicating they did not originate in the Solar System. The discovery of ʻOumuamua inspired the tentative identification, by astronomers Amir Siraj and Avi Loeb in 2019, of CNEOS 2014-01-08, also known as the Manus Island fireball, as an interstellar object that impacted the Earth. This was supported by the U.S. Space Command in 2022 based on the object's velocity relative to the Sun. In May 2023, astronomers reported the possible capture of other interstellar objects in near-Earth orbit (NEO) over the years. However, NASA and other astronomers doubt this, and still other experts found Earth-related explanations for the purported meteorite impact instead. Interstellar objects were once bound to a host star and have since become unbound. Different processes can cause planets and smaller objects (planetesimals) to become unbound from their host star. Nomenclature With the first discovery of an interstellar object in the Solar System, the IAU proposed a new series of small-body designations for interstellar objects, the I numbers, similar to the comet numbering system. The Minor Planet Center will assign the numbers. Provisional designations for interstellar objects will be handled using the C/ or A/ prefix (comet or asteroid), as appropriate. Overview Astronomers estimate that several interstellar objects of extrasolar origin (like ʻOumuamua) pass inside the orbit of Earth each year, and that 10,000 are passing inside the orbit of Neptune on any given day. Interstellar comets occasionally pass through the inner Solar System and approach with random velocities, mostly from the direction of the constellation Hercules because the Solar System is moving in that direction, called the solar apex. Until the discovery of ʻOumuamua, the fact that no comet with a speed greater than the Sun's escape velocity had been observed was used to place upper limits on their density in interstellar space. A paper by Torbett indicated that the density was no more than 10¹³ (10 trillion) comets per cubic parsec. Other analyses, of data from LINEAR, set the upper limit at 4.5×10⁻⁴ comets per cubic AU, or 10¹² (1 trillion) comets per cubic parsec. A more recent estimate by David C. Jewitt and colleagues, following the detection of ʻOumuamua, predicts that "The steady-state population of similar, ~100 m scale interstellar objects inside the orbit of Neptune is ~1, each with a residence time of ~10 years." Current models of Oort cloud formation predict that more comets are ejected into interstellar space than are retained in the Oort cloud, with estimates varying from 3 to 100 times as many.
Other simulations suggest that 90–99% of comets are ejected. There is no reason to believe comets formed in other star systems would not be similarly scattered. Amir Siraj and Avi Loeb demonstrated that the Oort Cloud could have been formed from planetesimals ejected from other stars in the Sun's birth cluster. Both researchers proposed a search for ʻOumuamua-like objects which are trapped in the Solar System as a result of losing orbital energy through a close encounter with Jupiter. It is possible for objects orbiting a star to be ejected due to interaction with a third massive body, thereby becoming interstellar objects. Such a process was initiated in the early 1980s when C/1980 E1, initially gravitationally bound to the Sun, passed near Jupiter and was accelerated sufficiently to reach escape velocity from the Solar System. This changed its orbit from elliptical to hyperbolic and made it the most eccentric known object at the time, with an eccentricity of 1.057. It is heading for interstellar space. Due to present observational difficulties, an interstellar object can usually only be detected if it passes through the Solar System, where it can be distinguished by its strongly hyperbolic trajectory and a hyperbolic excess velocity of more than a few km/s, proving that it is not gravitationally bound to the Sun. In contrast, gravitationally bound objects follow elliptic orbits around the Sun. (There are a few objects whose orbits are so close to parabolic that their gravitationally bound status is unclear.) An interstellar comet can probably, on rare occasions, be captured into a heliocentric orbit while passing through the Solar System. Computer simulations show that Jupiter is the only planet massive enough to capture one, and that this can be expected to occur once every sixty million years. Comets Machholz 1 and Hyakutake C/1996 B2 are possible examples of such comets. They have atypical chemical makeups for comets in the Solar System. Recent research suggests that asteroid 514107 Kaʻepaokaʻawela may be a former interstellar object, captured some 4.5 billion years ago, as evidenced by its co-orbital motion with Jupiter and its retrograde orbit around the Sun. In addition, comet C/2018 V1 (Machholz-Fujikawa-Iwamoto) has a significant probability (72.6%) of having an extrasolar provenance, although an origin in the Oort cloud cannot be excluded. Harvard astronomers suggest that matter (and potentially dormant spores) can be exchanged across vast distances. The detection of ʻOumuamua crossing the inner Solar System confirms the possibility of a material link with exoplanetary systems. Interstellar visitors in the Solar System cover the whole range of sizes, from kilometer-sized objects down to submicron particles. Also, interstellar dust and meteoroids carry with them valuable information from their parent systems. Detection of these objects along the continuum of sizes is, however, not straightforward. The smallest interstellar dust particles are filtered out of the Solar System by electromagnetic forces, while the largest ones are too sparse to obtain good statistics from in situ spacecraft detectors. Discrimination between interstellar and interplanetary populations can be a challenge for intermediate (0.1–1 micrometer) sizes, which can vary widely in velocity and directionality. The identification of interstellar meteoroids, observed in the Earth's atmosphere as meteors, is highly challenging and requires high accuracy measurements and appropriate error examinations.
Otherwise, measurement errors can transfer near-parabolic orbits over the parabolic limit and create an artificial population of hyperbolic particles, often interpreted as of interstellar origin. Large interstellar visitors like asteroids and comets were detected the first time in the solar system in 2017 (1I/'Oumuamua) and 2019 (2I/Borisov) and are expected to be detected more frequently with new telescopes, e.g. the Vera Rubin Observatory. Amir Siraj and Avi Loeb have predicted that the Vera C. Rubin Observatory will be capable of detecting an anisotropy in the distribution of interstellar objects due to the Sun's motion relative to the Local Standard of Rest and identify the characteristic ejection speed of interstellar objects from their parent stars. In May 2023, astronomers reported the possible capture of other interstellar objects in Near Earth Orbit (NEO) over the years. Confirmed objects 1I/2017 U1 (ʻOumuamua) A dim object was discovered on October 19, 2017, by the Pan-STARRS telescope, at an apparent magnitude of 20. The observations showed that it follows a strongly hyperbolic trajectory around the Sun at a speed greater than the solar escape velocity, in turn meaning that it is not gravitationally bound to the Solar System and likely to be an interstellar object. It was initially named C/2017 U1 because it was assumed to be a comet, and was renamed to A/2017 U1 after no cometary activity was found on October 25. After its interstellar nature was confirmed, it was renamed to 1I/ʻOumuamua – "1" because it is the first such object to be discovered, "I" for interstellar, and "'Oumuamua" is a Hawaiian word meaning "a messenger from afar arriving first". The lack of cometary activity from ʻOumuamua suggests an origin from the inner regions of whatever stellar system it came from, losing all surface volatiles within the frost line, much like the rocky asteroids, extinct comets and damocloids we know from the Solar System. This is only a suggestion, as ʻOumuamua might very well have lost all surface volatiles to eons of cosmic radiation exposure in interstellar space, developing a thick crust layer after it was expelled from its parent system. ʻOumuamua has an eccentricity of 1.199, which was the highest eccentricity ever observed for any non-artificial object in the Solar System by a wide margin prior to the discovery of comet 2I/Borisov in August 2019. In September 2018, astronomers described several possible home star systems from which ʻOumuamua may have begun its interstellar journey. 2I/Borisov The object was discovered on 30 August 2019 at MARGO, Nauchnyy, Crimea by Gennadiy Borisov using his custom-built 0.65-meter telescope. On 13 September 2019, the Gran Telescopio Canarias obtained a low-resolution visible spectrum of 2I/Borisov that revealed that this object has a surface composition not too different from that found in typical Oort Cloud comets. The IAU Working Group for Small Body Nomenclature kept the name Borisov, giving the comet the interstellar designation of 2I/Borisov. On 12 March 2020, astronomers reported observational evidence of "ongoing nucleus fragmentation" from Borisov. Candidates In 2007, Afanasiev et al. reported the likely detection of a multi-centimeter intergalactic meteor hitting the atmosphere above the Special Astrophysical Observatory of the Russian Academy of Sciences on July 28, 2006. 
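The hyperbolic-trajectory criterion described above, and its sensitivity to velocity errors, can be illustrated with a short calculation based on the specific orbital energy: an object is unbound if its heliocentric speed exceeds the local solar escape speed, and the hyperbolic excess velocity follows from the energy surplus. The example speeds below are illustrative values, not measurements of any real object.

```python
import math

GM_SUN = 1.32712440018e20   # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

def hyperbolic_excess_velocity(speed_kms: float, distance_au: float) -> float | None:
    """Return v_infinity in km/s if the orbit is hyperbolic (unbound), else None.

    Uses the specific orbital energy eps = v^2/2 - GM/r; the orbit is unbound
    when eps > 0, and v_inf = sqrt(2 * eps).
    """
    v = speed_kms * 1e3
    r = distance_au * AU
    eps = 0.5 * v * v - GM_SUN / r
    return math.sqrt(2.0 * eps) / 1e3 if eps > 0 else None

# The solar escape speed at 1 au is about 42.1 km/s, so a small velocity error
# can flip a near-parabolic orbit across the bound/unbound boundary:
print(hyperbolic_excess_velocity(41.9, 1.0))   # None: below escape speed, bound
print(hyperbolic_excess_velocity(42.5, 1.0))   # ~5.7 km/s: barely hyperbolic
print(hyperbolic_excess_velocity(49.0, 1.0))   # ~25 km/s: strongly hyperbolic
```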
In November 2018, Harvard astronomers Amir Siraj and Avi Loeb reported that there should be hundreds of 'Oumuamua-size interstellar objects in the Solar System, based on calculated orbital characteristics, and presented several centaur candidates such as and . These are all orbiting the Sun, but may have been captured in the distant past. Both researchers have proposed methods for increasing the discovery rate of interstellar objects that include stellar occultations, optical signatures from impacts with the moon or the Earth's atmosphere, and radio flares from collisions with neutron stars. 2014 interstellar meteor CNEOS 2014-01-08 (also known as Interstellar meteor 1; IM1), a meteor with a mass of 0.46 tons and width of , burned up in the Earth's atmosphere on January 8, 2014. A 2019 preprint suggested this meteor had been of interstellar origin. It had a heliocentric speed of and an asymptotic speed of , and it exploded at 17:05:34 UTC near Papua New Guinea at an altitude of . After declassifying the data in April 2022, the U.S. Space Command, based on information collected from its planetary defense sensors, confirmed the velocity of the potential interstellar meteor. In 2023, The Galileo Project completed an expedition to retrieve small fragments of the apparently peculiar meteor. Claims about their findings have been doubted by their peers according to a report in The New York Times. Further related studies were reported on 1 September 2023. Other astronomers doubt the interstellar origin because the meteoroid catalog used does not report uncertainties on the incoming velocity. The validity of any single data point (especially for smaller meteoroids) remains questionable. In November 2022, a paper was published, claiming the anomalous properties (including its high strength and strongly hyperbolic trajectory) of CNEOS 2014-01-08 are better described as measurement error rather than genuine parameters. Successful retrieval of any meteoroid fragments is highly unlikely. Common micrometeorites would be indistinguishable from one another. 2017 interstellar meteor CNEOS 2017-03-09 (aka Interstellar meteor 2; IM2), a meteor with a mass of roughly 6.3 tons, burned up in the Earth's atmosphere on March 9, 2017. Similar to IM1, it has a high mechanical strength. In September 2022, astronomers Amir Siraj and Avi Loeb reported the discovery of a candidate interstellar meteor, CNEOS 2017-03-09, that impacted Earth in 2017 and is considered, based in part on the high material strength of the meteor, to be a possible interstellar object. Hypothetical missions With current space technology, close visits and orbital missions are challenging due to their high speeds, though not impossible. The Initiative for Interstellar Studies (i4is) launched in 2017 Project Lyra to assess the feasibility of a mission to ʻOumuamua. Several options for sending a spacecraft to ʻOumuamua within a time-frame of 5 to 25 years were suggested. One option is using first a Jupiter flyby followed by a close solar flyby at in order to take advantage of the Oberth effect. Different mission durations and their velocity requirements were explored with respect to the launch date, assuming direct impulsive transfer to the intercept trajectory. The Comet Interceptor spacecraft by ESA and JAXA, planned to launch in 2029, will be positioned at the Sun-Earth L2 point to wait for a suitable long-period comet to intercept and flyby for study. 
If no suitable comet is identified during its three-year wait, the spacecraft could be retasked on short notice to intercept an interstellar object, if one is reachable.
Physical sciences
Astronomy basics
Astronomy
33307786
https://en.wikipedia.org/wiki/Supercapacitor
Supercapacitor
A supercapacitor (SC), also called an ultracapacitor, is a high-capacity capacitor, with a capacitance value much higher than solid-state capacitors but with lower voltage limits. It bridges the gap between electrolytic capacitors and rechargeable batteries. It typically stores 10 to 100 times more energy per unit volume or mass than electrolytic capacitors, can accept and deliver charge much faster than batteries, and tolerates many more charge and discharge cycles than rechargeable batteries. Unlike ordinary capacitors, supercapacitors do not use the conventional solid dielectric, but rather, they use electrostatic double-layer capacitance and electrochemical pseudocapacitance, both of which contribute to the total energy storage of the capacitor. Supercapacitors are used in applications requiring many rapid charge/discharge cycles, rather than long-term compact energy storage: in automobiles, buses, trains, cranes and elevators, where they are used for regenerative braking, short-term energy storage, or burst-mode power delivery. Smaller units are used as power backup for static random-access memory (SRAM). Background The electrochemical charge storage mechanisms in solid media can be roughly (there is an overlap in some systems) classified into 3 types: Electrostatic double-layer capacitors (EDLCs) use carbon electrodes or derivatives with much higher electrostatic double-layer capacitance than electrochemical pseudocapacitance, achieving separation of charge in a Helmholtz double layer at the interface between the surface of a conductive electrode and an electrolyte. The separation of charge is of the order of a few ångströms (0.3–0.8 nm), much smaller than in a conventional capacitor. The electric charge in EDLCs is stored in a two-dimensional interphase (surface) of an electronic conductor (e.g. carbon particle) and ionic conductor (electrolyte solution). Batteries with solid electroactive materials store charge in bulk solid phases by virtue of redox chemical reactions. Electrochemical supercapacitors (ECSCs) fall in between EDLs and batteries. ECSCs use metal oxide or conducting polymer electrodes with a high amount of electrochemical pseudocapacitance additional to the double-layer capacitance. Pseudocapacitance is achieved by Faradaic electron charge-transfer with redox reactions, intercalation or electrosorption. In solid-state capacitors, the mobile charges are electrons, and the gap between electrodes is a layer of a dielectric. In electrochemical double-layer capacitors, the mobile charges are solvated ions (cations and anions), and the effective thickness is determined on each of the two electrodes by their electrochemical double layer structure. In batteries the charge is stored in the bulk volume of solid phases, which have both electronic and ionic conductivities. In electrochemical supercapacitors, the charge storage mechanisms either combine the double-layer and battery mechanisms, or are based on mechanisms, which are intermediate between true double layer and true battery. History In the early 1950s, General Electric engineers began experimenting with porous carbon electrodes in the design of capacitors, from the design of fuel cells and rechargeable batteries. Activated charcoal is an electrical conductor that is an extremely porous "spongy" form of carbon with a high specific surface area. In 1957 H. Becker developed a "Low voltage electrolytic capacitor with porous carbon electrodes". 
He believed that the energy was stored as a charge in the carbon pores as in the pores of the etched foils of electrolytic capacitors. Because the double layer mechanism was not known by him at the time, he wrote in the patent: "It is not known exactly what is taking place in the component if it is used for energy storage, but it leads to an extremely high capacity." General Electric did not immediately pursue this work. In 1966 researchers at Standard Oil of Ohio (SOHIO) developed another version of the component as "electrical energy storage apparatus", while working on experimental fuel cell designs. The nature of electrochemical energy storage was not described in this patent. Even in 1970, the electrochemical capacitor patented by Donald L. Boos was registered as an electrolytic capacitor with activated carbon electrodes. Early electrochemical capacitors used two aluminum foils covered with activated carbon (the electrodes) that were soaked in an electrolyte and separated by a thin porous insulator. This design gave a capacitor with a capacitance on the order of one farad, significantly higher than electrolytic capacitors of the same dimensions. This basic mechanical design remains the basis of most electrochemical capacitors. SOHIO did not commercialize their invention, licensing the technology to NEC, who finally marketed the results as "supercapacitors" in 1978, to provide backup power for computer memory. Between 1975 and 1980 Brian Evans Conway conducted extensive fundamental and development work on ruthenium oxide electrochemical capacitors. In 1991 he described the difference between "supercapacitor" and "battery" behaviour in electrochemical energy storage. In 1999 he defined the term "supercapacitor" to make reference to the increase in observed capacitance by surface redox reactions with faradaic charge transfer between electrodes and ions. His "supercapacitor" stored electrical charge partially in the Helmholtz double-layer and partially as result of faradaic reactions with "pseudocapacitance" charge transfer of electrons and protons between electrode and electrolyte. The working mechanisms of pseudocapacitors are redox reactions, intercalation and electrosorption (adsorption onto a surface). With his research, Conway greatly expanded the knowledge of electrochemical capacitors. The market expanded slowly. That changed around 1978 as Panasonic marketed its Goldcaps brand. This product became a successful energy source for memory backup applications. Competition started only years later. In 1987 ELNA "Dynacap"s entered the market. First generation EDLC's had relatively high internal resistance that limited the discharge current. They were used for low current applications such as powering SRAM chips or for data backup. At the end of the 1980s, improved electrode materials increased capacitance values. At the same time, the development of electrolytes with better conductivity lowered the equivalent series resistance (ESR) increasing charge/discharge currents. The first supercapacitor with low internal resistance was developed in 1982 for military applications through the Pinnacle Research Institute (PRI), and were marketed under the brand name "PRI Ultracapacitor". In 1992, Maxwell Laboratories (later Maxwell Technologies) took over this development. Maxwell adopted the term Ultracapacitor from PRI and called them "Boost Caps" to underline their use for power applications. 
Since capacitors' energy content increases with the square of the voltage, researchers were looking for a way to increase the electrolyte's breakdown voltage. In 1994, using the anode of a 200 V high-voltage tantalum electrolytic capacitor, David A. Evans developed an "Electrolytic-Hybrid Electrochemical Capacitor". These capacitors combine features of electrolytic and electrochemical capacitors. They combine the high dielectric strength of an anode from an electrolytic capacitor with the high capacitance of a pseudocapacitive metal oxide (ruthenium(IV) oxide) cathode from an electrochemical capacitor, yielding a hybrid electrochemical capacitor. Evans' capacitors, coined Capattery, had an energy content about a factor of 5 higher than a comparable tantalum electrolytic capacitor of the same size. Their high costs limited them to specific military applications. Recent developments include lithium-ion capacitors. These hybrid capacitors were pioneered by Fujitsu's FDK in 2007. They combine an electrostatic carbon electrode with a pre-doped lithium-ion electrochemical electrode. This combination increases the capacitance value. Additionally, the pre-doping process lowers the anode potential and results in a high cell output voltage, further increasing specific energy. Research departments in many companies and universities are working to improve characteristics such as specific energy, specific power, and cycle stability, and to reduce production costs. Design Basic design Electrochemical capacitors (supercapacitors) consist of two electrodes separated by an ion-permeable membrane (separator), and an electrolyte ionically connecting both electrodes. When the electrodes are polarized by an applied voltage, ions in the electrolyte form electric double layers of opposite polarity to the electrode's polarity. For example, positively polarized electrodes will have a layer of negative ions at the electrode/electrolyte interface along with a charge-balancing layer of positive ions adsorbing onto the negative layer. The opposite is true for the negatively polarized electrode. Additionally, depending on electrode material and surface shape, some ions may permeate the double layer, becoming specifically adsorbed ions, and contribute pseudocapacitance to the total capacitance of the supercapacitor. Capacitance distribution The two electrodes form a series circuit of two individual capacitors C1 and C2. The total capacitance Ctotal is given by the formula Ctotal = (C1 · C2) / (C1 + C2), equivalently 1/Ctotal = 1/C1 + 1/C2. Supercapacitors may have either symmetric or asymmetric electrodes. Symmetry implies that both electrodes have the same capacitance value, yielding a total capacitance of half the value of each single electrode (if C1 = C2, then Ctotal = ½ C1). For asymmetric capacitors, the total capacitance can be taken as that of the electrode with the smaller capacitance (if C1 >> C2, then Ctotal ≈ C2). Storage principles Electrochemical capacitors use the double-layer effect to store electric energy; however, this double-layer has no conventional solid dielectric to separate the charges. There are two storage principles in the electric double-layer of the electrodes that contribute to the total capacitance of an electrochemical capacitor: Double-layer capacitance, electrostatic storage of the electrical energy achieved by separation of charge in a Helmholtz double layer. Pseudocapacitance, electrochemical storage of the electrical energy. The original type uses faradaic redox reactions with charge-transfer.
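A minimal sketch of the two relations used above: the series combination of the two electrode capacitances and the quadratic dependence of stored energy on voltage (E = ½·C·V²). The numbers are illustrative, not the ratings of any particular device.

```python
def total_capacitance(c1: float, c2: float) -> float:
    """Total capacitance of the two electrode capacitances in series."""
    return (c1 * c2) / (c1 + c2)

def stored_energy(capacitance_f: float, voltage_v: float) -> float:
    """Energy in joules stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Symmetric cell: two equal 100 F electrodes give half the single-electrode value.
print(total_capacitance(100.0, 100.0))        # -> 50.0 F
# Asymmetric cell: the smaller electrode dominates.
print(total_capacitance(1000.0, 50.0))        # -> ~47.6 F
# Doubling the cell voltage quadruples the stored energy.
print(stored_energy(50.0, 2.7), stored_energy(50.0, 5.4))   # -> 182.25 J, 729.0 J
```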
Both capacitances are only separable by measurement techniques. The amount of charge stored per unit voltage in an electrochemical capacitor is primarily a function of the electrode size, although the amount of capacitance of each storage principle can vary extremely. Electrical double-layer capacitance Every electrochemical capacitor has two electrodes, mechanically separated by a separator, which are ionically connected to each other via the electrolyte. The electrolyte is a mixture of positive and negative ions dissolved in a solvent such as water. At each of the two electrode surfaces there originates an area in which the liquid electrolyte contacts the conductive metallic surface of the electrode. This interface forms a common boundary between two different phases of matter, such as an insoluble solid electrode surface and an adjacent liquid electrolyte. At this interface the double-layer effect occurs. Applying a voltage to an electrochemical capacitor causes both electrodes in the capacitor to generate electrical double-layers. These double-layers consist of two layers of charges: one electronic layer is in the surface lattice structure of the electrode, and the other, with opposite polarity, emerges from dissolved and solvated ions in the electrolyte. The two layers are separated by a monolayer of solvent molecules, e.g., for water as solvent by water molecules, called the inner Helmholtz plane (IHP). Solvent molecules adhere by physical adsorption on the surface of the electrode and separate the oppositely polarized ions from each other, and can be idealised as a molecular dielectric. In the process, there is no transfer of charge between electrode and electrolyte, so the forces that cause the adhesion are not chemical bonds but physical forces, e.g., electrostatic forces. The adsorbed molecules are polarized, but, due to the lack of transfer of charge between electrolyte and electrode, suffer no chemical changes. The amount of charge in the electrode is matched by the magnitude of counter-charges in the outer Helmholtz plane (OHP). This double-layer phenomenon stores electrical charges as in a conventional capacitor. The double-layer charge forms a static electric field in the molecular layer of the solvent molecules in the IHP that corresponds to the strength of the applied voltage. The double-layer serves approximately as the dielectric layer in a conventional capacitor, albeit with the thickness of a single molecule. Thus, the standard formula for conventional plate capacitors can be used to calculate their capacitance: C = ε·A/d. Accordingly, capacitance C is greatest in capacitors made from materials with a high permittivity ε, large electrode plate surface areas A and a small distance between plates d. As a result, double-layer capacitors have much higher capacitance values than conventional capacitors, arising from the extremely large surface area of activated carbon electrodes and the extremely thin double-layer distance on the order of a few ångströms (0.3–0.8 nm), on the order of the Debye length. Assuming that the minimum distance between the electrode and the charge-accumulating region cannot be less than the typical distance between negative and positive charges in atoms of ~0.05 nm, a general capacitance upper limit of ~18 μF/cm² has been predicted for non-faradaic capacitors. The main drawback of the carbon electrodes of double-layer SCs is the small value of their quantum capacitance, which acts in series with the capacitance of the ionic space charge.
Therefore, further increase of density of capacitance in SCs can be connected with increasing of quantum capacitance of carbon electrode nanostructures. The amount of charge stored per unit voltage in an electrochemical capacitor is primarily a function of the electrode size. The electrostatic storage of energy in the double-layers is linear with respect to the stored charge, and correspond to the concentration of the adsorbed ions. Also, while charge in conventional capacitors is transferred via electrons, capacitance in double-layer capacitors is related to the limited moving speed of ions in the electrolyte and the resistive porous structure of the electrodes. Since no chemical changes take place within the electrode or electrolyte, charging and discharging electric double-layers in principle is unlimited. Real supercapacitors lifetimes are only limited by electrolyte evaporation effects. Electrochemical pseudocapacitance Applying a voltage at the electrochemical capacitor terminals moves electrolyte ions to the opposite polarized electrode and forms a double-layer in which a single layer of solvent molecules acts as separator. Pseudocapacitance can originate when specifically adsorbed ions out of the electrolyte pervade the double-layer. This pseudocapacitance stores electrical energy by means of reversible faradaic redox reactions on the surface of suitable electrodes in an electrochemical capacitor with an electric double-layer. Pseudocapacitance is accompanied with an electron charge-transfer between electrolyte and electrode coming from a de-solvated and adsorbed ion whereby only one electron per charge unit is participating. This faradaic charge transfer originates by a very fast sequence of reversible redox, intercalation or electrosorption processes. The adsorbed ion has no chemical reaction with the atoms of the electrode (no chemical bonds arise) since only a charge-transfer take place. The electrons involved in the faradaic processes are transferred to or from valence electron states (orbitals) of the redox electrode reagent. They enter the negative electrode and flow through the external circuit to the positive electrode where a second double-layer with an equal number of anions has formed. The electrons reaching the positive electrode are not transferred to the anions forming the double-layer, instead they remain in the strongly ionized and "electron hungry" transition-metal ions of the electrode's surface. As such, the storage capacity of faradaic pseudocapacitance is limited by the finite quantity of reagent in the available surface. A faradaic pseudocapacitance only occurs together with a static double-layer capacitance, and its magnitude may exceed the value of double-layer capacitance for the same surface area by factor of 100, depending on the nature and the structure of the electrode, because all the pseudocapacitance reactions take place only with de-solvated ions, which are much smaller than solvated ion with their solvating shell. The amount of pseudocapacitance has a linear function within narrow limits determined by the potential-dependent degree of surface coverage of the adsorbed anions. The ability of electrodes to accomplish pseudocapacitance effects by redox reactions, intercalation or electrosorption strongly depends on the chemical affinity of electrode materials to the ions adsorbed on the electrode surface as well as on the structure and dimension of the electrode pores. 
Materials exhibiting redox behavior for use as electrodes in pseudocapacitors are transition-metal oxides like RuO2, IrO2, or MnO2 inserted by doping in the conductive electrode material such as active carbon, as well as conducting polymers such as polyaniline or derivatives of polythiophene covering the electrode material. The amount of electric charge stored in a pseudocapacitance is linearly proportional to the applied voltage. The unit of pseudocapacitance is farad, same as that of capacitance. Although conventional battery-type electrode materials also use chemical reactions to store charge, they show very different electrical profiles, as the rate of discharge is limited by the speed of diffusion. Grinding those materials down to nanoscale frees them of the diffusion limit and give them a more pseudocapacitative behavior, making them extrinsic pseudocapacitors. Chodankar et al. 2020, figure 2 shows the representative voltage-capacity curves for bulk LiCoO2, nano LiCoO2, a redox pseudocapacitor (RuO2), and a intercalation pseudocapacitor (T-Nb2O5). Asymmetric capacitors Supercapacitors can also be made with different materials and principles at the electrodes. If both of those materials use a fast, supercapacitor-type reaction (capacitance or pseudocapacitance), the result is called an asymmetric capacitor. The two electrodes have different electric potentials; when combined with proper balancing, the result is improved energy density with no loss of lifespan or current capacity. Hybrid capacitors A number of newer supercapacitors are "hybrid": only one electrode uses a fast reaction (capacitance or pseudocapacitance), the other using a more "battery-like" (slower but higher-capacity) material. For example, an EDLC anode can be combined with an activated carbon–Ni(OH)2 cathode, the latter being a slow faradaic material. The and profiles of a hybrid capacitor have a shape between that of a battery and an SC, more similar to that of an SC. Hybrid capacitors have much higher energy density, but have inferior cycle life and current capacity owing to the slower electrode. Potential distribution Conventional capacitors (also known as electrostatic capacitors), such as ceramic capacitors and film capacitors, consist of two electrodes separated by a dielectric material. When charged, the energy is stored in a static electric field that permeates the dielectric between the electrodes. The total energy increases with the amount of stored charge, which in turn correlates linearly with the potential (voltage) between the plates. The maximum potential difference between the plates (the maximal voltage) is limited by the dielectric's breakdown field strength. The same static storage also applies for electrolytic capacitors in which most of the potential decreases over the anode's thin oxide layer. The somewhat resistive liquid electrolyte (cathode) accounts for a small decrease of potential for "wet" electrolytic capacitors, while electrolytic capacitors with solid conductive polymer electrolyte this voltage drop is negligible. In contrast, electrochemical capacitors (supercapacitors) consists of two electrodes separated by an ion-permeable membrane (separator) and electrically connected via an electrolyte. Energy storage occurs within the double-layers of both electrodes as a mixture of a double-layer capacitance and pseudocapacitance. 
When both electrodes have approximately the same resistance (internal resistance), the potential of the capacitor decreases symmetrically over both double-layers, with an additional voltage drop across the equivalent series resistance (ESR) of the electrolyte. For asymmetrical supercapacitors like hybrid capacitors, the voltage drop between the electrodes can be asymmetrical. The maximum potential across the capacitor (the maximal voltage) is limited by the electrolyte decomposition voltage. Both electrostatic and electrochemical energy storage in supercapacitors are linear with respect to the stored charge, just as in conventional capacitors. The voltage between the capacitor terminals is linear with respect to the amount of stored charge. Such a linear voltage gradient differs from rechargeable electrochemical batteries, in which the voltage between the terminals remains largely independent of the amount of stored energy, providing a relatively constant voltage. Comparison with other storage technologies Supercapacitors compete with electrolytic capacitors and rechargeable batteries, especially lithium-ion batteries. The following table compares the major parameters of the three main supercapacitor families with electrolytic capacitors and batteries. Electrolytic capacitors feature nearly unlimited charge/discharge cycles, high dielectric strength (up to 550 V) and good frequency response as alternating current (AC) reactance in the lower frequency range. Supercapacitors can store 10 to 100 times more energy than electrolytic capacitors, but they do not support AC applications. With regard to rechargeable batteries, supercapacitors feature higher peak currents, low cost per cycle, no danger of overcharging, good reversibility, non-corrosive electrolyte and low material toxicity. Batteries offer lower purchase cost and stable voltage under discharge, but require complex electronic control and switching equipment, with consequent energy loss and a spark hazard in the event of a short circuit. Styles Supercapacitors are made in different styles, such as flat with a single pair of electrodes, wound in a cylindrical case, or stacked in a rectangular case. Because they cover a broad range of capacitance values, the size of the cases can vary. Supercapacitors are constructed with two metal foils (current collectors), each coated with an electrode material such as activated carbon, which serve as the power connection between the electrode material and the external terminals of the capacitor. A defining property of the electrode material is its very large surface area. For example, the activated carbon can be electrochemically etched, so that the surface area of the material is about 100,000 times greater than that of a smooth surface. The electrodes are kept apart by an ion-permeable membrane (separator) used as an insulator to protect the electrodes against short circuits. This construction is subsequently rolled or folded into a cylindrical or rectangular shape and can be stacked in an aluminum can or an adaptable rectangular housing. The cell is then impregnated with a liquid or viscous electrolyte of organic or aqueous type. The electrolyte, an ionic conductor, enters the pores of the electrodes and serves as the conductive connection between the electrodes across the separator. Finally, the housing is hermetically sealed to ensure stable behavior over the specified lifetime. 
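The "10 to 100 times more energy" comparison above can be sanity-checked with the basic capacitor energy formula. The sketch below uses Python and two made-up but representative example parts (a 100 μF, 450 V aluminum electrolytic capacitor and a 100 F, 2.7 V supercapacitor); the component values are illustrative assumptions, not data from specific products.

```python
# Rough comparison of stored energy, E = 1/2 * C * V^2, for two example parts.
# Both parts are hypothetical, chosen only to illustrate the order of magnitude.

def stored_energy_j(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v**2

e_electrolytic = stored_energy_j(100e-6, 450.0)   # ~10 J for a large electrolytic capacitor
e_supercap = stored_energy_j(100.0, 2.7)          # ~365 J for a 100 F supercapacitor

print(e_electrolytic, e_supercap, e_supercap / e_electrolytic)  # ratio ~36, within the 10-100x range
```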
Types Electrical energy is stored in supercapacitors via two storage principles, static double-layer capacitance and electrochemical pseudocapacitance; and the distribution of the two types of capacitance depends on the material and structure of the electrodes. There are three types of supercapacitors based on storage principle: Double-layer capacitors (EDLCs): with activated carbon electrodes or derivatives with much higher electrostatic double-layer capacitance than electrochemical pseudocapacitance Pseudocapacitors: with transition metal oxide or conducting polymer electrodes with a high electrochemical pseudocapacitance Hybrid capacitors: with asymmetric electrodes, one of which exhibits mostly electrostatic and the other mostly electrochemical capacitance, such as lithium-ion capacitors Because double-layer capacitance and pseudocapacitance both contribute inseparably to the total capacitance value of an electrochemical capacitor, a correct description of these capacitors only can be given under the generic term. The concepts of supercapattery and supercabattery have been recently proposed to better represent those hybrid devices that behave more like the supercapacitor and the rechargeable battery, respectively. The capacitance value of a supercapacitor is determined by two storage principles: Double-layer capacitance – electrostatic storage of the electrical energy achieved by separation of charge in a Helmholtz double layer at the interface between the surface of a conductor electrode and an electrolytic solution electrolyte. The separation of charge distance in a double-layer is on the order of a few ångströms (0.3–0.8 nm) and is static in origin. Pseudocapacitance – Electrochemical storage of the electrical energy, achieved by redox reactions, electrosorption or intercalation on the surface of the electrode by specifically adsorbed ions, that results in a reversible faradaic charge-transfer on the electrode. Double-layer capacitance and pseudocapacitance both contribute inseparably to the total capacitance value of a supercapacitor. However, the ratio of the two can vary greatly, depending on the design of the electrodes and the composition of the electrolyte. Pseudocapacitance can increase the capacitance value by as much as a factor of ten over that of the double-layer by itself. Electric double-layer capacitors (EDLC) are electrochemical capacitors in which energy storage predominantly is achieved by double-layer capacitance. In the past, all electrochemical capacitors were called "double-layer capacitors". Contemporary usage sees double-layer capacitors, together with pseudocapacitors, as part of a larger family of electrochemical capacitors called supercapacitors. They are also known as ultracapacitors. Materials The properties of supercapacitors come from the interaction of their internal materials. Especially, the combination of electrode material and type of electrolyte determine the functionality and thermal and electrical characteristics of the capacitors. Electrodes Supercapacitor electrodes are generally thin coatings applied and electrically connected to a conductive, metallic current collector. Electrodes must have good conductivity, high temperature stability, long-term chemical stability (inertness), high corrosion resistance and high surface areas per unit volume and mass. Other requirements include environmental friendliness and low cost. 
The amount of double-layer as well as pseudocapacitance stored per unit voltage in a supercapacitor is predominantly a function of the electrode surface area. Therefore, supercapacitor electrodes are typically made of porous, spongy material with an extraordinarily high specific surface area, such as activated carbon. Additionally, the ability of the electrode material to perform faradaic charge transfers enhances the total capacitance. Generally, the smaller the electrode's pores, the greater the capacitance and specific energy. However, smaller pores increase equivalent series resistance (ESR) and decrease specific power. Applications with high peak currents require larger pores and low internal losses, while applications requiring high specific energy need small pores. Electrodes for EDLCs The most commonly used electrode material for supercapacitors is carbon in various manifestations such as activated carbon (AC), carbon fibre-cloth (AFC), carbide-derived carbon (CDC), carbon aerogel, graphite (graphene), graphane and carbon nanotubes (CNTs). Carbon-based electrodes exhibit predominantly static double-layer capacitance, even though a small amount of pseudocapacitance may also be present depending on the pore-size distribution. Pore sizes in carbons typically range from micropores (less than 2 nm) to mesopores (2–50 nm), but only micropores (<2 nm) contribute to pseudocapacitance. As pore size approaches the solvation shell size, solvent molecules are excluded and only unsolvated ions fill the pores (even for large ions), increasing ionic packing density and storage capability by faradaic intercalation. Activated carbon Activated carbon was the first material chosen for EDLC electrodes. Even though its electrical conductivity is approximately 0.003% that of metals (1,250 to 2,000 S/m), it is sufficient for supercapacitors. Activated carbon is an extremely porous form of carbon with a high specific surface area — a common approximation is that 1 gram (0.035 oz) (a pencil-eraser-sized amount) has a surface area of roughly 1,000 to 3,000 square metres — about the size of 4 to 12 tennis courts. The bulk form used in electrodes is low-density with many pores, giving high double-layer capacitance. Solid activated carbon, also termed consolidated amorphous carbon (CAC), is the most used electrode material for supercapacitors and may be cheaper than other carbon derivatives. It is produced from activated carbon powder pressed into the desired shape, forming a block with a wide distribution of pore sizes. An electrode with a surface area of about 1000 m2/g results in a typical double-layer capacitance of about 10 μF/cm2 and a specific capacitance of 100 F/g. Virtually all commercial supercapacitors use powdered activated carbon made from coconut shells. Coconut shells produce activated carbon with more micropores than does charcoal made from wood. Activated carbon fibres Activated carbon fibres (ACF) are produced from activated carbon and have a typical diameter of 10 μm. They can have micropores with a very narrow pore-size distribution that can be readily controlled. The surface area of ACF woven into a textile is about . Advantages of ACF electrodes include low electrical resistance along the fibre axis and good contact to the collector. As with activated carbon, ACF electrodes exhibit predominantly double-layer capacitance with a small amount of pseudocapacitance due to their micropores. 
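The 100 F/g figure quoted above follows directly from multiplying surface area by area-specific double-layer capacitance. The short Python sketch below reproduces that back-of-the-envelope estimate; real electrodes deviate from it because not all of the measured surface area is accessible to the electrolyte.

```python
# Rough specific capacitance of an activated-carbon electrode from the figures quoted above.

surface_area_m2_per_g = 1000.0        # specific surface area (m^2/g)
dl_capacitance_uf_per_cm2 = 10.0      # double-layer capacitance per unit area (uF/cm^2)

# unit conversions: 1 m^2 = 1e4 cm^2, 1 uF = 1e-6 F
specific_capacitance_f_per_g = surface_area_m2_per_g * 1e4 * dl_capacitance_uf_per_cm2 * 1e-6

print(f"{specific_capacitance_f_per_g:.0f} F/g")   # -> 100 F/g, matching the text
```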
Carbon aerogel Carbon aerogel is a highly porous, synthetic, ultralight material derived from an organic gel in which the liquid component of the gel has been replaced with a gas. Aerogel electrodes are made via pyrolysis of resorcinol-formaldehyde aerogels and are more conductive than most activated carbons. They enable thin and mechanically stable electrodes with a thickness in the range of several hundred micrometres (μm) and with uniform pore size. Aerogel electrodes also provide mechanical and vibration stability for supercapacitors used in high-vibration environments. Researchers have created a carbon aerogel electrode with gravimetric surface areas of about 400–1200 m2/g and a volumetric capacitance of 104 F/cm3, yielding high specific energy and specific power. Standard aerogel electrodes exhibit predominantly double-layer capacitance. Aerogel electrodes that incorporate composite material can add a high amount of pseudocapacitance. Carbide-derived carbon Carbide-derived carbon (CDC), also known as tunable nanoporous carbon, is a family of carbon materials derived from carbide precursors, such as binary silicon carbide and titanium carbide, that are transformed into pure carbon via physical (e.g., thermal decomposition) or chemical (e.g., halogenation) processes. Carbide-derived carbons can exhibit high surface area and tunable pore diameters (from micropores to mesopores) to maximize ion confinement, increasing pseudocapacitance by faradaic adsorption treatment. CDC electrodes with tailored pore design offer as much as 75% greater specific energy than conventional activated carbons. A CDC supercapacitor has offered a specific energy of 10.1 Wh/kg, 3,500 F capacitance and over one million charge–discharge cycles. Graphene Graphene is a one-atom-thick sheet of graphite, with atoms arranged in a regular hexagonal pattern, also called "nanocomposite paper". Graphene has a theoretical specific surface area of 2630 m2/g, which can theoretically lead to a capacitance of 550 F/g. In addition, an advantage of graphene over activated carbon is its higher electrical conductivity. A newer development uses graphene sheets directly as electrodes without collectors for portable applications. In one embodiment, a graphene-based supercapacitor uses curved graphene sheets that do not stack face-to-face, forming mesopores that are accessible to and wettable by ionic electrolytes at voltages up to 4 V. A specific energy equal to that of a conventional nickel–metal hydride battery is obtained at room temperature, but with 100–1000 times greater specific power. The two-dimensional structure of graphene improves charging and discharging. Charge carriers in vertically oriented sheets can quickly migrate into or out of the deeper structures of the electrode, thus increasing currents. Such capacitors may be suitable for 100/120 Hz filter applications, which are unreachable for supercapacitors using other carbon materials. Carbon nanotubes Carbon nanotubes (CNTs), also called buckytubes, are carbon molecules with a cylindrical nanostructure. They have a hollow structure with walls formed by one-atom-thick sheets of graphite. These sheets are rolled at specific and discrete ("chiral") angles, and the combination of chiral angle and radius controls properties such as electrical conductivity, electrolyte wettability and ion access. Nanotubes are categorized as single-walled nanotubes (SWNTs) or multi-walled nanotubes (MWNTs). 
The latter have one or more outer tubes successively enveloping an SWNT, much like Russian matryoshka dolls. SWNTs have diameters ranging between 1 and 3 nm. MWNTs have thicker coaxial walls, separated by spacing (0.34 nm) that is close to graphene's interlayer distance. Nanotubes can grow vertically on the collector substrate, such as a silicon wafer. Typical lengths are 20 to 100 μm. Carbon nanotubes can greatly improve capacitor performance, due to the highly wettable surface area and high conductivity. An SWNT-based supercapacitor with aqueous electrolyte was systematically studied at the University of Delaware in Prof. Bingqing Wei's group. Li et al., for the first time, discovered that the ion-size effect and the electrode–electrolyte wettability are the dominant factors affecting the electrochemical behavior of flexible SWCNT supercapacitors in different 1-molar aqueous electrolytes with different anions and cations. The experimental results also suggested that, for flexible supercapacitors, applying sufficient pressure between the two electrodes improves aqueous-electrolyte CNT supercapacitors. CNTs can store about the same charge as activated carbon per unit surface area, but the nanotubes' surface is arranged in a regular pattern, providing greater wettability. SWNTs have a high theoretical specific surface area of 1315 m2/g, while that for MWNTs is lower and is determined by the diameter of the tubes and degree of nesting, compared with a surface area of about 3000 m2/g for activated carbons. Nevertheless, CNTs have higher capacitance than activated carbon electrodes, e.g., 102 F/g for MWNTs and 180 F/g for SWNTs. MWNTs have mesopores that allow for easy access of ions at the electrode–electrolyte interface. As the pore size approaches the size of the ion solvation shell, the solvent molecules are partially stripped, resulting in larger ionic packing density and increased faradaic storage capability. However, the considerable volume change during repeated intercalation and depletion decreases their mechanical stability. To this end, research to increase surface area, mechanical strength, electrical conductivity and chemical stability is ongoing. Electrodes for pseudocapacitors MnO2 and RuO2 are typical materials used as electrodes for pseudocapacitors, since they have the electrochemical signature of a capacitive electrode (a linear dependence of current on voltage) as well as exhibiting faradaic behavior. Additionally, the charge storage originates from electron-transfer mechanisms rather than accumulation of ions in the electrochemical double layer. Pseudocapacitors are created through faradaic redox reactions that occur within the active electrode materials. Research has focused more on transition-metal oxides such as MnO2, since transition-metal oxides have a lower cost compared to noble metal oxides such as RuO2. Moreover, the charge storage mechanisms of transition-metal oxides are based predominantly on pseudocapacitance. Two mechanisms of MnO2 charge storage behavior have been introduced. The first mechanism involves the intercalation of protons (H+) or alkali metal cations (C+) in the bulk of the material upon reduction, followed by deintercalation upon oxidation. MnO2 + H+ (C+) + e− ⇌ MnOOH(C) The second mechanism is based on the surface adsorption of electrolyte cations on MnO2. 
(MnO2)surface + C+ + e− ⇌ (MnO2− C+)surface Not every material that exhibits faradaic behavior can be used as an electrode for pseudocapacitors; Ni(OH)2, for example, is a battery-type electrode (non-linear dependence of current on voltage). Metal oxides Brian Evans Conway's research described electrodes of transition metal oxides that exhibited high amounts of pseudocapacitance. Oxides of transition metals including ruthenium (), iridium (), iron (), manganese () or sulfides such as titanium sulfide () alone or in combination generate strong faradaic electron-transferring reactions combined with low resistance. Ruthenium dioxide in combination with electrolyte provides a specific capacitance of 720 F/g and a high specific energy of 26.7 Wh/kg (). Charge/discharge takes place over a window of about 1.2 V per electrode. This pseudocapacitance of about 720 F/g is roughly 100 times higher than the double-layer capacitance of activated carbon electrodes. These transition metal electrodes offer excellent reversibility, with several hundred thousand cycles. However, ruthenium is expensive, and the 2.4 V voltage window for this capacitor limits its use to military and space applications. Das et al. reported the highest capacitance value (1715 F/g) for a ruthenium-oxide-based supercapacitor with ruthenium oxide electrodeposited onto a porous single-walled carbon nanotube film electrode. This specific capacitance of 1715 F/g closely approaches the predicted theoretical maximum capacitance of 2000 F/g. In 2014, a supercapacitor anchored on a graphene foam electrode delivered a specific capacitance of 502.78 F/g and an areal capacitance of 1.11 F/cm2, leading to a specific energy of 39.28 Wh/kg and specific power of 128.01 kW/kg over 8,000 cycles with constant performance. The device was a three-dimensional (3D) sub-5 nm hydrous ruthenium-anchored graphene and carbon nanotube (CNT) hybrid foam (RGM) architecture. The graphene foam was conformally covered with hybrid networks of nanoparticles and anchored CNTs. Less expensive oxides of iron, vanadium, nickel and cobalt have been tested in aqueous electrolytes, but none has been investigated as much as manganese dioxide (). However, none of these oxides are in commercial use. Conductive polymers Another approach uses electron-conducting polymers as pseudocapacitive material. Although mechanically weak, conductive polymers have high conductivity, resulting in a low ESR and a relatively high capacitance. Such conducting polymers include polyaniline, polythiophene, polypyrrole and polyacetylene. Such electrodes also employ electrochemical doping or dedoping of the polymers with anions and cations. Electrodes made from, or coated with, conductive polymers have costs comparable to carbon electrodes. Conducting polymer electrodes generally suffer from limited cycling stability. However, polyacene electrodes provide up to 10,000 cycles, much better than batteries. Electrodes for hybrid capacitors All commercial hybrid supercapacitors are asymmetric. They combine an electrode with a high amount of pseudocapacitance with an electrode with a high amount of double-layer capacitance. In such systems the faradaic pseudocapacitive electrode, with its higher capacitance, provides high specific energy, while the non-faradaic EDLC electrode enables high specific power. 
An advantage of hybrid-type supercapacitors compared with symmetrical EDLCs is their higher specific capacitance value as well as their higher rated voltage and, correspondingly, their higher specific energy. Composite electrodes Composite electrodes for hybrid-type supercapacitors are constructed from carbon-based material with incorporated or deposited pseudocapacitive active materials like metal oxides and conducting polymers. Most research on supercapacitors explores composite electrodes. CNTs give a backbone for a homogeneous distribution of metal oxide or electrically conducting polymers (ECPs), producing good pseudocapacitance and good double-layer capacitance. These electrodes achieve higher capacitances than either pure carbon or pure metal oxide or polymer-based electrodes. This is attributed to the accessibility of the nanotubes' tangled mat structure, which allows a uniform coating of pseudocapacitive materials and three-dimensional charge distribution. The process to anchor pseudocapacitive materials usually uses a hydrothermal process. However, researchers Li et al. from the University of Delaware found a facile and scalable approach to precipitate MnO2 on an SWNT film to make an organic-electrolyte based supercapacitor. Another way to enhance CNT electrodes is by doping with a pseudocapacitive dopant, as in lithium-ion capacitors. In this case the relatively small lithium atoms intercalate between the layers of carbon. The anode is made of lithium-doped carbon, which enables a lower negative potential with a cathode made of activated carbon. This results in a larger voltage of 3.8–4 V that prevents electrolyte oxidation. As of 2007 they had achieved a capacitance of 550 F/g and reached a specific energy of up to 14 Wh/kg. Battery-type electrodes Rechargeable battery electrodes influenced the development of electrodes for new hybrid-type supercapacitor electrodes, as for lithium-ion capacitors. Combined with a carbon EDLC electrode in an asymmetric construction, this configuration offers higher specific energy than typical supercapacitors, together with higher specific power, longer cycle life and faster charging and recharging times than batteries. Asymmetric electrodes (pseudo/EDLC) Recently some asymmetric hybrid supercapacitors were developed in which the positive electrode was based on a real pseudocapacitive metal oxide electrode (not a composite electrode), and the negative electrode on an EDLC activated carbon electrode. Asymmetric supercapacitors (ASCs) are promising candidates for high-performance supercapacitors because their wide operating potential window can markedly enhance capacitive behavior. An advantage of this type of supercapacitor is its higher voltage and, correspondingly, its higher specific energy (up to 10–20 Wh/kg (36–72 kJ/kg)), along with good cycling stability. For example, researchers used novel skutterudite Ni–CoP3 nanosheets as positive electrodes with activated carbon (AC) as negative electrodes to fabricate an asymmetric supercapacitor (ASC). It exhibits a high energy density of 89.6 Wh/kg at 796 W/kg and a stability of 93% after 10,000 cycles, making it a promising next-generation electrode candidate. Also, carbon nanofibers/poly(3,4-ethylenedioxythiophene)/manganese oxide (f-CNFs/PEDOT/MnO2) electrodes were used as positive electrodes with AC as negative electrodes. They showed a high specific energy of 49.4 Wh/kg and good cycling stability (81.06% retention after 8,000 cycles). 
In addition, many kinds of nanocomposites are being studied as electrodes, such as NiCo2S4@NiO and MgCo2O4@MnO2. For example, a Fe-SnO2@CeO2 nanocomposite electrode can provide a specific energy of 32.2 Wh/kg and a specific power of 747 W/kg. The device exhibited a capacitance retention of 85.05% over 5,000 cycles of operation. As far as is known, no commercially offered supercapacitors with this kind of asymmetric electrode are on the market. Electrolytes Electrolytes consist of a solvent and dissolved chemicals that dissociate into positive cations and negative anions, making the electrolyte electrically conductive. The more ions the electrolyte contains, the better its conductivity. In supercapacitors, electrolytes are the electrically conductive connection between the two electrodes. Additionally, in supercapacitors the electrolyte provides the molecules for the separating monolayer in the Helmholtz double-layer and delivers the ions for pseudocapacitance. The electrolyte determines the capacitor's characteristics: its operating voltage, temperature range, ESR and capacitance. With the same activated carbon electrode, an aqueous electrolyte achieves capacitance values of 160 F/g, while an organic electrolyte achieves only 100 F/g. The electrolyte must be chemically inert and must not chemically attack the other materials in the capacitor, to ensure long-term stable behavior of the capacitor's electrical parameters. The electrolyte's viscosity must be low enough to wet the porous, sponge-like structure of the electrodes. An ideal electrolyte does not exist, forcing a compromise between performance and other requirements. Water is a relatively good solvent for inorganic chemicals. Treated with acids such as sulfuric acid (), alkalis such as potassium hydroxide (KOH), or salts such as quaternary phosphonium salts, sodium perchlorate (), lithium perchlorate () or lithium hexafluoroarsenate (), water offers relatively high conductivity values of about 100 to 1000 mS/cm. Aqueous electrolytes have a dissociation voltage of 1.15 V per electrode (2.3 V capacitor voltage) and a relatively low operating temperature range. They are used in supercapacitors with low specific energy and high specific power. Electrolytes with organic solvents such as acetonitrile, propylene carbonate, tetrahydrofuran, diethyl carbonate, γ-butyrolactone and solutions with quaternary ammonium salts or alkyl ammonium salts such as tetraethylammonium tetrafluoroborate () or triethyl (methyl) tetrafluoroborate () are more expensive than aqueous electrolytes, but they have a higher dissociation voltage of typically 1.35 V per electrode (2.7 V capacitor voltage) and a higher temperature range. The lower electrical conductivity of organic solvents (10 to 60 mS/cm) leads to a lower specific power, but, since the specific energy increases with the square of the voltage, a higher specific energy. Ionic electrolytes consist of liquid salts that can be stable in a wider electrochemical window, enabling capacitor voltages above 3.5 V. Ionic electrolytes typically have an ionic conductivity of a few mS/cm, lower than aqueous or organic electrolytes. Separators Separators have to physically separate the two electrodes to prevent a short circuit by direct contact. They can be very thin (a few hundredths of a millimeter) and must be very porous to the conducting ions to minimize ESR. Furthermore, separators must be chemically inert to protect the electrolyte's stability and conductivity. Inexpensive components use open capacitor papers. 
More sophisticated designs use nonwoven porous polymeric films like polyacrylonitrile or Kapton, woven glass fibers or porous woven ceramic fibres. Collectors and housing Current collectors connect the electrodes to the capacitor's terminals. The collector is either sprayed onto the electrode or is a metal foil. They must be able to distribute peak currents of up to 100 A. If the housing is made out of a metal (typically aluminum), the collectors should be made from the same material to avoid forming a corrosive galvanic cell. Electrical parameters Capacitance Capacitance values for commercial capacitors are specified as "rated capacitance CR". This is the value for which the capacitor has been designed. The value for an actual component must be within the limits given by the specified tolerance. Typical values are in the range of farads (F), three to six orders of magnitude larger than those of electrolytic capacitors. The capacitance value results from the energy W (expressed in joules) of a capacitor charged via a DC voltage VDC, according to C = 2·W/VDC². This value is also called the "DC capacitance". Measurement Conventional capacitors are normally measured with a small AC voltage (0.5 V) and a frequency of 100 Hz or 1 kHz, depending on the capacitor type. The AC capacitance measurement offers fast results, important for industrial production lines. The capacitance value of a supercapacitor depends strongly on the measurement frequency, which is related to the porous electrode structure and the electrolyte's limited ion mobility. Even at a low frequency of 10 Hz, the measured capacitance value drops from 100 to 20 percent of the DC capacitance value. This extraordinarily strong frequency dependence can be explained by the different distances the ions have to move in the electrode's pores. The area at the beginning of the pores can be easily accessed by the ions; this short distance is accompanied by low electrical resistance. The greater the distance the ions have to cover, the higher the resistance. This phenomenon can be described with a series circuit of cascaded RC (resistor/capacitor) elements with serial RC time constants. These result in delayed current flow, reducing the total electrode surface area that can be covered with ions if polarity changes – capacitance decreases with increasing AC frequency. Thus, the total capacitance is reached only after longer measuring times. Because of the very strong frequency dependence of the capacitance, this electrical parameter has to be measured with a special constant-current charge and discharge measurement, defined in IEC standards 62391-1 and -2. Measurement starts with charging the capacitor: the voltage is applied, and after the constant current/constant voltage power supply has reached the rated voltage, the capacitor must be charged for 30 minutes. Next, the capacitor has to be discharged with a constant discharge current Idischarge. Then the times t1 and t2, at which the voltage drops from 80% (V1) to 40% (V2) of the rated voltage, are measured. The capacitance value is calculated as: C = Idischarge · (t2 − t1) / (V1 − V2). The value of the discharge current is determined by the application. The IEC standard defines four classes: Memory backup, discharge current in mA = 1 • C (F) Energy storage, discharge current in mA = 0.4 • C (F) • V (V) Power, discharge current in mA = 4 • C (F) • V (V) Instantaneous power, discharge current in mA = 40 • C (F) • V (V) The measurement methods employed by individual manufacturers are mainly comparable to the standardized methods. 
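A minimal Python sketch of the constant-current calculation just described is shown below. The numbers are made-up illustrative values, not measurements of a real component, and the procedure is reduced to the single formula from the IEC 62391-1 method.

```python
# IEC 62391-1 style DC capacitance from a constant-current discharge:
# C = I * (t2 - t1) / (V1 - V2), with V1 = 80 % and V2 = 40 % of the rated voltage.

def dc_capacitance(i_discharge_a, t1_s, t2_s, v1_v, v2_v):
    return i_discharge_a * (t2_s - t1_s) / (v1_v - v2_v)

u_rated = 2.7                         # rated voltage (V), typical for organic electrolyte
v1, v2 = 0.8 * u_rated, 0.4 * u_rated
i_discharge = 1.0                     # constant discharge current (A), illustrative
t1, t2 = 10.0, 118.0                  # times (s) at which the voltage passes V1 and V2

print(f"C = {dc_capacitance(i_discharge, t1, t2, v1, v2):.0f} F")  # -> C = 100 F
```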
The standardized measuring method is too time-consuming for manufacturers to use during production for each individual component. For industrially produced capacitors, the capacitance value is instead measured with a faster, low-frequency AC voltage, and a correlation factor is used to compute the rated capacitance. This frequency dependence affects capacitor operation. Rapid charge and discharge cycles mean that neither the rated capacitance value nor the specific energy is available. In this case the rated capacitance value is recalculated for each application condition. The time t a supercapacitor can deliver a constant current I can be calculated as: t = C · (Ucharge − Umin) / I, as the capacitor voltage decreases from Ucharge down to Umin. If the application needs a constant power P for a certain time t, this can be calculated as: t = C · (Ucharge² − Umin²) / (2 · P), wherein the capacitor voltage likewise decreases from Ucharge down to Umin. Operating voltage Supercapacitors are low-voltage components. Safe operation requires that the voltage remain within specified limits. The rated voltage UR is the maximum DC voltage or peak pulse voltage that may be applied continuously and remain within the specified temperature range. Capacitors should never be subjected to voltages continuously in excess of the rated voltage. The rated voltage includes a safety margin against the electrolyte's breakdown voltage, at which the electrolyte decomposes. The breakdown voltage decomposes the separating solvent molecules in the Helmholtz double-layer; for example, water splits into hydrogen and oxygen. The solvent molecules then cannot separate the electrical charges from each other. Voltages higher than the rated voltage cause hydrogen gas formation or a short circuit. Standard supercapacitors with aqueous electrolyte are normally specified with a rated voltage of 2.1 to 2.3 V, and capacitors with organic solvents with 2.5 to 2.7 V. Lithium-ion capacitors with doped electrodes may reach a rated voltage of 3.8 to 4 V, but have a lower voltage limit of about 2.2 V. Supercapacitors with ionic electrolytes can exceed an operating voltage of 3.5 V. Operating supercapacitors below the rated voltage improves the long-term behavior of the electrical parameters. Capacitance values and internal resistance during cycling are more stable, and lifetime and the number of charge/discharge cycles may be extended. Higher application voltages require connecting cells in series. Since each component has a slight difference in capacitance value and ESR, it is necessary to actively or passively balance them to stabilize the applied voltage. Passive balancing employs resistors in parallel with the supercapacitors. Active balancing may include electronic voltage management above a threshold that varies the current. Internal resistance Charging/discharging a supercapacitor involves the movement of charge carriers (ions) in the electrolyte across the separator to the electrodes and into their porous structure. Losses occur during this movement that can be measured as the internal DC resistance. With the electrical model of cascaded, series-connected RC (resistor/capacitor) elements in the electrode pores, the internal resistance increases with the increasing penetration depth of the charge carriers into the pores. The internal DC resistance is time-dependent and increases during charge/discharge. In applications, often only the switch-on and switch-off range is of interest. The internal resistance Ri can be calculated from the voltage drop ΔV2 at the time of discharge, starting with a constant discharge current Idischarge. 
It is obtained from the intersection of the auxiliary line extended from the straight part of the discharge curve and the time base at the moment discharge starts (see the accompanying figure). Resistance can be calculated by: Ri = ΔV2 / Idischarge. The discharge current Idischarge for the measurement of internal resistance can be taken from the classification according to IEC 62391-1. This internal DC resistance Ri should not be confused with the internal AC resistance, called equivalent series resistance (ESR), normally specified for capacitors. It is measured at 1 kHz. ESR is much smaller than DC resistance. ESR is not relevant for calculating supercapacitor inrush currents or other peak currents. Ri determines several supercapacitor properties. It limits the charge and discharge peak currents as well as charge/discharge times. Ri and the capacitance C result in the time constant τ = Ri · C. This time constant determines the charge/discharge time. A 100 F capacitor with an internal resistance of 30 mΩ, for example, has a time constant of 0.03 • 100 = 3 s. After 3 seconds of charging with a current limited only by internal resistance, the capacitor has 63.2% of full charge (or is discharged to 36.8% of full charge). Standard capacitors with constant internal resistance fully charge during about 5 τ. Since internal resistance increases with charge/discharge, actual times cannot be calculated with this formula. Thus, charge/discharge time depends on specific individual construction details. Current load and cycle stability Because supercapacitors operate without forming chemical bonds, current loads, including charge, discharge and peak currents, are not limited by reaction constraints. Current load and cycle stability can be much higher than for rechargeable batteries. Current loads are limited only by internal resistance, which may be substantially lower than for batteries. Internal resistance "Ri" and charge/discharge currents or peak currents "I" generate internal heat losses "Ploss" according to: Ploss = Ri · I². This heat must be released and distributed to the ambient environment to maintain operating temperatures below the specified maximum temperature. Heat generally defines capacitor lifetime due to electrolyte diffusion. The temperature rise caused by current loads should be smaller than 5 to 10 K at maximum ambient temperature (so that it has only a minor influence on expected lifetime). For that reason the specified charge and discharge currents for frequent cycling are determined by the internal resistance. The specified cycle parameters under maximal conditions include charge and discharge current, pulse duration and frequency. They are specified for a defined temperature range and over the full voltage range for a defined lifetime. They can differ enormously depending on the combination of electrode porosity, pore size and electrolyte. Generally, a lower current load increases capacitor life and increases the number of cycles. This can be achieved either by a lower voltage range or by slower charging and discharging. Supercapacitors (except those with polymer electrodes) can potentially support more than one million charge/discharge cycles without substantial capacity drops or internal resistance increases. Besides the higher current-handling capability, this is the second great advantage of supercapacitors over batteries. The stability results from the dual electrostatic and electrochemical storage principles. The specified charge and discharge currents can be significantly exceeded by lowering the frequency or by single pulses. 
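The time-constant and heat-loss relations above can be illustrated with a short Python sketch. It reuses the 100 F / 30 mΩ example from the text; the 50 A current is an assumed illustrative load, not a datasheet value.

```python
import math

# Time constant, charge fraction, and resistive heat loss for the example cell above.

C = 100.0        # capacitance (F)
Ri = 0.030       # internal DC resistance (ohm)

tau = Ri * C                                # time constant: 3 s
soc_after_one_tau = 1.0 - math.exp(-1.0)    # ~63.2 % of full charge after one tau
soc_after_five_tau = 1.0 - math.exp(-5.0)   # ~99.3 %, i.e. effectively fully charged

I = 50.0                                    # assumed charge/discharge current (A)
P_loss = Ri * I**2                          # internal heat loss: 75 W at 50 A

print(tau, round(soc_after_one_tau, 3), round(soc_after_five_tau, 3), P_loss)
```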
Heat generated by a single pulse may be spread over the time until the next pulse occurs, to ensure a relatively small average heat increase. As a "peak power current" for power applications, supercapacitors of more than 1000 F can provide a maximum peak current of about 1000 A. Such high currents generate high thermal stress and high electromagnetic forces that can damage the electrode–collector connection, requiring robust design and construction of the capacitors. Device capacitance and resistance dependence on operating voltage and temperature Device parameters such as capacitance, initial resistance and steady-state resistance are not constant, but are variable and dependent on the device's operating voltage. Device capacitance shows a measurable increase as the operating voltage increases. For example, a 100 F device can be seen to vary by 26% from its maximum capacitance over its entire operational voltage range. Similar dependence on operating voltage is seen in steady-state resistance (Rss) and initial resistance (Ri). Device properties can also be seen to be dependent on device temperature. As the temperature of the device changes, either through operation or through varying ambient temperature, the internal properties such as capacitance and resistance will vary as well. Device capacitance is seen to increase as the operating temperature increases. Energy capacity Supercapacitors occupy the gap between high-power/low-energy electrolytic capacitors and low-power/high-energy rechargeable batteries. The energy Wmax (expressed in joules) that can be stored in a capacitor is given by the formula Wmax = ½ · C · Vmax². This formula describes the amount of energy stored and is often used to describe new research successes. However, only part of the stored energy is available to applications, because the voltage drop and the time constant over the internal resistance mean that some of the stored charge is inaccessible. The effective realized amount of energy Weff is reduced by the used voltage difference between Vmax and Vmin and can be represented as: Weff = ½ · C · (Vmax² − Vmin²). This formula also represents the energy of asymmetric-voltage components such as lithium-ion capacitors. Specific energy and specific power The amount of energy that can be stored in a capacitor per mass of that capacitor is called its specific energy. Specific energy is measured gravimetrically (per unit of mass) in watt-hours per kilogram (Wh/kg). The amount of energy that can be stored in a capacitor per volume of that capacitor is called its energy density (also called volumetric specific energy in some literature). Energy density is measured volumetrically (per unit of volume) in watt-hours per litre (Wh/L). Units of litres and dm3 can be used interchangeably. Commercial energy density varies widely, but in general ranges from around 5 to . In comparison, petrol fuel has an energy density of 32.4 MJ/L or . Commercial specific energies range from around 0.5 to . For comparison, an aluminum electrolytic capacitor stores typically 0.01 to , while a conventional lead–acid battery stores typically 30 to and modern lithium-ion batteries 100 to . Supercapacitors can therefore store 10 to 100 times more energy than electrolytic capacitors, but only one tenth as much as batteries. For reference, petrol fuel has a specific energy of 44.4 MJ/kg or . Although the specific energy of supercapacitors compares unfavorably with that of batteries, capacitors have the important advantage of high specific power. 
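The difference between stored and usable energy described above is easy to see numerically. The sketch below uses Python with an illustrative 3000 F, 2.7 V cell discharged to half its maximum voltage; the values are assumptions for demonstration, not figures for a particular product.

```python
# Stored vs. usable energy of a supercapacitor, per the formulas above.

def w_max_j(c_f, v_max):
    return 0.5 * c_f * v_max**2                 # total stored energy (J)

def w_eff_j(c_f, v_max, v_min):
    return 0.5 * c_f * (v_max**2 - v_min**2)    # energy usable between Vmax and Vmin (J)

C, Vmax, Vmin = 3000.0, 2.7, 1.35               # illustrative 3000 F cell, half-voltage cutoff

print(w_max_j(C, Vmax) / 3600.0)                # ~3.04 Wh stored
print(w_eff_j(C, Vmax, Vmin) / 3600.0)          # ~2.28 Wh usable (75 % of the stored energy)
```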
Specific power describes the speed at which energy can be delivered to the load (or, in charging the device, absorbed from the generator). The maximum power Pmax specifies the power of a theoretical rectangular single maximum current peak of a given voltage. In real circuits the current peak is not rectangular and the voltage is smaller, caused by the voltage drop, so IEC 62391–2 established a more realistic effective power Peff for supercapacitors for power applications, which is half the maximum and given by the following formulas: Pmax = V² / (4 · Ri) and Peff = V² / (8 · Ri), with V the applied voltage and Ri the internal DC resistance of the capacitor. Just like specific energy, specific power is measured either gravimetrically in kilowatts per kilogram (kW/kg, specific power) or volumetrically in kilowatts per litre (kW/L, power density). Supercapacitor specific power is typically 10 to 100 times greater than for batteries and can reach values up to 15 kW/kg. Ragone charts relate energy to power and are a valuable tool for characterizing and visualizing energy storage components. With such a diagram, the positions of specific power and specific energy of different storage technologies can easily be compared (see diagram). Lifetime Since supercapacitors do not rely on chemical changes in the electrodes (except for those with polymer electrodes), lifetimes depend mostly on the rate of evaporation of the liquid electrolyte. This evaporation is generally a function of temperature, current load, current cycle frequency and voltage. Current load and cycle frequency generate internal heat, so that the evaporation-determining temperature is the sum of the ambient temperature and the internal heating. This temperature is measurable as the core temperature in the center of a capacitor body. The higher the core temperature, the faster the evaporation, and the shorter the lifetime. Evaporation generally results in decreasing capacitance and increasing internal resistance. According to IEC/EN 62391-2, capacitance reductions of over 30%, or internal resistance exceeding four times its datasheet specification, are considered "wear-out failures", implying that the component has reached end-of-life. The capacitors are operable, but with reduced capabilities. Whether these parameter deviations have any influence on the proper functionality depends on the application of the capacitors. Such large changes of electrical parameters specified in IEC/EN 62391-2 are usually unacceptable for high-current-load applications. Components that support high current loads use much smaller limits, e.g., 20% loss of capacitance or double the internal resistance. The narrower definition is important for such applications, since heat increases linearly with increasing internal resistance, and the maximum temperature should not be exceeded. Temperatures higher than specified can destroy the capacitor. The real application lifetime of supercapacitors, also called "service life", "life expectancy" or "load life", can reach 10 to 15 years or more at room temperature. Such long periods cannot be tested by manufacturers. Hence, they specify the expected capacitor lifetime at the maximum temperature and voltage conditions. The results are specified in datasheets using the notation "tested time (hours)/max. temperature (°C)", such as "5000 h/65 °C". With this value, and expressions derived from historical data, lifetimes can be estimated for lower temperature conditions. 
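Such an extrapolation can be sketched in a few lines of Python using the "10-degrees rule" detailed in the next paragraph; the example reproduces the "5000 h/65 °C" rating quoted above and is only an estimate, since real ageing also depends on voltage and current load.

```python
# Lifetime extrapolation per the 10-degrees (Arrhenius-type) rule:
# Lx = L0 * 2**((T0 - Tx) / 10) -- life roughly doubles per 10 degC below the rated temperature.

def estimated_lifetime_h(l0_h, t0_c, tx_c):
    return l0_h * 2.0 ** ((t0_c - tx_c) / 10.0)

print(estimated_lifetime_h(5000, 65, 45))   # -> 20000.0 h, the example from the text
print(estimated_lifetime_h(5000, 65, 25))   # -> 80000.0 h at room temperature (estimate only)
```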
Datasheet lifetime specifications are tested by the manufacturers using an accelerated aging test called an "endurance test", with maximum temperature and voltage applied over a specified time. For a "zero defect" product policy, no wear-out or total failure may occur during this test. The lifetime specification from datasheets can be used to estimate the expected lifetime for a given design. The "10-degrees rule" used for electrolytic capacitors with non-solid electrolyte is used in those estimations and can be applied to supercapacitors. This rule employs the Arrhenius equation, a simple formula for the temperature dependence of reaction rates: for every 10 °C reduction in operating temperature, the estimated life doubles, i.e. Lx = L0 · 2^((T0 − Tx)/10), with: Lx = estimated lifetime, L0 = specified lifetime, T0 = upper specified capacitor temperature, Tx = actual operating temperature of the capacitor cell. Calculated with this formula, capacitors specified for 5000 h at 65 °C have an estimated lifetime of 20,000 h at 45 °C. Lifetimes also depend on the operating voltage, because the development of gas in the liquid electrolyte depends on the voltage. The lower the voltage, the smaller the gas development, and the longer the lifetime. No general formula relates voltage to lifetime. The voltage-dependent curves shown in the picture are an empirical result from one manufacturer. Life expectancy for power applications may also be limited by current load or number of cycles. This limitation has to be specified by the relevant manufacturer and is strongly type-dependent. Self-discharge Storing electrical energy in the double-layer separates the charge carriers within the pores by distances in the range of molecules. Irregularities can occur over this short distance, leading to a small exchange of charge carriers and gradual discharge. This self-discharge is called leakage current. Leakage depends on capacitance, voltage, temperature, and the chemical stability of the electrode/electrolyte combination. At room temperature, leakage is so low that it is specified as time to self-discharge in hours, days, or weeks. As an example, a 5.5 V/F Panasonic "Goldcapacitor" specifies a voltage drop at 20 °C from 5.5 V to 3 V in 600 hours (25 days or 3.6 weeks) for a double-cell capacitor. Post charge voltage relaxation It has been noticed that after an EDLC experiences a charge or discharge, the voltage will drift over time, relaxing toward its previous voltage level. The observed relaxation can occur over several hours and is likely due to long diffusion time constants of the porous electrodes within the EDLC. Polarity Since the positive and negative electrodes (or simply positrode and negatrode, respectively) of symmetric supercapacitors consist of the same material, theoretically supercapacitors have no true polarity and catastrophic failure does not normally occur. However, reverse-charging a supercapacitor lowers its capacity, so it is recommended practice to maintain the polarity resulting from the formation of the electrodes during production. Asymmetric supercapacitors are inherently polar. Pseudocapacitors and hybrid supercapacitors, which have electrochemical charge properties, may not be operated with reverse polarity, precluding their use in AC operation. However, this limitation does not apply to EDLC supercapacitors. A bar in the insulating sleeve identifies the negative terminal in a polarized component. In some literature, the terms "anode" and "cathode" are used in place of negative electrode and positive electrode. 
Using anode and cathode to describe the electrodes in supercapacitors (and also rechargeable batteries, including lithium-ion batteries) can lead to confusion, because the polarity changes depending on whether a component is considered as a generator or as a consumer of current. In electrochemistry, cathode and anode are related to reduction and oxidation reactions, respectively. However, in supercapacitors based on electric double-layer capacitance, there are no oxidation or reduction reactions on either of the two electrodes. Therefore, the concepts of cathode and anode do not apply. Comparison of selected commercial supercapacitors The range of electrodes and electrolytes available yields a variety of components suitable for diverse applications. The development of low-ohmic electrolyte systems, in combination with electrodes with high pseudocapacitance, enables many more technical solutions. The following table shows differences among capacitors of various manufacturers in capacitance range, cell voltage, internal resistance (ESR, DC or AC value) and volumetric and gravimetric specific energy. In the table, ESR refers to the component with the largest capacitance value of the respective manufacturer. Roughly, these components divide into two groups. The first group offers greater ESR values of about 20 milliohms and relatively small capacitances of 0.1 to 470 F. These are "double-layer capacitors" for memory backup or similar applications. The second group offers 100 to 10,000 F with significantly lower ESR values under 1 milliohm. These components are suitable for power applications. A correlation of some supercapacitor series of different manufacturers to the various construction features is provided in Pandolfo and Hollenkamp. In commercial double-layer capacitors, or, more specifically, EDLCs in which energy storage is predominantly achieved by double-layer capacitance, energy is stored by forming an electrical double layer of electrolyte ions on the surface of conductive electrodes. Since EDLCs are not limited by the electrochemical charge-transfer kinetics of batteries, they can charge and discharge at a much higher rate, with lifetimes of more than one million cycles. The EDLC energy density is determined by the operating voltage and the specific capacitance (farad/gram or farad/cm3) of the electrode/electrolyte system. The specific capacitance is related to the specific surface area (SSA) accessible by the electrolyte, its interfacial double-layer capacitance, and the electrode material density. Commercial EDLCs are based on two symmetric electrodes impregnated with electrolytes comprising tetraethylammonium tetrafluoroborate salts in organic solvents. Current EDLCs containing organic electrolytes operate at 2.7 V and reach specific energies around 5–8 Wh/kg and energy densities of 7 to 10 Wh/L. Graphene-based platelets with a mesoporous spacer material are a promising structure for increasing the SSA accessible to the electrolyte. Standards Supercapacitors vary sufficiently that they are rarely interchangeable, especially those with higher specific energy. Applications range from low to high peak currents, requiring standardized test protocols. Test specifications and parameter requirements are specified in the generic specification IEC/EN 62391–1, Fixed electric double layer capacitors for use in electronic equipment. 
The standard defines four application classes, according to discharge current levels: Memory backup Energy storage, mainly used for driving motors require a short time operation, Power, higher power demand for a long time operation, Instantaneous power, for applications that requires relatively high current units or peak currents ranging up to several hundreds of amperes even with a short operating time Three further standards describe special applications: IEC 62391–2, Fixed electric double-layer capacitors for use in electronic equipment - Blank detail specification - Electric double-layer capacitors for power application IEC 62576, Electric double-layer capacitors for use in hybrid electric vehicles. Test methods for electrical characteristics BS/EN 61881-3, Railway applications. Rolling stock equipment. Capacitors for power electronics. Electric double-layer capacitors Applications Supercapacitors have advantages in applications where a large amount of power is needed for a relatively short time, where a very high number of charge/discharge cycles or a longer lifetime is required. Typical applications range from milliamp currents or milliwatts of power for up to a few minutes to several amps current or several hundred kilowatts power for much shorter periods. Supercapacitors do not support alternating current (AC) applications. Consumer electronics In applications with fluctuating loads, such as laptop computers, PDAs, GPS, portable media players, hand-held devices, and photovoltaic systems, supercapacitors can stabilize the power supply. Supercapacitors deliver power for photographic flashes in digital cameras and for LED flashlights that can be charged in much shorter periods of time, e.g., 90 seconds. Some portable speakers are powered by supercapacitors. A cordless electric screwdriver with supercapacitors for energy storage has about half the run time of a comparable battery model, but can be fully charged in 90 seconds. It retains 85% of its charge after three months left idle. Power generation and distribution Grid power buffering Numerous non-linear loads, such as EV chargers, HEVs, air conditioning systems, and advanced power conversion systems cause current fluctuations and harmonics. These current differences create unwanted voltage fluctuations and therefore power oscillations on the grid. Power oscillations not only reduce the efficiency of the grid, but can cause voltage drops in the common coupling bus, and considerable frequency fluctuations throughout the entire system. To overcome this problem, supercapacitors can be implemented as an interface between the load and the grid to act as a buffer between the grid and the high pulse power drawn from the charging station. Low-power equipment power buffering Supercapacitors provide backup or emergency shutdown power to low-power equipment such as RAM, SRAM, micro-controllers and PC Cards. They are the sole power source for low energy applications such as automated meter reading (AMR) equipment or for event notification in industrial electronics. Supercapacitors buffer power to and from rechargeable batteries, mitigating the effects of short power interruptions and high current peaks. Batteries kick in only during extended interruptions, e.g., if the mains power or a fuel cell fails, which lengthens battery life. Uninterruptible power supplies (UPS) may be powered by supercapacitors, which can replace much larger banks of electrolytic capacitors. 
This combination reduces the cost per cycle, saves on replacement and maintenance costs, enables the battery to be downsized and extends battery life. Supercapacitors provide backup power for actuators in wind turbine pitch systems, so that blade pitch can be adjusted even if the main supply fails. Voltage stabilization Supercapacitors can stabilize voltage fluctuations for powerlines by acting as dampers. Wind and photovoltaic systems exhibit fluctuating supply evoked by gusting or clouds that supercapacitors can buffer within milliseconds. Micro grids Micro grids are usually powered by clean and renewable energy. Most of this energy generation, however, is not constant throughout the day and does not usually match demand. Supercapacitors can be used for micro grid storage to instantaneously inject power when the demand is high and the production dips momentarily, and to store energy in the reverse conditions. They are useful in this scenario, because micro grids are increasingly producing power in DC, and capacitors can be utilized in both DC and AC applications. Supercapacitors work best in conjunction with chemical batteries. They provide an immediate voltage buffer to compensate for quick changing power loads due to their high charge and discharge rate through an active control system. Once the voltage is buffered, it is put through an inverter to supply AC power to the grid. Supercapacitors cannot provide frequency correction in this form directly in the AC grid. Energy harvesting Supercapacitors are suitable temporary energy storage devices for energy harvesting systems. In energy harvesting systems, the energy is collected from the ambient or renewable sources, e.g., mechanical movement, light or electromagnetic fields, and converted to electrical energy in an energy storage device. For example, it was demonstrated that energy collected from RF (radio frequency) fields (using an RF antenna as an appropriate rectifier circuit) can be stored to a printed supercapacitor. The harvested energy was then used to power an application-specific integrated circuit (ASIC) for over 10 hours. Batteries The UltraBattery is a hybrid rechargeable lead-acid battery and a supercapacitor. Its cell construction contains a standard lead-acid battery positive electrode, standard sulphuric acid electrolyte and a specially prepared negative carbon-based electrode that store electrical energy with double-layer capacitance. The presence of the supercapacitor electrode alters the chemistry of the battery and affords it significant protection from sulfation in high rate partial state of charge use, which is the typical failure mode of valve regulated lead-acid cells used this way. The resulting cell performs with characteristics beyond either a lead-acid cell or a supercapacitor, with charge and discharge rates, cycle life, efficiency and performance all enhanced. Medical Supercapacitors are used in defibrillators where they can deliver 500 joules to shock the heart back into sinus rhythm. Military Supercapacitors' low internal resistance supports applications that require short-term high currents. Among the earliest uses were motor startup (cold engine starts, particularly with diesels) for large engines in tanks and submarines. Supercapacitors buffer the battery, handling short current peaks, reducing cycling and extending battery life. 
Further military applications that require high specific power are phased-array radar antennae, laser power supplies, military radio communications, avionics displays and instrumentation, backup power for airbag deployment, and GPS-guided missiles and projectiles.
Transport
A primary challenge of all transport is reducing energy consumption and emissions. Recovery of braking energy (recuperation or regenerative braking) helps with both. This requires components that can quickly store and release energy over long lifetimes at a high cycle rate. Supercapacitors fulfill these requirements and are therefore used in various applications in transportation.
Aviation
In 2005, aerospace systems and controls company Diehl Luftfahrt Elektronik GmbH chose supercapacitors to power emergency actuators for doors and evacuation slides used in airliners, including the Airbus A380.
Cars
The Toyota Yaris Hybrid-R concept car uses a supercapacitor to provide bursts of power. PSA Peugeot Citroën started using supercapacitors (circa 2014) as part of its stop-start fuel-saving system, which permits faster initial acceleration. Mazda's i-ELOOP system stores energy in a supercapacitor during deceleration and uses it to power on-board electrical systems while the engine is stopped by the stop-start system.
Rail
Supercapacitors can be used to supplement batteries in starter systems in diesel railroad locomotives with diesel–electric transmission. The capacitors capture the braking energy of a full stop, deliver the peak current for starting the diesel engine and accelerating the train, and ensure the stabilization of line voltage. Depending on the driving mode, up to 30% energy saving is possible by recovery of braking energy. Low maintenance and environmentally friendly materials encouraged the choice of supercapacitors.
Plant machinery
Mobile hybrid diesel–electric rubber-tyred gantry cranes move and stack containers within a terminal. Lifting the boxes requires large amounts of energy. Some of the energy can be recaptured while lowering the load, resulting in improved efficiency. A triple-hybrid forklift truck uses fuel cells and batteries as primary energy storage and supercapacitors to buffer power peaks by storing braking energy. They provide the forklift with peak power over 30 kW. The triple-hybrid system offers over 50% energy savings compared with diesel or fuel-cell systems. Supercapacitor-powered terminal tractors transport containers to warehouses. They provide an economical, quiet and pollution-free alternative to diesel terminal tractors.
Light rail
Supercapacitors make it possible not only to reduce energy consumption, but also to replace overhead lines in historical city areas, so preserving the city's architectural heritage. This approach may also allow many new light-rail city lines to do without overhead wires where they would be too expensive to route in full. In 2003 Mannheim adopted a prototype light-rail vehicle (LRV) using the MITRAC Energy Saver system from Bombardier Transportation to store mechanical braking energy with a roof-mounted supercapacitor unit. It contains several units, each made of 192 capacitors with 2700 F / 2.7 V interconnected in three parallel lines. This circuit results in a 518 V system with an energy content of 1.5 kWh. For acceleration when starting, this "on-board system" can provide the LRV with 600 kW and can drive the vehicle up to 1 km without overhead line supply, thus better integrating the LRV into the urban environment.
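The MITRAC figures quoted above can be sanity-checked with the standard capacitor energy relation E = ½CV². The sketch below assumes, since the text does not spell it out, that each of the three "lines" is a simple series string of 192 cells and that the three strings are connected in parallel; it is an illustration, not a description of the actual circuit.

```python
# Rough sanity check of the MITRAC Energy Saver figures quoted above. It assumes
# (not stated in the text) that each "line" is a series string of 192 cells and
# that the three lines are connected in parallel.
cell_capacitance = 2700.0   # farads per cell
cell_voltage = 2.7          # volts per cell
cells_per_string = 192
parallel_strings = 3

string_voltage = cells_per_string * cell_voltage           # 518.4 V, matching the quoted 518 V
string_capacitance = cell_capacitance / cells_per_string   # series capacitances add reciprocally
bank_capacitance = string_capacitance * parallel_strings   # parallel strings add directly

energy_joules = 0.5 * bank_capacitance * string_voltage ** 2  # E = 1/2 * C * V^2
energy_kwh = energy_joules / 3.6e6

print(f"string voltage:   {string_voltage:.1f} V")
print(f"bank capacitance: {bank_capacitance:.1f} F")
print(f"stored energy:    {energy_kwh:.2f} kWh")  # ~1.6 kWh, close to the quoted 1.5 kWh
```

Under these assumptions the computed total is about 1.6 kWh, consistent with the 1.5 kWh quoted above once the unusable energy below the minimum operating voltage is taken into account.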
Compared to conventional LRVs or metro vehicles that return energy into the grid, onboard energy storage saves up to 30% and reduces peak grid demand by up to 50%. In 2009 supercapacitors enabled LRVs to operate in the historical city area of Heidelberg without overhead wires, thus preserving the city's architectural heritage. The SC equipment cost an additional €270,000 per vehicle, which was expected to be recovered over the first 15 years of operation. The supercapacitors are charged at stop-over stations when the vehicle is at a scheduled stop. In April 2011 German regional transport operator Rhein-Neckar, responsible for Heidelberg, ordered a further 11 units. In 2009, Alstom and RATP equipped a Citadis tram with an experimental energy recovery system called "STEEM". The system is fitted with 48 roof-mounted supercapacitors to store braking energy, which gives the tramway a high level of energy autonomy by enabling it to run without overhead power lines on parts of its route, recharging while traveling over powered stop-over stations. During the tests, which took place between the Porte d'Italie and Porte de Choisy stops on line T3 of the tramway network in Paris, the tramset used an average of approximately 16% less energy. In 2012 tram operator Geneva Public Transport began tests of an LRV equipped with a prototype roof-mounted supercapacitor unit to recover braking energy. Siemens is delivering supercapacitor-enhanced light-rail transport systems that include mobile storage. Hong Kong's South Island metro line is to be equipped with two 2 MW energy storage units that are expected to reduce energy consumption by 10%. In August 2012 the CSR Zhuzhou Electric Locomotive corporation of China presented a prototype two-car light metro train equipped with a roof-mounted supercapacitor unit. The train can travel up to 2 km without wires, recharging in 30 seconds at stations via a ground-mounted pickup. The supplier claimed the trains could be used in 100 small and medium-sized Chinese cities. Seven trams (street cars) powered by supercapacitors were scheduled to go into operation in 2014 in Guangzhou, China. The supercapacitors are recharged in 30 seconds by a device positioned between the rails. That powers the tram for up to . As of 2017, Zhuzhou's supercapacitor vehicles are also used on the new Nanjing streetcar system, and are undergoing trials in Wuhan. In 2012, in Lyon (France), the SYTRAL (Lyon public transportation administration) started experimenting with a "wayside regeneration" system built by Adetel Group, which has developed its own energy saver named "NeoGreen" for LRVs, LRTs and metros. In 2014 China began using trams powered with supercapacitors that are recharged in 30 seconds by a device positioned between the rails, storing power to run the tram for up to 4 km, more than enough to reach the next stop, where the cycle can be repeated. In 2015, Alstom announced SRS, an energy storage system that charges supercapacitors on board a tram by means of ground-level conductor rails located at tram stops. This allows trams to operate without overhead lines for short distances. The system has been touted as an alternative to the company's ground-level power supply (APS) system, or can be used in conjunction with it, as in the case of the VLT network in Rio de Janeiro, Brazil, which opened in 2016. CAF also offers supercapacitors on their Urbos 3 trams in the form of their ACR system.
Buses
Maxwell Technologies, an American supercapacitor maker, claimed that more than 20,000 hybrid buses use the devices to increase acceleration, particularly in China. The first hybrid electric bus with supercapacitors in Europe came in 2001 in Nuremberg, Germany. It was MAN's so-called "Ultracapbus", and was tested in real operation in 2001/2002. The test vehicle was equipped with a diesel-electric drive in combination with supercapacitors. The system was supplied with 8 Ultracap modules of 80 V, each containing 36 components. The system worked at 640 V and could be charged/discharged at 400 A. Its energy content was 0.4 kWh with a weight of 400 kg. The supercapacitors recaptured braking energy and delivered starting energy. Fuel consumption was reduced by 10 to 15% compared to conventional diesel vehicles. Other advantages included reduction of emissions, quiet and emissions-free engine starts, lower vibration and reduced maintenance costs. In Luzern, Switzerland, an electric bus fleet called TOHYCO-Rider was tested. The supercapacitors could be recharged via an inductive contactless high-speed power charger after every transportation cycle, within 3 to 4 minutes. In early 2005 Shanghai tested a new form of electric bus called capabus that runs without powerlines (catenary-free operation) using large onboard supercapacitors that partially recharge whenever the bus is at a stop (under so-called electric umbrellas), and fully charge in the terminus. In 2006, two commercial bus routes began to use the capabuses; one of them is route 11 in Shanghai. It was estimated that the supercapacitor bus was cheaper than a lithium-ion battery bus, and one of its buses had one-tenth the energy cost of a diesel bus with lifetime fuel savings of $200,000. A hybrid electric bus called tribrid was unveiled in 2008 by the University of Glamorgan, Wales, for use as student transport. It is powered by hydrogen fuel or solar cells, batteries and ultracapacitors.
Motor racing
The FIA, a governing body for motor racing events, proposed in the Power-Train Regulation Framework for Formula 1 version 1.3 of 23 May 2007 that a new set of power train regulations be issued that includes a hybrid drive of up to 200 kW input and output power using "superbatteries" made with batteries and supercapacitors connected in parallel (KERS). About 20% tank-to-wheel efficiency could be reached using the KERS system. The Toyota TS030 Hybrid LMP1 car, a racing car developed under Le Mans Prototype rules, uses a hybrid drivetrain with supercapacitors. In the 2012 24 Hours of Le Mans race a TS030 qualified with a fastest lap only 1.055 seconds slower (3:24.842 versus 3:23.787) than the fastest car, an Audi R18 e-tron quattro with flywheel energy storage. The supercapacitor and flywheel components, whose rapid charge-discharge capabilities help in both braking and acceleration, made the Audi and Toyota hybrids the fastest cars in the race. In the 2012 Le Mans race the two competing TS030s, one of which was in the lead for part of the race, both retired for reasons unrelated to the supercapacitors. The TS030 won three of the 8 races in the 2012 FIA World Endurance Championship season. In 2014 the Toyota TS040 Hybrid used a supercapacitor to add 480 horsepower from two electric motors.
Hybrid electric vehicles
Supercapacitor/battery combinations in electric vehicles (EV) and hybrid electric vehicles (HEV) are well investigated. A 20 to 60% fuel reduction has been claimed by recovering brake energy in EVs or HEVs.
The ability of supercapacitors to charge much faster than batteries, their stable electrical properties, broader temperature range and longer lifetime are suitable, but weight, volume and especially cost mitigate those advantages. Supercapacitors' lower specific energy makes them unsuitable for use as a stand-alone energy source for long-distance driving. The fuel economy improvement of a capacitor solution compared with a battery solution is about 20% and is available only for shorter trips. For long-distance driving the advantage decreases to about 6%. Combinations of capacitors and batteries have so far run only in experimental vehicles. All automotive manufacturers of EVs or HEVs have developed prototypes that use supercapacitors instead of batteries to store braking energy in order to improve driveline efficiency. The Mazda 6 is the only production car that uses supercapacitors to recover braking energy. Branded as i-ELOOP, the regenerative braking is claimed to reduce fuel consumption by about 10%. The Russian Yo-cars Ё-mobile series was a concept and crossover hybrid vehicle working with a gasoline-driven rotary vane engine and an electric generator for driving the traction motors. A supercapacitor with relatively low capacitance recovers brake energy to power the electric motor when accelerating from a stop. Toyota's Yaris Hybrid-R concept car uses a supercapacitor to provide quick bursts of power. PSA Peugeot Citroën fits supercapacitors to some of its cars as part of its stop-start fuel-saving system, as this permits faster start-ups when the traffic lights turn green.
Gondolas
In Zell am See, Austria, an aerial lift connects the town with Schmittenhöhe mountain. The gondolas sometimes run 24 hours per day, using electricity for lights, door opening and communication. The only available time for recharging at the stations is during the brief intervals of guest loading and unloading, which is too short for battery charging. Supercapacitors offer a fast charge, a higher number of cycles and a longer lifetime than batteries. Emirates Air Line (cable car), also known as the Thames cable car, is a 1-kilometre (0.62 mi) gondola line in London, UK, that crosses the Thames from the Greenwich Peninsula to the Royal Docks. The cabins are equipped with a modern infotainment system, which is powered by supercapacitors.
Developments
Commercially available lithium-ion supercapacitors offered the highest gravimetric specific energy to date, reaching 15 Wh/kg. Research focuses on improving specific energy, reducing internal resistance, expanding the temperature range, increasing lifetimes and reducing costs. Projects include tailored-pore-size electrodes, pseudocapacitive coating or doping materials and improved electrolytes. Research into electrode materials requires measurement of individual components, such as an electrode or half-cell. By using a counterelectrode that does not affect the measurements, the characteristics of only the electrode of interest can be revealed. The specific energy and power of complete supercapacitors are only roughly one-third of those of the electrodes alone.
Market
Worldwide sales of supercapacitors are about US$400 million. The market for batteries (estimated by Frost & Sullivan) grew from US$47.5 billion (76.4% or US$36.3 billion of which was rechargeable batteries) to US$95 billion. The market for supercapacitors is still a small niche market that is not keeping pace with its larger rival.
In 2016, IDTechEx forecast sales to grow from $240 million to $2 billion by 2026, an annual increase of about 24%. Supercapacitor costs in 2006 were US$0.01 per farad or US$2.85 per kilojoule, moving in 2008 below US$0.01 per farad, and were expected to drop further in the medium term.
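The two cost figures quoted above, US dollars per farad and per kilojoule, are linked through the same E = ½CV² relation; the conversion depends on the cell's rated voltage, which is not stated here. The sketch below simply tries typical EDLC cell voltages of 2.5 V and 2.7 V as assumptions, to show that the quoted figures are mutually consistent.

```python
# Relating the cost-per-farad and cost-per-kilojoule figures quoted above via
# E = 1/2 * C * V^2. The rated cell voltage is not given in the text, so typical
# EDLC values of 2.5 V and 2.7 V are assumed here purely for illustration.
cost_per_farad = 0.01  # US$ per farad (the 2006 figure quoted above)

for rated_voltage in (2.5, 2.7):
    joules_per_farad = 0.5 * rated_voltage ** 2  # energy stored by 1 F at the rated voltage
    cost_per_kilojoule = cost_per_farad * 1000.0 / joules_per_farad
    print(f"{rated_voltage} V cell: ${cost_per_kilojoule:.2f} per kJ")
# Prints about $3.20/kJ at 2.5 V and $2.74/kJ at 2.7 V, bracketing the quoted US$2.85/kJ.
```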
Technology
Components
null
38815769
https://en.wikipedia.org/wiki/Stellar%20core
Stellar core
A stellar core is the extremely hot, dense region at the center of a star. For an ordinary main sequence star, the core region is the volume where the temperature and pressure conditions allow for energy production through thermonuclear fusion of hydrogen into helium. This energy in turn counterbalances the mass of the star pressing inward, a process that self-maintains the conditions in thermal and hydrostatic equilibrium. The minimum temperature required for stellar hydrogen fusion exceeds 10⁷ K, while the density at the core of the Sun is over . The core is surrounded by the stellar envelope, which transports energy from the core to the stellar atmosphere, where it is radiated away into space.
Main sequence
Main sequence stars are distinguished by the primary energy-generating mechanism in their central region, which joins four hydrogen nuclei to form a single helium nucleus through thermonuclear fusion. The Sun is an example of this class of stars. Once stars with the mass of the Sun form, the core region reaches thermal equilibrium after about 100 million (10⁸) years and becomes radiative. This means the generated energy is transported out of the core via radiation and conduction rather than through mass transport in the form of convection. Above this spherical radiation zone lies a small convection zone just below the outer atmosphere. At lower stellar mass, the outer convection shell takes up an increasing proportion of the envelope, and for stars with a mass of around (35% of the mass of the Sun) or less (including failed stars) the entire star is convective, including the core region. These very low-mass stars (VLMS) occupy the late range of the M-type main-sequence stars, or red dwarfs. The VLMS form the primary stellar component of the Milky Way at over 70% of the total population. The low-mass end of the VLMS range reaches about , below which ordinary (non-deuterium) hydrogen fusion does not take place and the object is designated a brown dwarf. The temperature of the core region for a VLMS decreases with decreasing mass, while the density increases. For a star with , the core temperature is about while the density is around . Even at the low end of the temperature range, the hydrogen and helium in the core region are fully ionized. Below about , energy production in the stellar core is predominantly through the proton–proton chain reaction, a process requiring only hydrogen. For stars above this mass, the energy generation comes increasingly from the CNO cycle, a hydrogen fusion process that uses intermediary atoms of carbon, nitrogen, and oxygen. In the Sun, only 1.5% of the net energy comes from the CNO cycle. For stars at where the core temperature reaches 18 MK, half the energy production comes from the CNO cycle and half from the pp chain. The CNO process is more temperature-sensitive than the pp chain, with most of the energy production occurring near the very center of the star. This results in a stronger thermal gradient, which creates convective instability. Hence, the core region is convective for stars above about . For all masses of stars, as the core hydrogen is consumed, the temperature increases so as to maintain pressure equilibrium. This results in an increasing rate of energy production, which in turn causes the luminosity of the star to increase. The lifetime of the core hydrogen–fusing phase decreases with increasing stellar mass. For a star with the mass of the Sun, this period is around ten billion years.
At the lifetime is 65 million years, while at the core hydrogen–fusing period is only six million years. The longest-lived stars are fully convective red dwarfs, which can stay on the main sequence for hundreds of billions of years or more.
Subgiant stars
Once a star has converted all the hydrogen in its core into helium, the core is no longer able to support itself and begins to collapse. It heats up and becomes hot enough for hydrogen in a shell outside the core to start fusion. The core continues to collapse and the outer layers of the star expand. At this stage, the star is a subgiant. Very-low-mass stars never become subgiants because they are fully convective. Stars with masses between about and have small non-convective cores on the main sequence and develop thick hydrogen shells on the subgiant branch. They spend several billion years on the subgiant branch, with the mass of the helium core slowly increasing from the fusion of the hydrogen shell. Eventually, the core becomes degenerate, where the dominant source of core pressure is electron degeneracy pressure, and the star expands onto the red giant branch. Stars with higher masses have at least partially convective cores while on the main sequence, and they develop a relatively large helium core before exhausting hydrogen throughout the convective region, and possibly in a larger region due to convective overshoot. When core fusion ceases, the core starts to collapse, and it is so large that the gravitational energy actually increases the temperature and luminosity of the star for several million years before it becomes hot enough to ignite a hydrogen shell. Once hydrogen starts fusing in the shell, the star cools and it is considered to be a subgiant. When the core of a star is no longer undergoing fusion, but its temperature is maintained by fusion of a surrounding shell, there is a maximum mass called the Schönberg–Chandrasekhar limit. When the mass exceeds that limit, the core collapses, and the outer layers of the star expand rapidly to become a red giant. In stars up to approximately , this occurs only a few million years after the star becomes a subgiant. Stars more massive than have cores above the Schönberg–Chandrasekhar limit before they leave the main sequence.
Giant stars
Once the supply of hydrogen at the core of a low-mass star with at least is depleted, it will leave the main sequence and evolve along the red giant branch of the Hertzsprung–Russell diagram. Those evolving stars with up to about will contract their core until hydrogen begins fusing through the pp chain in a shell around the inert helium core, passing along the subgiant branch. This process will steadily increase the mass of the helium core, causing the hydrogen-fusing shell to increase in temperature until it can generate energy through the CNO cycle. Due to the temperature sensitivity of the CNO process, this hydrogen-fusing shell will be thinner than before. Non-core-convecting stars above that have consumed their core hydrogen through the CNO process contract their cores and evolve directly into the giant stage. The increasing mass and density of the helium core will cause the star to increase in size and luminosity as it evolves up the red giant branch. For stars in the mass range , the helium core becomes degenerate before it is hot enough for helium to start fusion.
When the density of the degenerate helium at the core is sufficiently high – at around with a temperature of about – it undergoes a nuclear explosion known as a "helium flash". This event is not observed outside the star, as the energy released is entirely used up in lifting the core from electron degeneracy back to a normal gas state. The helium-fusing core expands, with the density decreasing to about , while the stellar envelope undergoes a contraction. The star is now on the horizontal branch, with the photosphere showing a rapid decrease in luminosity combined with an increase in the effective temperature. In the more massive main-sequence stars with core convection, the helium produced by fusion becomes mixed throughout the convective zone. Once the core hydrogen is consumed, it is thus effectively exhausted across the entire convection region. At this point, the helium core starts to contract and hydrogen fusion begins in a shell around its perimeter, which then steadily adds more helium to the inert core. At stellar masses above , the core does not become degenerate before initiating helium fusion. Hence, as the star ages, the core continues to contract and heat up until a triple-alpha process can be maintained at the center, fusing helium into carbon. However, most of the energy generated at this stage continues to come from the hydrogen-fusing shell. For stars above , helium fusion at the core begins immediately as the main sequence comes to an end. Two hydrogen-fusing shells are formed around the helium core: a thin CNO cycle inner shell and an outer pp chain shell.
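The main-sequence section above notes that the CNO cycle is far more temperature-sensitive than the pp chain and that the two contribute equally at a core temperature of about 18 MK. The sketch below illustrates this behaviour with rough textbook power-law approximations (energy generation scaling roughly as T⁴ for the pp chain and T¹⁷ for the CNO cycle near these temperatures); the exponents and the normalisation to an 18 MK crossover are assumptions made only for illustration, not values taken from this article.

```python
# Illustration of the pp-chain versus CNO-cycle balance as a function of core
# temperature, using rough power-law approximations (epsilon_pp ~ T^4,
# epsilon_CNO ~ T^17 near these temperatures). The exponents and the
# normalisation to a 50/50 split at 18 MK are assumptions for illustration only.
def cno_energy_fraction(t_mk, crossover_mk=18.0, n_pp=4, n_cno=17):
    pp = (t_mk / crossover_mk) ** n_pp
    cno = (t_mk / crossover_mk) ** n_cno
    return cno / (pp + cno)

for t in (14, 16, 18, 20, 25):  # core temperatures in megakelvins
    print(f"{t} MK: CNO cycle supplies roughly {cno_energy_fraction(t):.0%} of the energy")
# The steep T^17 dependence is why a modest rise in core temperature shifts a star
# from pp-dominated to strongly CNO-dominated energy production.
```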
Physical sciences
Stellar astronomy
Astronomy
1476259
https://en.wikipedia.org/wiki/Megaloptera
Megaloptera
Megaloptera is an order of insects. It contains the alderflies, dobsonflies and fishflies, and there are about 300 known species. The order's name comes from Ancient Greek, from mega- (μέγα-) "large" + pteryx (πτέρυξ) "wing", in reference to the large, clumsy wings of these insects. Megaloptera are relatively unknown insects across much of their range, due to the adults' short lives, the aquatic larvae's often-high tolerance of pollution (so they are not often encountered by swimmers etc.), and the generally crepuscular or nocturnal habits. However, in the Americas the dobsonflies are rather well known, as their males have tusk-like mandibles. These, while formidable in appearance, are relatively harmless to humans and other animals; much like a peacock's feathers, they serve mainly to impress females. However, the mandibles are also used to hold females during mating, and some male dobsonflies spar with each other in courtship displays, trying to flip each other over with their long mandibles. Dobsonfly larvae, commonly called hellgrammites, are often used for angling bait in North America. The Megaloptera were formerly considered part of a group then called Neuroptera, together with lacewings and snakeflies, but these are now generally considered to be separate orders, with Neuroptera referring to the lacewings and relatives (which were formerly called Planipennia). The former Neuroptera, particularly the lacewing group, are nonetheless very closely related to each other, and the new name for this group is Neuropterida. This is either placed at superorder rank, with the Holometabola—of which they are part—becoming an unranked clade above it, or the Holometabola are maintained as a superorder, with an unranked Neuropterida being a part of them. Within the holometabolans, the closest living relatives of the neuropteridan clade are the beetles. The Asian dobsonfly Acanthacorydalis fruhstorferi can have a wingspan of up to , making it the largest aquatic insect in the world by this measurement. Anatomy and life cycle Adult megalopterans closely resemble the lacewings, except for the presence of a pleated region on their hindwings, helping them to fold over the abdomen. They have strong mandibles and mouthparts apparently adapted for chewing, although many species do not eat as adults. They have large compound eyes, and, in some species, also have ocelli. The wings are large and subequal. The female may lay up to 3,000 eggs in a single mass, placing them on vegetation overhanging water. Megaloptera undergo the most rudimentary form of complete metamorphosis among the insects. There are fewer differences between the larval and adult forms of Megaloptera than in any other order of holometabolous insects, and their aquatic larvae dwell in fresh water, around which the adults also live. The larvae are carnivorous, and are known to feed on small invertebrates, such as crustaceans, clams, worms and other insects. They possess strong jaws that they use to capture their prey. They have large heads and elongated bodies. The abdomen bears a number of fine tactile filaments, which, in some species, may include gills. The final segment of the abdomen bears either a pair of prolegs, or a single, tail-like appendage. The larvae grow slowly, taking anywhere from 1 to 5 years to reach the last larval stage. When they reach maturity, the larvae crawl out onto land to pupate in damp soil or under logs. Unusually, the pupa is fully motile, with large mandibles that it can use to defend itself against predators. 
The short-lived adults emerge from the pupa to mate; many species never feed as adults, living only a few days or hours, up to a few weeks at most.
Evolution
Apart from the two living families, there are a few prehistoric taxa sometimes placed in Megaloptera, known only from fossils:
Family Corydasialidae (other authors place its members in Neuroptera)
Family Parasialidae (placement within the Neuropterida uncertain)
Family Euchauliodidae (other authors have suggested that this likely represents a "grylloblattid" instead)
Family Nanosialidae (other authors consider this family more closely related to snakeflies)
The Megaloptera are monophyletic and are a sister clade of the Neuroptera. Within the Megaloptera, Corydalinae and Chauliodinae are sister clades. The oldest fossils confidently identifiable as megalopterans date to the Early Jurassic.
Biology and health sciences
Insects: General
Animals
1476975
https://en.wikipedia.org/wiki/Dangerous%20goods
Dangerous goods
Dangerous goods (DG) are substances that are a risk to health, safety, property or the environment during transport. Certain dangerous goods that pose risks even when not being transported are known as hazardous materials (syllabically abbreviated as HAZMAT or hazmat). An example of dangerous goods is hazardous waste, which is waste that poses substantial or potential threats to public health or the environment. Hazardous materials are often subject to chemical regulations. Hazmat teams are personnel specially trained to handle dangerous goods, which include materials that are radioactive, flammable, explosive, corrosive, oxidizing, asphyxiating, biohazardous, toxic, poisonous, pathogenic, or allergenic. Also included are physical conditions such as compressed gases and liquids or hot materials, including all goods containing such materials or chemicals, or goods that may have other characteristics that render them hazardous in specific circumstances. Dangerous goods are often indicated by diamond-shaped signage on the item (see NFPA 704), its container, or the building where it is stored. The color of each diamond indicates its hazard, e.g., flammable is indicated with red, because fire and heat are generally of red color, and explosive is indicated with orange, because mixing red (flammable) with yellow (oxidizing agent) creates orange. A nonflammable and nontoxic gas is indicated with green, because all compressed air vessels were this color in France after World War II, and France was where the diamond system of hazmat identification originated.
Global regulations
The most widely applied regulatory scheme is that for the transportation of dangerous goods. The United Nations Economic and Social Council issues the UN Recommendations on the Transport of Dangerous Goods, which form the basis for most regional, national, and international regulatory schemes. For instance, the International Civil Aviation Organization has developed dangerous goods regulations for air transport of hazardous materials that are based upon the UN model but modified to accommodate unique aspects of air transport. Individual airline and governmental requirements are incorporated with this by the International Air Transport Association to produce the widely used IATA Dangerous Goods Regulations (DGR). Similarly, the International Maritime Organization (IMO) has developed the International Maritime Dangerous Goods Code ("IMDG Code", part of the International Convention for the Safety of Life at Sea) for transportation of dangerous goods by sea. IMO member countries have also developed the HNS Convention to provide compensation in case of dangerous goods spills in the sea. The Intergovernmental Organisation for International Carriage by Rail has developed the Regulations concerning the International Carriage of Dangerous Goods by Rail ("RID", part of the Convention concerning International Carriage by Rail). Many individual nations have also structured their dangerous goods transportation regulations to harmonize with the UN model in organization as well as in specific requirements. The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is an internationally agreed-upon system set to replace the various classification and labeling standards used in different countries. The GHS uses consistent criteria for classification and labeling on a global level.
UN numbers and proper shipping names
Dangerous goods are assigned UN numbers and proper shipping names according to their hazard classification and their composition. Dangerous goods commonly carried are listed in the Dangerous Goods List. Examples of UN numbers and proper shipping names are:
1202 GAS OIL or DIESEL FUEL or HEATING OIL, LIGHT
1203 MOTOR SPIRIT or GASOLINE or PETROL
3090 LITHIUM METAL BATTERIES
3480 LITHIUM ION BATTERIES including lithium ion polymer batteries
Classification and labeling summary tables
Dangerous goods are divided into nine classes (in addition to several subcategories) on the basis of the specific chemical characteristics producing the risk. Note: The graphics and text in this article representing the dangerous goods safety marks are derived from the United Nations-based system of identifying dangerous goods. Not all countries use precisely the same graphics (label, placard or text information) in their national regulations. Some use graphic symbols, but without English wording or with similar wording in their national language. Refer to the dangerous goods transportation regulations of the country of interest. For example, see the TDG Bulletin: Dangerous Goods Safety Marks based on the Canadian Transportation of Dangerous Goods Regulations. The statement above applies equally to all the dangerous goods classes discussed in this article.
Handling and transportation
Handling
Mitigating the risks associated with hazardous materials may require the application of safety precautions during their transport, use, storage and disposal. Most countries regulate hazardous materials by law, and they are subject to several international treaties as well. Even so, different countries may use different class diamonds for the same product. For example, in Australia, anhydrous ammonia UN 1005 is classified as 2.3 (toxic gas) with subsidiary hazard 8 (corrosive), whereas in the U.S. it is classified only as 2.2 (non-flammable gas). People who handle dangerous goods will often wear protective equipment, and metropolitan fire departments often have a response team specifically trained to deal with accidents and spills. Persons who may come into contact with dangerous goods as part of their work are also often subject to monitoring or health surveillance to ensure that their exposure does not exceed occupational exposure limits. Laws and regulations on the use and handling of hazardous materials may differ depending on the activity and status of the material. For example, one set of requirements may apply to their use in the workplace while a different set of requirements may apply to spill response, sale for consumer use, or transportation. Most countries regulate some aspect of hazardous materials.
Packing groups
Packing groups are used for the purpose of determining the degree of protective packaging required for dangerous goods during transportation.
Group I: great danger, and most protective packaging required. Some combinations of different classes of dangerous goods on the same vehicle or in the same container are forbidden if one of the goods is Group I.
Group II: medium danger
Group III: minor danger among regulated goods, and least protective packaging within the transportation requirement
Transport documents
One of the transport regulations is that, as an aid during emergency situations, written instructions on how to deal with such situations need to be carried and be easily accessible in the driver's cabin.
Dangerous goods shipments also require a dangerous goods transport document prepared by the shipper. The information that is generally required includes the shipper's name and address; the consignee's name and address; descriptions of each of the dangerous goods, along with their quantity, classification, and packaging; and emergency contact information. Common formats include the one issued by the International Air Transport Association (IATA) for air shipments and the form by the International Maritime Organization (IMO) for sea cargo. Training A license or permit card for hazmat training must be presented when requested by officials. Society and culture Global goals The international community has defined the responsible management of hazardous waste and chemicals as an important part of sustainable development with Sustainable Development Goal 3. Target 3.9 has this target with respect to hazardous chemicals: "By 2030, substantially reduce the number of deaths and illnesses from hazardous chemicals and air, water and soil pollution and contamination." Furthermore, Sustainable Development Goal 6 also mentions hazardous materials in Target 6.3: "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials [...]." By country or region Australia The Australian Dangerous Goods Code complies with international standards of importation and exportation of dangerous goods in line with the UN Recommendations on the Transport of Dangerous Goods. Australia uses the standard international UN numbers with a few slightly different signs on the back, front and sides of vehicles carrying hazardous substances. The country uses the same "Hazchem" code system as the UK to provide advisory information to emergency services personnel in the event of an emergency. Canada Transportation of dangerous goods (hazardous materials) in Canada by road is normally a provincial jurisdiction. The federal government has jurisdiction over air, most marine, and most rail transport. The federal government acting centrally created the federal Transportation of Dangerous Goods Act and regulations, which provinces adopted in whole or in part via provincial transportation of dangerous goods legislation. The result is that all provinces use the federal regulations as their standard within their province; some small variances can exist because of provincial legislation. Creation of the federal regulations was coordinated by Transport Canada. Hazard classifications are based upon the UN model. Outside of federal facilities, labour standards are generally under the jurisdiction of individual provinces and territories. However, communication about hazardous materials in the workplace has been standardized across the country through Health Canada's Workplace Hazardous Materials Information System (WHMIS). Europe The European Union has passed numerous directives and regulations to avoid the dissemination and restrict the usage of hazardous substances, important ones being the Restriction of Hazardous Substances Directive (RoHS) and the REACH regulation. There are also long-standing European treaties such as ADR, ADN and RID that regulate the transportation of hazardous materials by road, rail, river and inland waterways, following the guide of the UN model regulations. European law distinguishes clearly between the law of dangerous goods and the law of hazardous materials. 
The first refers primarily to the transport of the respective goods, including interim storage if caused by the transport. The latter describes the requirements for storage (including warehousing) and usage of hazardous materials. This distinction is important, because different directives and orders of European law are applied.
United Kingdom
The United Kingdom (and also Australia, Malaysia, and New Zealand) uses the Hazchem warning plate system, which carries information on how an emergency service should deal with an incident. The Dangerous Goods Emergency Action Code List (EAC) lists dangerous goods; it is reviewed every two years and is an essential compliance document for all emergency services, local government and for those who may control the planning for, and prevention of, emergencies involving dangerous goods. The latest 2015 version is available from the National Chemical Emergency Centre (NCEC) website. Guidance is available from the Health and Safety Executive.
New Zealand
New Zealand's Land Transport Rule: Dangerous Goods 2005 and the Dangerous Goods Amendment 2010 describe the rules applied to the transportation of hazardous and dangerous goods in New Zealand. The system closely follows the United Nations Recommendations on the Transport of Dangerous Goods and uses placards with Hazchem codes and UN numbers on packaging and the transporting vehicle's exterior to convey information to emergency services personnel. Drivers who carry dangerous goods commercially, or who carry quantities in excess of the rule's guidelines, must obtain a D (dangerous goods) endorsement on their driver's licence. Drivers carrying quantities of goods under the rule's guidelines and for recreational or domestic purposes do not need any special endorsements.
United States
Due to the increase in fear of terrorism in the early 21st century after the September 11, 2001 attacks, funding for greater hazmat-handling capabilities was increased throughout the United States, recognizing that flammable, poisonous, explosive, or radioactive substances in particular could be used for terrorist attacks. The Pipeline and Hazardous Materials Safety Administration regulates hazmat transportation within the territory of the US under Title 49 of the Code of Federal Regulations. The U.S. Occupational Safety and Health Administration (OSHA) regulates the handling of hazardous materials in the workplace as well as response to hazardous-materials-related incidents, most notably through the Hazardous Waste Operations and Emergency Response (HAZWOPER) regulations found at 29 CFR 1910.120. In 1984 the agencies OSHA, EPA, USCG, and NIOSH jointly published the first Hazardous Waste Operations and Emergency Response Guidance Manual, which is available for download. The Environmental Protection Agency (EPA) regulates hazardous materials as they may impact the community and environment, including specific regulations for environmental cleanup and for handling and disposal of waste hazardous materials. For instance, transportation of hazardous materials is regulated by the Hazardous Materials Transportation Act. The Resource Conservation and Recovery Act was also passed to further protect human and environmental health. The Consumer Product Safety Commission regulates hazardous materials that may be used in products sold for household and other consumer uses.
Hazard classes for materials in transport
Following the UN model, the DOT divides regulated hazardous materials into nine classes, some of which are further subdivided.
Hazardous materials in transportation must be placarded and have specified packaging and labelling. Some materials must always be placarded; others may only require placarding in certain circumstances. Trailers of goods in transport are usually marked with a four-digit UN number. This number, along with standardized logs of hazmat information, can be referenced by first responders (firefighters, police officers, and ambulance personnel), who can find information about the material in the Emergency Response Guidebook.
Fixed facilities
Different standards usually apply for handling and marking hazmats at fixed facilities, including NFPA 704 diamond markings (a consensus standard often adopted by local governmental jurisdictions), OSHA regulations requiring chemical safety information for employees, and CPSC requirements requiring informative labeling for the public, as well as the wearing of hazmat suits when handling hazardous materials.
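For readers who handle such data programmatically, the following minimal sketch shows one way to represent the example UN numbers, proper shipping names and packing groups listed earlier in this article. The data structure and helper function are purely illustrative assumptions; real classification work must use the full UN Dangerous Goods List and the applicable modal regulations.

```python
# Minimal, purely illustrative lookup of the example UN numbers, proper shipping
# names and packing groups listed earlier in this article. Real classification
# must use the full UN Dangerous Goods List and the applicable modal regulations.
from dataclasses import dataclass
from enum import Enum


class PackingGroup(Enum):
    I = "great danger - most protective packaging required"
    II = "medium danger"
    III = "minor danger - least protective packaging required"


@dataclass(frozen=True)
class DangerousGood:
    un_number: int
    proper_shipping_name: str


EXAMPLE_ENTRIES = {
    1202: DangerousGood(1202, "GAS OIL or DIESEL FUEL or HEATING OIL, LIGHT"),
    1203: DangerousGood(1203, "MOTOR SPIRIT or GASOLINE or PETROL"),
    3090: DangerousGood(3090, "LITHIUM METAL BATTERIES"),
    3480: DangerousGood(3480, "LITHIUM ION BATTERIES"),
}


def proper_shipping_name(un_number: int) -> str:
    # Return the proper shipping name for a UN number, if it is in the example table.
    entry = EXAMPLE_ENTRIES.get(un_number)
    return entry.proper_shipping_name if entry else "not in this example table"


print(proper_shipping_name(1203))  # MOTOR SPIRIT or GASOLINE or PETROL
print(PackingGroup.II.value)       # medium danger
```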
Physical sciences
Basics: General
Chemistry
1477183
https://en.wikipedia.org/wiki/Lancetfish
Lancetfish
Lancetfishes are large oceanic predatory ray-finned fishes in the genus Alepisaurus ("scaleless lizard") in the monogeneric family Alepisauridae. Lancetfishes grow up to in length. Very little is known about their biology, though they are widely distributed in all oceans, except the polar seas. Specimens have been recorded as far north as Greenland. They are often caught as bycatch by vessels long-lining for tuna. The generic name is from Greek a- meaning "without", lepis meaning "scale", and sauros meaning "lizard".
Species
The two currently recognized extant species in this genus are:
Alepisaurus brevirostris Gibbs, 1960 (short-snouted lancetfish)
Alepisaurus ferox R. T. Lowe, 1833 (long-snouted lancetfish)
The anatomical difference between the two species is the shape of the snout, which is long and pointed in A. ferox, and slightly shorter in A. brevirostris. The long-snouted lancetfish is found in the tropical and northern sub-tropical waters of the Pacific Ocean. The short-snouted lancetfish lives in the tropics and subtropics of the Atlantic Ocean and the southern sub-tropics of the Pacific Ocean. A third recognized species, A. paronai D'Erasmo, 1923, is a fossil known from Middle Miocene-aged strata in Italy.
Description
Lancetfish possess a long and very high dorsal fin, soft-rayed from end to end, with an adipose fin behind it. The dorsal fin has 41 to 44 rays and occupies the greater length of the back. This fin is rounded in outline, about twice as high as the fish is deep, and can be depressed into a groove along the back. The body is slender, flattened from side to side, deepest at the gill covers, and tapers back to a slender caudal peduncle. The mouth is wide, gaping to the back of the eye, and each jaw has two or three large, fang-like teeth, in addition to numerous smaller teeth. The caudal fin is very deeply forked; its upper lobe is prolonged as a long filament, although most lancetfishes seem to lose this when captured. The anal fin originates under the last dorsal ray, and is deeply concave in outline. The ventral fins are about halfway between the anal fin and the tip of the snout, while the pectoral fins are considerably longer than the body is deep and are situated very low down on the sides. No scales are present, and the fins are very fragile. Lancetfishes are among the largest living bathypelagic fish forms. Specimens have been collected in excess of in length, often from dead individuals that drifted ashore. Like their close relatives in the Aulopiformes suborder Alepisauroidei, lancetfish lack swimbladders and are simultaneous hermaphrodites.
Ecology and life history
Lancetfish have large mouths and sharp teeth, indicating a predatory mode of life. Their watery muscle is not suited to fast swimming and long pursuit, so they are likely ambush predators, using their narrow body profile and silvery coloration to conceal their presence. Lancetfish float in the water column, using their large eyes to scan for prey; once prey is detected, they attack using their forked tails for rapid bursts of speed, with their large dorsal sails likely used to maintain a stable trajectory toward the target, and their large mouths and teeth used to subdue the prey before it can escape. They are voracious predators, and their distensible stomachs have often been found to contain a variety of food, mainly fishes, octopods, squid, tunicates, and crustaceans. Extremely little is known of lancetfish reproductive habits.
While they are known to be simultaneous hermaphrodites, spawning has never been observed. They are likely planktonic spawners, judging from the small size and pelagic distribution of their larvae. While seasonal presences and absences of lancetfish captures have been noted in certain ocean basins, it remains unclear whether spawning is a seasonal occurrence. No commercial fisheries exist for lancetfishes. Their flesh is watery and gelatinous, although edible and reportedly sweet to taste. They are caught as bycatch by tuna fisheries and are often considered pests, taking bait intended for more valuable species. Data from longline fisheries in the central and western Pacific indicate an apparent increase in lancetfish bycatch. This increase is thought by researchers to reflect a faunal regime shift as other commercially targeted species, such as yellowfin and bigeye tuna, are reduced through fishing. Lancetfishes have been caught on longlines as shallow as ten fathoms in Oregon and the Gulf of Mexico. Some anecdotal reports have observed shoals of 30–40 individuals at the surface in Icelandic waters during spring. Hook-and-line capture of lancetfish from surf zones is not unheard of, and dietary surveys suggest at least some feeding occurs within inshore waters. However, lancetfish are generally considered solitary, mesopelagic, and bathypelagic fishes occupying depths between 100 and 2000 m. While they have not been shown to participate in diel vertical migration, they have been found at a huge variety of depths. The tetraphyllidean tapeworm Pelichnibothrium speciosum is a significant parasite of the long-snouted lancetfish. The species seems to be an intermediate or paratenic host for the tapeworm. The large size, wide depth distribution, and opportunistic diet of lancetfish have lent them to the study of other pelagic biodiversity, because their voraciousness can be used to survey smaller organisms throughout the deep sea that are difficult to capture by other means. Adult lancetfish are commonly caught as bycatch in longline fisheries, and analysis of their gut contents provides a convenient, if somewhat biased, method for surveying regional pelagic biodiversity, so much so that some species of deep-sea fishes were first described from specimens found in the stomachs of lancetfish. This may be partially due to the unusually slow rate of digestion apparent in lancetfish, in which actual digestion seemingly does not begin in earnest until the beginning of the small intestine. In addition to a high degree of cannibalism and consumption of gelatinous foods, lancetfishes have also been documented with plastic refuse in their stomachs in the tropical north Pacific. While the exact pathway of this ingestion is not yet clear, lancetfish likely have some affinity with the epipelagic, but this could be by way of direct migration or migration of prey which had eaten plastic at the surface and returned to depth. One particularly bizarre example of this affinity for surface waters comes from a gut survey of lancetfish in the Indian Ocean, where a large amount (24.1%) of floating macroalgae was documented in the stomachs of exclusively adult (>100 cm) individuals. This is most likely indicative of the pursuit of evasive prey types by larger lancetfish into epipelagic refuges.
The voracious appetite, low degree of prey selectivity, broad depth distribution, slow rate of digestion and ease of sampling via longline bycatch make lancetfishes useful platforms for studying the greater ecology of deep mid-water fauna. In 2023, several lancetfish washed up on beaches along the Oregon coast. While the fish tend to live in tropical or subtropical waters, they often travel to Alaska's Bering Sea to find food. Beach-goers who see the fish have been asked to take a photo and tag the NOAA Fisheries West Coast region.
Biology and health sciences
Aulopiformes
Animals
1477664
https://en.wikipedia.org/wiki/Fort%20Peck%20Dam
Fort Peck Dam
The Fort Peck Dam is the highest of six major dams along the Missouri River, located in northeast Montana in the United States, near Glasgow, and adjacent to the community of Fort Peck. At in length and over in height, it is the largest hydraulically filled dam in the United States, and creates Fort Peck Lake, the fifth largest artificial lake in the U.S., more than long and deep, with a shoreline that is longer than the coastline of the state of California. It lies within the Charles M. Russell National Wildlife Refuge. The dam and the lake are owned and operated by the U.S. Army Corps of Engineers and exist for the purposes of hydroelectric power generation, flood control, and water quality management. The dam presently has a nameplate capacity of 185.25 megawatts, divided among 5 generating units (which in turn are divided between the Western and Eastern grids). Three units in powerhouse number one, completed in 1951, have a capacity of 105 MW. The two remaining generating units in powerhouse number two, completed in 1961, have a nameplate capacity of 80 MW. The lake has a maximum operating pool elevation of above mean sea level and a normal operating pool elevation of above mean sea level. The lake level fluctuates over time based on a number of factors. During the first week of February 2007, the reservoir set a record low elevation of above mean sea level, nearly lower than the previous record low set in 1991. In June 2011, in response to the 2011 Missouri River Floods, the dam was releasing almost , which greatly exceeded its previous record release of set in 1975.
Background
Fort Peck was a major project of the Public Works Administration, part of the New Deal. Construction of Fort Peck Dam started in 1933, and at its peak in July 1936 employed 10,546 workers. The dam, named for a 19th-century trading post, was completed in 1940, and began generating electricity in July 1943. The town of Fort Peck, Montana, "the government town," was built for Army Corps of Engineers personnel and men in "positions of responsibility" and their families during the dam's construction. Many of the facilities that supported the dam's workers are still used today, such as the recreation center and the Fort Peck Theater. In addition to Fort Peck, other towns sprang up to house the workers. Among these were Wheeler and McCone City, as well as more than a dozen others. Many of the homes were later moved to farms and towns around Montana. Fort Peck Dam is one of six Missouri River main stem dams operated by the U.S. Army Corps of Engineers, Omaha District. The dams downstream of Fort Peck Dam are: Garrison Dam (near Riverdale, North Dakota), Oahe Dam (near Pierre, South Dakota), Big Bend Dam (near Fort Thompson, South Dakota), Fort Randall Dam (near Pickstown, South Dakota), and Gavins Point Dam (near Yankton, South Dakota). These six main stem dams impound Missouri River reservoirs with a total combined water storage capacity of approximately and approximately of water surface area.
Construction
The site chosen was on a stretch of the Missouri River flowing from south to north. The river bed at the site consisted of approximately of alluvial deposits, varying from coarse, pervious sands and gravels to impermeable clays. Beneath these deposits lay a thick (approximately ) deposit of Bear Paw shale. This shale is classified as a firm shale and contains thin layers of bentonite.
The topmost layer of soft clay was removed from the alluvium in order to found the dam on the stable sandy deposits beneath, at an elevation of approximately . The remaining deposits consisted of the alluvial materials mentioned above. These deposits had many interconnecting layers of coarse sands and gravels, necessitating the installation of a steel sheet pile wall down to the firm shale, from the left to the right abutment. As designed, the dam extends to an elevation of , for a total height of from the cleared river bed, and has a length from the left to the right abutment of approximately . The upstream face was designed with an average slope of one vertical on four horizontal and included three horizontal shelves built into the slope. A flatter (1 on 7.5) berm was to be placed between stations 30+00 and 75+00 (approximately the center half of the length of the dam). Since the construction method of hydraulic fill was chosen, four electric dredges were built. Because of the distance of the site from the nearest shoreline, a shipyard was started on the site, affectionately dubbed "The Fort Peck Navy" and "The Biggest Shipyard in Montana" by the workers. These dredges would pump material from nearby borrow pits to the dam site, where it was discharged by pipes along the outside edges of the fill. The coarser material settled out quickly, while the fines were carried downhill toward what would eventually become the core of the dam. Samples were taken from all zones regularly to ensure that the material had the gradation and consolidation characteristics specified by the design.
Dam failure during construction
This process proceeded until the elevation of the fill reached approximately , while the reservoir was at an elevation of . At this point, the danger of the core pool overtopping or bursting the shell became greater because the beaches became narrower. For this reason, an extensive alarm system was implemented along the narrower upstream shell. This alarm system could immediately shut off the dredge pumps if a shell breach was detected. Part of this alarm system involved monitoring the elevations of the core pool and the pipelines carrying the dredged fill. On the morning of September 22, 1938, an inspection by the engineer in charge of construction revealed that the elevation of the pipeline relative to the core pool near station 15+10 was only when it should have been . A survey crew was dispatched immediately to determine if the fill on which the pipeline was founded was settling or if the pipeline footing itself was settling. Preliminary measurement showed that the pipeline was closer to the pool than expected from station 15+00 to station 17+00. A meeting with the district engineer and supervisory personnel was scheduled for 1:15 p.m. near the location in question. At approximately 1:15 p.m. the core pool near station 15+00 began to settle slowly. As its rate of settlement increased, cracks appeared below the crest in the upstream embankment. As the settlement of the pool got larger, portions of the shell began to slide backwards into the core pool area and the majority of the upstream shell began to move into the reservoir, translating south and rotating slightly about the east abutment. The west end of the slide mass broke away from the dam near station 27+00 and the core pool water rapidly poured out of the breach that was created in the shell.
Parts of the core in the still-stable portion of the dam continued to slump into the hole created by the loss of the slide mass. One construction supervisor was backing his car away from the advancing scarp to the west along the beach to avoid the slumping and noted that the small scarp in the core was advancing to the west at a speed equal to his own (approximately 10 mph (16 km/h)). A pump barge moored near the dam at the east abutment was swamped by the slide and was lost along with several tractors, loadmasters, and draglines on the slope. Of the 134 men working in the area at the time, 34 were carried into the sliding material. Of these 34, eight could not be rescued and lost their lives. Of the eight men, only two bodies were ever recovered, leaving six men permanently entombed in the structure. The dead were:
Oliver Butcher, 58, laborer, Hinsdale
John I. Johnson, 25, motorboat operator, Dodson
Walter Lubbinge, 29, drill runner's helper, New Deal
Archie Moir, 26, deckhand, Hinsdale
Douglas J. Moore, 35, associate superintendent, DuBois (body found)
Dolphie Paulson, 51, laborer
Albert Stoeser, 23, deckhand, Glasgow (body found)
Nelson P. VanStone, 31, foreman
In the testing and analysis done by the Corps of Engineers and others to determine the cause of the slide, several modes of failure were considered. These were: movement along a weak zone in the shale in the abutment, movement along the shale surface, bursting of the shell due to excessive core pressure, and temporary liquefaction of the shell or foundation sand. Extensive laboratory testing of the shale, both weathered and unweathered, indicated strengths leading to a factor of safety greater than one. Also, portions of the weathered shale were found in the slide mass, indicating that the slip surface was located somewhere in the shale, but probably at a shallow depth. The core material turned out to be much stronger than expected (having a friction angle of approximately 29 degrees) and was carried out into the slide nearly in a solid mass, making it unlikely that the core was the weak point in the slide. Laboratory testing was done on the shell material and the foundation sand, and it was determined that both materials were denser than the critical state for liquefaction. There was no evidence of ground vibration, seismic or otherwise. Some liquefaction may have occurred after the sliding was initiated, but it was unlikely that it caused the slide. The major weak point in the dam seemed to be the bentonite seams in the Bear Paw shale. Very high water pressures were reported at some points in the shale during the construction. This was likely caused by consolidation due to the overburden of the fill being placed for the dam. The excess pore pressures could not be relieved due to the low permeability of the surrounding shale. This resulted in a low effective stress in the bentonite and a very low shear strength.
2011 flooding and repairs
According to the Billings Gazette, the dam was damaged by "record-high runoff and flooding in 2011." As of March 2013, "more than $42.9 million in repairs to Fort Peck Dam have been approved by the U.S. Army Corps of Engineers."
Representations in art and literature
Fort Peck Dam is probably best known for being the subject of a photograph of the spillway, taken by Margaret Bourke-White while the dam was still under construction, that was the cover photo of the first issue of Life magazine on November 23, 1936.
Later, the photograph by Bourke-White was used on a United States postage stamp in the "Celebrate the Century" series. The dam is also center-stage in Bucking the Sun, by the Montana-born writer Ivan Doig, published in 1996. The novel tells the story of the fictional Duff family and their various roles in the mammoth dam project, and in the process describes the working conditions and way of life of the thousands of workers hired to construct the Fort Peck Dam, many of them homesteaders from upriver farms destined to disappear under the waters of the newly formed Fort Peck Lake. The Fort Peck Dam is also featured in Ivan Doig’s novel, The Bartender’s Tale. Fifty Cents an Hour: The Builders and Boomtowns of the Fort Peck Dam, by Montana author Lois Lonnquist, published in 2006, is an overall history of the Fort Peck dam and spillway construction. Built by the Army Corps of Engineers, PWA Project #30 provided thousands of jobs during the Great Depression. The book includes the history of the boomtowns that sprang up in the area, and the "project people" who lived and worked at Fort Peck during the "dam days." M.R. Montgomery, Personal History, "Impalpable Dust," The New Yorker, March 27, 1989, p. 94 was written by the son of an engineer who worked at the dam during its construction. After his father's death, the author researched the dam's construction and his father's role in it.
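The slide mechanism identified in the failure investigation above can be summarized with the standard effective-stress and shear-strength relations of soil mechanics (textbook relations given here for reference, not taken from the Corps of Engineers report):

\sigma' = \sigma - u, \qquad \tau_f = c' + \sigma' \tan \varphi'

where \sigma is the total stress from the overlying fill, u is the pore water pressure, c' is the effective cohesion and \varphi' the effective friction angle. As consolidation under the new fill drove the pore pressure u in the bentonite seams toward the total stress \sigma, the effective stress \sigma' and hence the available shear strength \tau_f fell toward a very small value, consistent with sliding along the seams in the weathered shale at shallow depth.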
Technology
Dams
null
1478144
https://en.wikipedia.org/wiki/Cyclopentane
Cyclopentane
Cyclopentane (also called C pentane) is a highly flammable alicyclic hydrocarbon with chemical formula C5H10 and CAS number 287-92-3, consisting of a ring of five carbon atoms each bonded with two hydrogen atoms above and below the plane. It is a colorless liquid with a petrol-like odor. Its freezing point is −94 °C and its boiling point is 49 °C. Cyclopentane is in the class of cycloalkanes, being alkanes that have one or more carbon rings. It is formed by cracking cyclohexane in the presence of alumina at a high temperature and pressure. It was first prepared in 1893 by the German chemist Johannes Wislicenus. Production, occurrence and use Cycloalkanes are formed by catalytic reforming. For example, when passed over a hot platinum surface, 2-methylbutane converts into cyclopentane. Cyclopentane is principally used as a blowing agent in the manufacture of polyurethane insulating foam, replacing ozone-depleting agents such as CFC-11 and HCFC-141b. While cyclopentane is not typically used as a refrigerant, it is common for domestic appliances that are insulated with cyclopentane-based foam, such as refrigerators and freezers, to be marked with cyclopentane warning labels due to its flammability. Cyclopentane is also used in the manufacture of synthetic resins and rubber adhesives. Cyclopentane is a minor component of automobile fuel, with its share in US gasoline varying between 0.2 and 1.6% in early 1990s and 0.1 to 1.7% in 2011. Its research and motor octane numbers are reported as 101 or 103 and 85 or 86 respectively. Multiple alkylated cyclopentane (MAC) lubricants, such as 1,3,4-tri-(2-octyldodecyl) cyclopentane, have low volatility and are used by NASA in space applications. Cyclopentane requires safety precautions to prevent leakage and ignition as it is both highly flammable and can also cause respiratory arrest when inhaled. Cyclopentane can be fluorinated to give compounds ranging from to perfluorocyclopentane . Such species are conceivable refrigerants and specialty solvents. The cyclopentane ring is pervasive in natural products including many useful drugs. Examples include most steroids, prostaglandins, and some lipids. Conformations In a regular pentagon, the angles at the vertices are all 108°, slightly less than the bond angle in perfectly tetrahedrally bonded carbon, which is about 109.47°. However, cyclopentane is not planar in its normal conformations. It puckers in order to increase the distances between the hydrogen atoms (something which does not happen in the planar cyclopentadienyl anion because it doesn't have as many hydrogen atoms). This means that the average C-C-C angle is less than 108°. There are two conformations that give local minima of the energy, the "envelope" and the "half-chair". The envelope has mirror symmetry (C), while the half chair has two-fold rotational symmetry (C). In both cases the symmetry implies that there are two pairs of equal C-C-C angles and one C-C-C angle that has no pair. In fact for cyclopentane, unlike for cyclohexane (C6H12, see cyclohexane conformation) and higher cycloalkanes, it is not possible geometrically for all the angles and bond lengths to be equal except if it is in the form of a flat regular pentagon.
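For the angle comparison above, the relevant values follow from elementary geometry (a standard derivation, not specific to this article):

\theta_{\text{pentagon}} = \frac{(n-2)\times 180^\circ}{n}\bigg|_{n=5} = 108^\circ, \qquad \theta_{\text{tetrahedral}} = \arccos\left(-\tfrac{1}{3}\right) \approx 109.47^\circ

The difference of only about 1.5° means the angle strain of a planar ring would be small; the puckering into the envelope and half-chair forms is therefore driven mainly by relieving the eclipsing interactions between hydrogen atoms on neighbouring carbons, as described above.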
Physical sciences
Aliphatic hydrocarbons
Chemistry
1480601
https://en.wikipedia.org/wiki/Power%20cable
Power cable
A power cable is an electrical cable, an assembly of one or more electrical conductors, usually held together with an overall sheath. The assembly is used for transmission of electrical power. Power cables may be installed as permanent wiring within buildings, buried in the ground, run overhead, or exposed. Power cables that are bundled inside thermoplastic sheathing and that are intended to be run inside a building are known as NM-B (nonmetallic sheathed building cable). Flexible power cables are used for portable devices, mobile tools, and machinery. History The first power distribution system developed by Thomas Edison in 1882 in New York City used copper rods, wrapped in jute and placed in rigid pipes filled with a bituminous compound. Although vulcanized rubber had been patented by Charles Goodyear in 1844, it was not applied to cable insulation until the 1880s, when it was used for lighting circuits. Rubber-insulated cable was used for 11,000-volt circuits in 1897 installed for the Niagara Falls power project. Mass-impregnated paper-insulated medium voltage cables were commercially practical by 1895. During World War II several varieties of synthetic rubber and polyethylene insulation were applied to cables. Typical residential and office construction in North America has gone through several technologies: Early bare and cloth-covered wires installed with staples Knob and tube wiring, 1880s–1930s, using asphalt-saturated cloth or later rubber insulation Armored cable, known by the genericized trademark "BX" - flexible steel sheath with two cloth-covered, rubber-insulated conductors - introduced in 1906 but more expensive than open single conductors Rubber-insulated wires with jackets of woven cotton cloth (usually impregnated with tar), waxed paper filler - introduced in 1922 Modern two or three-wire+ground PVC-insulated cable (e.g., NM-B), produced by such brands as Romex Aluminum wire was used in the 1960s and 1970s as a cheap replacement for copper and is still used today, but this is now considered unsafe, without proper installation, due to corrosion, softness and creeping of connection. Asbestos was used as an electrical insulator in some cloth wires from the 1920s to 1970s, but discontinued due to its health risk. Teck cable, a PVC-sheathed armored cable Construction Modern power cables come in a variety of sizes, materials, and types, each particularly adapted to its uses. Large single insulated conductors are also sometimes called power cables in the industry. Cables consist of three major components: conductors, insulation, protective jacket. The makeup of individual cables varies according to application. The construction and material are determined by three main factors: Working voltage, determining the thickness of the insulation; Current-carrying capacity, determining the cross-sectional size of the conductor(s); Environmental conditions such as temperature, water, chemical or sunlight exposure, and mechanical impact, determining the form and composition of the outer cable jacket. Cables for direct burial or for exposed installations may also include metal armor in the form of wires spiraled around the cable, or a corrugated tape wrapped around it. The armor may be made of steel or aluminum, and although connected to earth ground is not intended to carry current during normal operation. Electrical power cables are sometimes installed in raceways, including electrical conduit and cable trays, which may contain one or more conductors. 
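As a rough illustration of the second factor listed above, the relation between current-carrying capacity and conductor cross-section, the following sketch estimates the resistive voltage drop of a copper conductor from its dimensions. It is a simplified DC calculation with assumed example figures; real cable sizing follows ampacity tables and installation rules that also account for temperature, AC effects and grouping.

    # Illustrative sketch only: simplified DC estimate of conductor resistance and
    # voltage drop. The resistivity constant and the example run length, current and
    # cross-section are assumptions, not values from this article.

    RHO_COPPER = 1.72e-8  # ohm-metre, approximate resistivity of copper at 20 C

    def round_trip_voltage_drop(current_a, one_way_length_m, cross_section_mm2,
                                resistivity=RHO_COPPER):
        """Voltage drop (volts) over a two-conductor (out-and-back) run."""
        area_m2 = cross_section_mm2 * 1e-6                             # mm^2 -> m^2
        resistance_one_way = resistivity * one_way_length_m / area_m2  # R = rho * L / A
        return current_a * resistance_one_way * 2                      # both conductors carry the current

    # Example: 16 A over a 20 m run of 2.5 mm^2 copper conductor
    print(round(round_trip_voltage_drop(16, 20, 2.5), 1))  # about 4.4 V

A drop of this size on a low-voltage circuit is why a longer run or a higher load current is normally met with a larger conductor cross-section.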
When it is intended to be used inside a building, nonmetallic sheathed building cable (NM-B) consists of two or more wire conductors (plus a grounding conductor) enclosed inside a thermoplastic insulation sheath that is heat-resistant. It has advantages over armored building cable because it is lighter, easier to handle, and its sheathing is easier to work with. Power cables use stranded copper or aluminum conductors, although small power cables may use solid conductors in sizes of up to 1/0. (For a detailed discussion on copper cables, see: Copper wire and cable.) The cable may include uninsulated conductors used for the circuit neutral or for ground (earth) connection. The grounding conductor connects the equipment's enclosure/chassis to ground for protection from electric shock. These uninsulated versions are known as bare conductors or tinned bare conductors. The overall assembly may be round or flat. Non-conducting filler strands may be added to the assembly to maintain its shape. Filler materials can be made in non-hygroscopic versions if required for the application. Special purpose power cables for overhead applications are often bound to a high strength alloy, ACSR, or alumoweld messenger. This cable is called aerial cable or pre-assembled aerial cable (PAC). PAC can be ordered unjacketed; however, this is less common in recent years due to the low added cost of supplying a polymeric jacket. For vertical applications the cable may include armor wires of steel or Kevlar on top of the jacket. The armor wires are attached to supporting plates periodically to help support the weight of the cable. A supporting plate may be included on each floor of the building, tower, or structure. This cable would be called an armored riser cable. For shorter vertical transitions (perhaps 30–150 feet) an unarmored cable can be used in conjunction with basket (Kellum) grips or even specially designed duct plugs. Material specification for the cable's jacket will often consider resistance to water, oil, sunlight, underground conditions, chemical vapors, impact, fire, or high temperatures. In nuclear industry applications the cable may have special requirements for ionizing radiation resistance. Cable materials for a transit application may be specified not to produce large amounts of smoke if burned (low smoke zero halogen). Cables intended for direct burial must consider damage from backfill or dig-ins. HDPE or polypropylene jackets are common for this use. Cables intended for subway (underground vaults) may consider oil, fire resistance, or low smoke as a priority. Few cables these days still employ an overall lead sheath. However, some utilities may still install paper insulated lead covered cable in distribution circuits. Transmission or submarine cables are more likely to use lead sheaths. However, lead is in decline and few manufacturers exist today to produce such items. When cables must run where exposed to mechanical damage (industrial sites), they may be protected with flexible steel tape or wire armor, which may also be covered by a water-resistant jacket. A hybrid cable can include conductors for control signals or may also include optical fibers for data. Higher voltages For circuits operating at or above 2,000 volts between conductors, a conductive shield should surround the conductor's insulation. This equalizes electrical stress on the cable insulation. This technique was patented by Martin Hochstadter in 1916; the shield is sometimes called a Hochstadter shield.
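The stress-equalizing effect of the shields can be seen from the idealized field distribution in a shielded single-core cable, which is that of a coaxial capacitor (a standard idealization, not a statement from this article):

E(r) = \frac{U}{r \ln(r_o / r_i)}, \qquad r_i \le r \le r_o

where U is the conductor-to-shield voltage, r_i is the radius at the conductor shield and r_o the radius at the insulation shield. With the shields in place the field is purely radial and greatest at the conductor shield, rather than concentrating unpredictably at surface irregularities or at points where the cable approaches grounded objects.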
Aside from the semiconductive ("semicon") insulation shield, there will also be a conductor shield. The conductor shield may be semiconductive (usually) or non-conducting. The purpose of the conductor shield is similar to the insulation shield: it is a void filler and voltage stress equalizer. To drain off stray voltage, a metallic shield will be placed over the "semicon." This shield is intended to "make safe" the cable by pulling the voltage on the outside of the insulation down to zero (or at least under the OSHA limit of 50 volts). This metallic shield can consist of a thin copper tape, concentric drain wires, flat straps, lead sheath, or other designs. The metallic shields of a cable are connected to earth ground at the ends of the cable, and possibly at locations along the length if voltage rise during faults would be dangerous. Multi-point grounding is the most common way to ground the cable's shield. Some special applications require shield breaks to limit circulating currents during the normal operations of the circuit. Circuits with shield breaks could be single or multi-point grounded. Special engineering situations may require cross bonding. Liquid or gas filled cables are still employed in distribution and transmission systems today. Cables of 10 kV or higher may be insulated with oil and paper, and are run in a rigid steel pipe, semi-rigid aluminum or lead sheath. For higher voltages the oil may be kept under pressure to prevent formation of voids that would allow partial discharges within the cable insulation. Liquid filled cables are known for extremely long service lives with little to no outages. Unfortunately, oil leaks into soil and bodies of water are of grave concern, and maintaining a fleet of the needed pumping stations is a drain on the O+M budget of most power utilities. Pipe-type cables are often converted to solid-insulation circuits at the end of their service life, despite the shorter expected service life of solid insulation. Modern high-voltage cables use polyethylene or other polymers, including XLPE, for insulation. They require special techniques for jointing and terminating; see High-voltage cable. Flexibility of cables (stranding class) All electrical cables are somewhat flexible, allowing them to be shipped to installation sites wound on reels, drums or hand coils. Flexibility is an important factor in determining the appropriate stranding class of the cable, as it directly affects the minimum bending radius. Power cables are generally stranding class A, B, or C. These classes allow for the cable to be trained into a final installed position where the cable will generally not be disturbed. Class A, B, and C offer more durability, especially when pulling cable, and are generally cheaper. Power utilities generally order Class B stranded wire for primary and secondary voltage applications. At times, a solid conductor medium voltage cable can be used when flexibility is not a concern but low cost and water blocking are prioritized. For applications requiring a cable to be moved repeatedly, such as portable equipment, more flexible cables called "cords" or "flex" are used (stranding class G-M). Flexible cords contain fine stranded conductors, rope lay or bunch stranded. They feature overall jackets with appropriate amounts of filler materials to improve their flexibility, trainability, and durability. Heavy duty flexible power cords, such as those feeding a mine face cutting machine, are carefully engineered; their life is measured in weeks.
Very flexible power cables are used in automated machinery, robotics, and machine tools. See power cord and extension cable for further description of flexible power cables. Other types of flexible cable include twisted pair, extensible, coaxial, shielded, and communication cable. An X-ray cable is a special type of flexible high-voltage cable.
Technology
Power transmission
null
26096048
https://en.wikipedia.org/wiki/Infestation
Infestation
Infestation is the state of being invaded or overrun by pests or parasites. It can also refer to the actual organisms living on or within a host. Terminology In general, the term "infestation" refers to parasitic diseases caused by animals such as arthropods (i.e. mites, ticks, and lice) and worms, but excluding (except) conditions caused by protozoa, fungi, bacteria, and viruses, which are called infections. External and internal Infestations can be classified as either external or internal with regards to the parasites' location in relation to the host. External or ectoparasitic infestation is a condition in which organisms live primarily on the surface of the host (though porocephaliasis can penetrate viscerally) and includes those involving mites, ticks, head lice and bed bugs. An internal (or endoparasitic) infestation is a condition in which organisms live within the host and includes those involving worms (though swimmer's itch stays near the surface). Sometimes, the term "infestation" is reserved for external ectoparasitic infestations while the term infection refers to internal endoparasitic conditions.
Biology and health sciences
Helminthic diseases and infestations
Health
5348550
https://en.wikipedia.org/wiki/Evolution%20of%20molluscs
Evolution of molluscs
The evolution of the molluscs is the way in which the Mollusca, one of the largest groups of invertebrate animals, evolved. This phylum includes gastropods, bivalves, scaphopods, cephalopods, and several other groups. The fossil record of mollusks is relatively complete, and they are well represented in most fossil-bearing marine strata. Very early organisms which have dubiously been compared to molluscs include Kimberella and Odontogriphus. Fossil record Good evidence exists for the appearance of gastropods, cephalopods and bivalves in the Cambrian period . However, the evolutionary history both of the emergence of molluscs from the ancestral group Lophotrochozoa, and of their diversification into the well-known living and fossil forms, is still vigorously debated. Debate occurs about whether some Ediacaran and Early Cambrian fossils are molluscs. Kimberella, from about , has been described by some palaeontologists as "mollusc-like", but others are unwilling to go further than "probable bilaterian". There is an even sharper debate about whether Wiwaxia, from about , was a mollusc, and much of this centers on whether its feeding apparatus was a type of radula or more similar to that of some polychaete worms. Nicholas Butterfield, who opposes the idea that Wiwaxia was a mollusc, has written that earlier microfossils from are fragments of a genuinely mollusc-like radula. This appears to contradict the concept that the ancestral molluscan radula was mineralized. However, the Helcionellids, which first appear over in Early Cambrian rocks from Siberia and China, are thought to be early molluscs with rather snail-like shells. Shelled molluscs therefore predate the earliest trilobites. Although most helcionellid fossils are only a few millimeters long, specimens a few centimeters long have also been found, most with more limpet-like shapes. The tiny specimens have been suggested to be juveniles and the larger ones adults. Some analyses of helcionellids concluded these were the earliest gastropods. However, other scientists are not convinced these Early Cambrian fossils show clear signs of the torsion characteristic of modern gastropods, that twists the internal organs so the anus lies above the head. Volborthella, some fossils of which predate , was long thought to be a cephalopod, but discoveries of more detailed fossils showed its shell was not secreted, but built from grains of the mineral silicon dioxide (silica), and it was not divided into a series of compartments by septa as those of fossil shelled cephalopods and the living Nautilus are. Volborthella's classification is uncertain. The Late Cambrian fossil Plectronoceras is now thought to be the earliest clearly cephalopod fossil, as its shell had septa and a siphuncle, a strand of tissue that Nautilus uses to remove water from compartments it has vacated as it grows, and which is also visible in fossil ammonite shells. However, Plectronoceras and other early cephalopods crept along the seafloor instead of swimming, as their shells contained a "ballast" of stony deposits on what is thought to be the underside, and had stripes and blotches on what is thought to be the upper surface. All cephalopods with external shells except the nautiloids became extinct by the end of the Cretaceous period . However, the shell-less Coleoidea (squid, octopus, cuttlefish) are abundant today. The Early Cambrian fossils Fordilla and Pojetaia are regarded as bivalves. "Modern-looking" bivalves appeared in the Ordovician period, .
One bivalve group, the rudists, became major reef-builders in the Cretaceous, but became extinct in the Cretaceous–Paleogene extinction event. Even so, bivalves remain abundant and diverse. The Hyolitha are a class of extinct animals with a shell and operculum that may be molluscs. Authors who suggest they deserve their own phylum do not comment on the position of this phylum in the tree of life Phylogeny The phylogeny (evolutionary "family tree") of molluscs is a controversial subject. In addition to the debates about whether Kimberella'' and any of the "halwaxiids" were molluscs or closely related to molluscs, debates arise about the relationships between the classes of living molluscs. In fact, some groups traditionally classified as molluscs may have to be redefined as distinct but related. Molluscs are generally regarded members of the Lophotrochozoa, a group defined by having trochophore larvae and, in the case of living Lophophorata, a feeding structure called a lophophore. The other members of the Lophotrochozoa are the annelid worms and seven marine phyla. The diagram on the right summarizes a phylogeny presented in 2007. Because the relationships between the members of the family tree are uncertain, it is difficult to identify the features inherited from the last common ancestor of all molluscs. For example, it is uncertain whether the ancestral mollusc was metameric (composed of repeating units)—if it was, that would suggest an origin from an annelid-like worm. Scientists disagree about this: Giribet and colleagues concluded, in 2006, the repetition of gills and of the foot's retractor muscles were later developments, while in 2007, Sigwart concluded the ancestral mollusc was metameric, and it had a foot used for creeping and a "shell" that was mineralized. In one particular branch of the family tree, the shell of conchiferans is thought to have evolved from the spicules (small spines) of aplacophorans; but this is difficult to reconcile with the embryological origins of spicules. The molluscan shell appears to have originated from a mucus coating, which eventually stiffened into a cuticle. This would have been impermeable and thus forced the development of more sophisticated respiratory apparatus in the form of gills. Eventually, the cuticle would have become mineralized, using the same genetic machinery (the engrailed gene) as most other bilaterian skeletons. The first mollusc shell almost certainly was reinforced with the mineral aragonite. The evolutionary relationships 'within' the molluscs are also debated, and the diagrams below show two widely supported reconstructions: Morphological analyses tend to recover a conchiferan clade that receives less support from molecular analyses, although these results also lead to unexpected paraphylies, for instance scattering the bivalves throughout all other mollusc groups. However, an analysis in 2009 using both morphological and molecular phylogenetics comparisons concluded the molluscs are not monophyletic; in particular, Scaphopoda and Bivalvia are both separate, monophyletic lineages unrelated to the remaining molluscan classes; the traditional phylum Mollusca is polyphyletic, and it can only be made monophyletic if scaphopods and bivalves are excluded. A 2010 analysis recovered the traditional conchiferan and aculiferan groups, and showed molluscs were monophyletic, demonstrating that available data for solenogastres was contaminated. 
Current molecular data are insufficient to constrain the molluscan phylogeny, and since the methods used to determine the confidence in clades are prone to overestimation, it is risky to place too much emphasis even on the areas of which different studies agree. Rather than eliminating unlikely relationships, the latest studies add new permutations of internal molluscan relationships, even bringing the conchiferan hypothesis into question.
Biology and health sciences
Basics_4
Biology
5353692
https://en.wikipedia.org/wiki/Metriorhynchidae
Metriorhynchidae
Metriorhynchidae is an extinct family of specialized, aquatic metriorhynchoid crocodyliforms from the Middle Jurassic to the Early Cretaceous period (Bajocian to early Aptian) of Europe, North America and South America. The name Metriorhynchidae was coined by the Austrian zoologist Leopold Fitzinger in 1843. The group contains two subfamilies, the Metriorhynchinae and the Geosaurinae. They represent the most marine adapted of all archosaurs. Description Metriorhynchids are fully aquatic crocodyliforms. Their forelimbs were small and paddle-like, and unlike living crocodylians, they lost their osteoderms ("armour scutes"). Their body shape maximised hydrodynamy (swimming efficiency), as they did have a shark-like tail fluke. Like ichthyosaurs and plesiosaurs, metriorhynchids developed smooth, scaleless skin. Metriorhynchids were the only group of archosaurs to become fully adapted to the marine realm, becoming pelagic in lifestyle. With tail flukes, reduced limb musculature, and long bones histologically comparable to other obligately aquatic animals, they were almost certainly incapable of terrestrial locomotion; combined with an unusually tall hip opening, as also seen in other obligately aquatic reptiles including the viviparous Keichousaurus, these characters suggest that metriorhynchids gave live birth. A fossil of a pregnant Dakosaurus female recovered from the Late Jurassic plattenkalk, Bavaria, preserves the complete skeleton of a neonate with small, paddle-like forelimbs unsuited for walking on land, similar to those of adults, further supporting live birth in metriorhynchids. Recent research posits that despite their successful adaptation to a pelagic lifestyle, basal metriorhynchids were uniquely disadvantaged among aquatic tetrapods in evolving into sustained swimmers due to little to no posterodorsal retraction of the external nares (unlike other reptilian groups such as mesosaurs, phytosaurs, thalattosaurians, saurosphargids, ichthyosauriforms, sauropterygians, pleurosaurids or mosasauroids, as well as mammalian cetaceans or sirenians). The family has a wide geographic distribution, with material found in Argentina, Chile, Cuba, England, France, Germany, Italy, Mexico, Poland, Russia, Switzerland and Czech Republic. Classification Phylogenetic analyses published during the 2000s cast doubt on the idea that many traditional metriorhynchid genera formed natural groups (i.e., include all descendants of a common ancestor). The traditional species of Geosaurus, Dakosaurus and Cricosaurus were found to represent unnatural groups, and the species traditionally classified in these genera were reshuffled in a study published in November 2009 by Mark T. Young and Marco Brandalise de Andrade. The monophyly of Metriorhynchus and Teleidosaurus is also unsupported, and the species of these genera are pending reclassification. The classification presented by Young and Andrade in 2009 was approved in later studies of the Metriorhynchidae. Metriorhynchidae is a node-based taxon defined in the PhyloCode by Mark T. Young and colleagues in 2024 as "the smallest clade within Metriorhynchoidea containing Thalattosuchus superciliosus, Gracilineustes leedsi, Metriorhynchus brevirostris, Rhacheosaurus gracilis, and Geosaurus giganteus" The cladogram below follows the topology from the 2020 analyses by Young et al. and reduced to genera only. List of genera The type genus of the family Metriorhynchidae is Metriorhynchus from the Middle to Late Jurassic. 
Other genera included within this family are Cricosaurus, Geosaurus, and Dakosaurus. Though once considered a metriorhynchid, Teleidosaurus has since been found to be slightly more distantly related to these animals within the superfamily Metriorhynchoidea. Within this family, the genus Neustosaurus and Enaliosuchus are considered nomen dubium ("doubtful name"). The genus Capelliniosuchus was once thought to be a metriorhynchid similar to Dakosaurus. However, it was later found to be a mosasaur.
Biology and health sciences
Other prehistoric archosaurs
Animals
1481498
https://en.wikipedia.org/wiki/Personal%20flotation%20device
Personal flotation device
A personal flotation device (PFD; also referred to as a life jacket, life preserver, life belt, Mae West, life vest, life saver, cork jacket, buoyancy aid or flotation suit) is a flotation device in the form of a vest or suit that is worn by a user to prevent the wearer from drowning in a body of water. The device will keep the wearer afloat with their head and mouth above the surface – they do not have to swim or tread water in order to stay afloat and can even be unconscious. PFDs are commonly worn on small watercraft or other locations where accidental entry into deep water may occur in order to provide immediate support for the wearer should they end up in the water. PFDs are also kept on large vessels for passengers to wear in an emergency in order to help them stay afloat if they be forced to enter the water or accidentally fall overboard during an evacuation. PFDs are commonly worn for swimming and/or other activities that require an individual to be in water. This is for reasons such as safety (to prevent the drowning of weak swimmers, swimmers in dangerous conditions or swimmers far from safety), to make swimming easier and less demanding, to allow someone who is unable to swim to safely enter water, or as assistance for activities such as water skiing. PFDs are available in different sizes to accommodate variations in body weight. Designs differ depending on wearing convenience, the activities and conditions they are designed to be used in and the level of protection the wearer needs. There are three main types of PFDs: life jackets, buoyancy aids and survival suits; PFDs are most often constructed out of foam pieces, with the exception of some life jackets which are inflated with air. Other highly specialized forms of PFDs include buoyancy compensators used for scuba diving, and submarine escape devices. History The oldest examples of primitive life jackets can be traced back to inflated bladders, animal skins, or hollow sealed gourds for support when crossing deep streams and rivers. Purpose-designed buoyant safety devices consisting of simple blocks of wood or cork were used by Norwegian seamen. In a letter to the Naval Chronicle, dated February 1802, Abraham Bosquet proposed issuing Royal Navy Ships with "strong canvas bags of dimensions, when filled with cork shavings, equal to about that of a bed bolster, coiled in manner like a collar, and sufficiently wide for the head and shoulders to pass through." In 1804, a cork life jacket was available for sale in The Sporting Magazine. In 1806, Francis Daniel, a physician working at Wapping, exhibited an inflatable life preserver, mounting a demonstration in which a number of suitably equipped men jumped into the Thames below Blackfriars Bridge, and variously played musical instruments, smoked pipes, discharged guns and drank wine, as the tide took them upstream. Daniel pursued his idea for some years, by his own account receiving a gold medal from the Royal Society of Arts after surrendering the idea to them. Personal flotation devices were not part of the equipment issued to naval sailors until the early 19th century, for example at the Napoleonic Battle of Trafalgar, although seamen who were press-ganged into naval service might have used such devices to jump ship and swim to freedom. Following the 1852 sinking of the troopship Birkenhead, Ensign G.A. 
Lucas of the 73rd Regiment of Foot wrote "Cornet Bond, 12th Lancers, was...the only person to have a lifejacket – a privately owned Macintosh Life Preserver and seems to have got ashore fairly easily." It was not until lifesaving services were formed that the personal safety of lifeboat crews heading out in pulling boats in generally horrific sea conditions was addressed. The modern life jacket is generally credited to the Inspector of Lifeboats at the Royal National Lifeboat Institution in the UK, Captain John Ross Ward (later Vice Admiral of the Royal Navy). He created a brown cork vest in 1854 to be worn by lifeboat crews for both weather protection and buoyancy. They would be worn over the blue/grey waterproof oilskins In 1900, French electrical engineer, Gustave Trouvé, patented a battery-powered wearable lifejacket. It incorporated small, rubber-insulated maritime electric batteries not only to inflate the jacket, but also to power a light to transmit and receive SOS messages and to launch a distress flare. In 1904 the rigid cork material was supplanted by pouches containing watertight cells filled with kapok, a vegetable material. These soft cells were much more flexible and comfortable to wear compared with devices using hard cork pieces. Kapok buoyancy was used in many navies fighting in World War II. In 1972 yellow or red Beaufort synthetic foam life jackets supplanted kapok for 'inherently buoyant' (vs. inflated and therefore not inherently buoyant) flotation. These modern jackets could support not only the rescuer but the rescued at the same time. The University of Victoria pioneered research and development of the UVic Thermo Float PFD, which provides superior protection from immersion hypothermia by incorporating a neoprene rubber "diaper" that seals the user's upper thigh and groin region from contact with otherwise cold, flushing and debilitating water. During World War II, research to improve the design of life jackets was also conducted in the UK by Edgar Pask, the first Professor of Anaesthesia at Newcastle University. His research involved self-administered anaesthesia as a means of simulating unconsciousness in freezing sea-water. Pask's work earned him the OBE and the description of "the bravest man in the RAF never to have flown an aeroplane". M1926 Inflatable Life Preserver Belt The M1926 Life Preserver belt was issued to US infantry where they were on ships or near the water, in particular amphibious landings such as D-Day. The belt had two bottles that could be activated to inflate the belt if needed, or it could be blown up manually with a tube, if the bottles failed. Admiralty Pattern 14124 inflatable life ring The Admiralty Pattern 14124 inflatable life ring was the main life preserver issued to British sailors at the start of WW2. It provided about 8.5 lbs of buoyancy. Its inherent flaw, and an issue with many life preservers at the time, was that it did not keep the wearer's head back out of the water while they were floating. This meant if they went unconscious they would roll forward and end up face down in the water and drown. Mae West The Mae West was a common nickname for the first inflatable life preserver, which was invented in 1928 by Peter Markus (1885–1974) (US Patent 1694714), with his subsequent improvements in 1930 and 1931. The nickname originated because someone wearing the inflated life preserver often appeared to be as large-breasted as the actress Mae West. It was popular during the Second World War with U.S. 
Army Air Forces and Royal Air Force servicemen, who were issued inflatable Mae Wests as part of their flight gear. Air crew members whose lives were saved by use of the Mae West (and other personal flotation devices) were eligible for membership in the Goldfish Club. British pilot Eric Brown noted in an interview that the Mae West device saved his life after he was forced into the ocean following the sinking of the aircraft carrier he was on, HMS Audacity, by a U-boat in WW2. Out of the 24 crew in his group in the water, the only two who survived were two pilots wearing Mae Wests, the rest were sailors wearing more basic flotation devices (inflatable rings) that kept them afloat, but did not keep their heads out of the water. Specifications Devices designed and approved by authorities for use by civilians (recreational boaters, sailors, canoeists, kayakers) differ from those designed for use by passengers and crew of aircraft (helicopters, airplanes) and of commercial vessels (tugboats, passenger ferries, cargo ships). Devices used by government and military (e.g. water police, coast guard, navy, marines) generally have features not found on civilian or commercial models, for example compatibility with other items worn, like a survival vest, bulletproof vest/body armor, equipment harness, rappelling harness, or parachute harness, and the use of ballistic nylon cloth to protect pressurized canisters used for inflating the vest from injuring the wearer if struck by a round from a firearm. The ballistic cloth keeps the fragments from the canister from becoming shrapnel injurious to the user. Design and regulation Life jackets or life vests are mandatory on airplanes flying over water bodies, in which case they consist of a pair of air cells (bladders) that can be inflated by triggering the release of carbon dioxide gas from a canister—one for each cell. Alternately, the cells can be inflated "orally", that is by blowing into a flexible tube with a one-way valve to seal the air in the cell. Life jackets must also be supplied on commercial seafaring vessels, be accessible to all crew and passengers, and be donned in an emergency. Flotation devices are also found in near water-edges and at swimming pools. They may take the form of a simple vest, a jacket, a full-body suit (one piece coverall), or their variations suited for particular purposes. They are most commonly made of a tough synthetic fiber material encapsulating a source of buoyancy, such as foam or a chamber of air, and are often brightly colored yellow or orange to maximize visibility for rescuers. Some devices consist of a combination of both buoyancy foam and an air chamber. Retroreflective "SOLAS" tape is often sewn to the fabric used to construct life jackets and PFDs to facilitate a person being spotted in darkness when a search light is shone towards the wearer. In the US, federal regulations require all persons under the age of 13 to wear a life jacket (PFD) when in a watercraft under 12 meters long. State regulations may raise or lower this number and must be followed when in that state's jurisdiction. Types Buoyancy aid (foam core) Buoyancy aids are designed to allow freedom of movement while providing a user with the necessary buoyancy. They are also designed for minimal maintenance and as they are only constructed from foam and can be mass-produced inexpensively, making them one of the most common forms of PFDs. Some buoyancy aids also come designed especially for children and youth. 
These vests may include one or two understraps to be worn between the legs of the wearer and also a headrest flap. The understraps are designed to keep the vest from riding up when worn in the water and restrict the wearer from slipping out of the life vest. These straps are adjustable and are included on many different life vests designed to be worn by everyone from infants to adults. The headrest flap is designed to help support the head and keep it out of the water. A grab handle is attached to the headrest to be used if needed to rescue or lift someone out of the water. Buoyancy aids are rated by the amount of buoyancy they provide in Newtons - the minimum rating to be considered suitable as an adult life-jacket for offshore use is . Life jacket Life jackets for outfitting large commercial transport ventures in potentially dangerous waters, such as coastal cruises, offshore passages, and overwater air flights, consisting of either a single air chamber or a pair of (twin or double) sealed air chambers constructed of coated nylon (sometimes with a protective outer encasing of heavier, tougher material such as vinyl), joined, and buckled with a side release buckle. For use aboard ships they may be constructed of foam. Twin air chambers provide for redundancy in the event of one of the air chambers leaking or failing to "fire", for example if the thin air cell fabric is sliced open by sharp metal fragments during emergency evacuation and egress. Most life jackets for leisure use are of the single air chamber type. Aircraft devices for crew and passengers are always inflatable since it may be necessary to swim down and away from a ditched or submerged aircraft and inflated or foam filled devices would significantly impede a person from swimming downward in order to escape a vehicle cabin. Upon surfacing, the person then inflates the device, orally or by triggering the gas canister release mechanism. Most commercial passenger life jackets are fitted with a plastic whistle for attracting attention. It has a light which is activated when in contact with water. Quality life jackets always provide more buoyancy than offered by the buoyancy aids alone. The positioning of the buoyancy on the wearer's torso is such that a righting moment (rotational force) results that will eventually turn most persons who are floating face down in the water (for example, because they are unconscious) into a face up orientation with their bodies inclined backward, unlike more simply designed common foam buoyancy vests. A life jacket that is too loose may not provide sufficient buoyancy in case of an emergency. Today these air chamber vests are commonly referred to as 'inflatable life jackets or vests' and are available not only for commercial applications but also for those engaged in recreational boating, fishing, sailing, kayaking and canoeing. They are available in a variety of styles and are generally more comfortable and less bulky than traditional foam vests. There are also life vests made especially for women. The air chambers are always located over the breast, across the shoulders and encircle the back of the head. They may be inflated by either self-contained carbon dioxide cartridges activated by pulling a cord, or blow tubes with a one-way valve for inflation by exhalation. Some inflatable life jackets also react with salt or fresh water, which causes them to self-inflate. 
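To give a sense of the gas volumes involved in the cartridge inflation described above, a single small carbon dioxide cartridge can supply well over a hundred newtons of buoyancy; the cartridge mass used here is an assumed typical value, not one stated in this article:

n = \frac{33\ \text{g}}{44\ \text{g mol}^{-1}} \approx 0.75\ \text{mol}, \qquad V \approx \frac{nRT}{P} \approx 18\ \text{L}, \qquad F_b \approx \rho_{\text{water}}\, g\, V \approx 1000 \times 9.81 \times 0.018 \approx 180\ \text{N}

That is, roughly 18 litres of gas at ordinary temperature and pressure, displacing about 18 kilograms of water and therefore providing on the order of 180 newtons of buoyant force from one 33 g cartridge.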
The latest generation of self-triggering inflation devices responds to water pressure when submerged and incorporates an actuator known as a 'hydrostatic release'. All automatic life-jackets can be fired manually if required. Regardless of whether manually or automatically triggered, a pin punctures the cartridge/canister and the CO2 gas escapes into the sealed air chamber. However, there is a chance that these water pressure activated inflation devices do not inflate the life jacket if a person is wearing waterproof clothing and falls into the water face-down. In these cases the buoyancy of the clothing holds a person on the water surface, which prevents the hydrostatic release. As a result, a person can drown although wearing a fully functional life jacket. In addition there are some circumstances in which the use of self-triggering devices can result in the wearer becoming trapped underwater. For example, the coxswain of a bowloader rowing shell risks being unable to escape should the craft capsize. To be on the safe side, a pill-activated inflation device is preferred. A small pill that dissolves on water contact is the safest option, as it also works in shallow waters where a hydrostatic activator fails. This type of jacket is called an 'automatic'. As it is more sensitive to the presence of water, early models could also be activated by very heavy rain or spray. For this reason, spare re-arming kits should be carried on board for each life jacket. However, with modern cup/bobbin mechanisms this problem rarely arises and mechanisms such as the Halkey Roberts Pro firing system have all but eliminated accidental firing. Drifting in open seas and international waters, as encountered on long sea voyages and by military forces, requires prolonged survival in water. Suitable life jackets are often attached to a vest with pockets and attachment points for distress signaling and survival aids, for example, a handheld two-way radio (walkie-talkie), emergency beacon (406 MHz frequency), signal mirror, sea marker dye, smoke or light signal flares, strobe light, first-aid supplies, concentrated nutritional items, water purification supplies, shark repellent, knife, and pistol. Accessories such as leg straps can be utilized to keep the inflated chambers in position for floating in a stable attitude, and splash or face shields constructed of clear see-through vinyl covers the head and face to prevent water from waves from inundating the face and entering the airway through the nose or mouth. Immersion suit Some formats of PFDs are designed for long term immersion in cold water in that they provide insulation as well as buoyancy. While a wetsuit of neoprene rubber or a diver's drysuit provides a degree of flotation, in most maritime countries they are not formally considered by regulatory agencies as approved lifesaving devices or as PFDs. It is possible for an incapacitated person in the water to float face-down while wearing only a wet suit or a dry suit since they are not designed to serve as lifesaving devices in the normal understanding of that term. The Mark 10 Submarine Escape Immersion Equipment (SEIE) suit is intended to allow submariners to escape from much deeper depths than currently possible with the Steinke hood. Some United States Navy submarines already have the system, with an ambitious installation and training schedule in place for the remainder of the fleet. 
Because it is a full-body suit, the Mark 10 provides thermal protection once the wearer reaches the surface, and the Royal Navy has successfully tested it at depths. Buoyancy compensator Scuba divers commonly wear a buoyancy compensator, which has an inflatable gas chamber. The amount of gas can be increased or decreased to enable the diver to ascend, descend or maintain neutral buoyancy at a given water depth and to provide positive buoyancy in an emergency to bring the diver to the surface or keep the diver at the surface. Specialized Specialized life jackets include shorter-profile vests commonly used for kayaking (especially playboating), and high-buoyant types for river outfitters and other whitewater professionals. PFDs which include harnesses for tethered rescue work ('live-bait rescue') and pockets or daisy-chains (a series of loops created by sewing flat nylon webbing at regular intervals for the attachment of rescue gear) are made for swiftwater rescue technicians. PFDs for pets Personal flotation devices have been developed for dogs and other pets. While the USCG does not certify personal flotation devices for animals, many manufacturers produce life jackets for dogs and cats. Dogs and cats have been known to die from drowning, either because they do not know how to swim, or because they tire out from overexposure or old age, or have a medical complication such as a seizure, or become unconscious. Most life jackets on the market are designed with foam that wraps around the animal's torso and neck. They provide a basic amount of buoyancy for a dog, but may not provide enough support for the head. They are not ideal for use with heavy dogs. However, they often incorporate a grab handle, which may help to hoist the dog back into the boat. Although most pet life jackets are passive devices, there is at least one automatically inflated life jacket available for pets (made by Critter's Inflatable, LLC). An automatic flotation device is generally more expensive than a foam life jacket, but, like automatic PFDs designed for humans, they are less bulky to wear when not inflated, and when inflated may provide more buoyancy than foam devices. Automatic pet flotation devices are popular in the bulldog community, and also for water therapy where extra support may be needed under the head.
Technology
Basics_7
null
1481660
https://en.wikipedia.org/wiki/Anomura
Anomura
Anomura (sometimes Anomala) is a group of decapod crustaceans, including hermit crabs and others. Although the names of many anomurans include the word crab, all true crabs are in the sister group to the Anomura, the Brachyura (the two groups together form the clade Meiura). Description The name Anomura derives from an old classification in which reptant decapods were divided into Macrura (long-tailed), Brachyura (short-tailed) and Anomura (differently-tailed). The alternative name Anomala reflects the unusual variety of forms in this group; whereas all crabs share some obvious similarities, the various groups of anomurans are quite dissimilar. The group has been moulded by several instances of carcinisation – the development of a crab-like body form. Thus, the king crabs (Lithodidae), porcelain crabs (Porcellanidae) and hairy stone crab (Lomisidae) are all separate instances of carcinisation. As decapods (meaning ten-legged), anomurans have ten pereiopods, but the last pair of these is reduced in size, and often hidden inside the gill chamber (under the carapace) to be used for cleaning the gills. Since this arrangement is very rare in true crabs (for example, the small family Hexapodidae), a "crab" with only eight visible pereiopods is generally an anomuran. Evolution The infraorder Anomura belongs to the group Reptantia, which consists of the walking/crawling decapods (lobsters and crabs). There is wide acceptance from morphological and molecular data that Anomura and Brachyura ("true" crabs) are sister taxa, together making up the clade Meiura. Anomura likely diverged from Brachyura in the Late Triassic period, with the earliest discovered Anomuran fossil Platykotta akaina dating from the Norian–Rhaetian aged Ghalilah Formation of the United Arab Emirates. The cladogram below shows Anomura's placement within the larger order Decapoda, from analysis by Wolfe et al. (2019). Some of the internal relationships within Anomura are shown in the cladogram below, which shows Hippidae as sister to Paguroidea, and resolves Parapaguridae outside of Paguroidea: Classification The infraorder Anomura contained seven extant superfamilies: The oldest fossil attributed to Anomura is Platykotta, from the Norian–Rhaetian (Late Triassic) Period in the United Arab Emirates.
Biology and health sciences
Crabs and hermit crabs
Animals
1481873
https://en.wikipedia.org/wiki/Copper%28II%29%20chloride
Copper(II) chloride
Copper(II) chloride, also known as cupric chloride, is an inorganic compound with the chemical formula . The monoclinic yellowish-brown anhydrous form slowly absorbs moisture to form the orthorhombic blue-green dihydrate , with two water molecules of hydration. It is industrially produced for use as a co-catalyst in the Wacker process. Both the anhydrous and the dihydrate forms occur naturally as the rare minerals tolbachite and eriochalcite, respectively. Structure Anhydrous copper(II) chloride adopts a distorted cadmium iodide structure. In this structure, the copper centers are octahedral. Most copper(II) compounds exhibit distortions from idealized octahedral geometry due to the Jahn-Teller effect, which in this case describes the localization of one d-electron into a molecular orbital that is strongly antibonding with respect to a pair of chloride ligands. In , the copper again adopts a highly distorted octahedral geometry, the Cu(II) centers being surrounded by two water ligands and four chloride ligands, which bridge asymmetrically to other Cu centers. Copper(II) chloride is paramagnetic. Of historical interest, was used in the first electron paramagnetic resonance measurements by Yevgeny Zavoisky in 1944. Properties and reactions Aqueous solutions prepared from copper(II) chloride contain a range of copper(II) complexes depending on concentration, temperature, and the presence of additional chloride ions. These species include the blue color of and the yellow or red color of the halide complexes of the formula . Hydrolysis When copper(II) chloride solutions are treated with a base, a precipitation of copper(II) hydroxide occurs: Partial hydrolysis gives dicopper chloride trihydroxide, , a popular fungicide. When an aqueous solution of copper(II) chloride is left in the air and isn't stabilized by a small amount of acid, it is prone to undergo slight hydrolysis. Redox and decomposition Copper(II) chloride is a mild oxidant. It starts to decompose to copper(I) chloride and chlorine gas around and is completely decomposed near : The reported melting point of copper(II) chloride of is a melt of a mixture of copper(I) chloride and copper(II) chloride. The true melting point of can be extrapolated by using the melting points of the mixtures of CuCl and . Copper(II) chloride reacts with several metals to produce copper metal or copper(I) chloride (CuCl) with oxidation of the other metal. To convert copper(II) chloride to copper(I) chloride, it can be convenient to reduce an aqueous solution with sulfur dioxide as the reductant: Coordination complexes reacts with HCl or other chloride sources to form complex ions: the red (found in potassium trichloridocuprate(II) ) (it is a dimer in reality, , a couple of tetrahedrons that share an edge), and the green or yellow (found in potassium tetrachloridocuprate(II) ). Some of these complexes can be crystallized from aqueous solution, and they adopt a wide variety of structures. Copper(II) chloride also forms a variety of coordination complexes with ligands such as ammonia, pyridine and triphenylphosphine oxide: (tetragonal) (tetrahedral) However "soft" ligands such as phosphines (e.g., triphenylphosphine), iodide, and cyanide as well as some tertiary amines induce reduction to give copper(I) complexes. Preparation Copper(II) chloride is prepared commercially by the action of chlorination of copper. Copper at red heat (300-400°C) combines directly with chlorine gas, giving (molten) copper(II) chloride. The reaction is very exothermic. 
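The reactions described above correspond to the following standard equations (reconstructed here for reference; the stoichiometry is conventional and not quoted from this article):

Precipitation by base: CuCl2 + 2 NaOH → Cu(OH)2 + 2 NaCl
Thermal decomposition: 2 CuCl2 → 2 CuCl + Cl2
Reduction by sulfur dioxide: 2 CuCl2 + SO2 + 2 H2O → 2 CuCl + H2SO4 + 2 HCl
Direct chlorination of copper: Cu + Cl2 → CuCl2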
A solution of copper(II) chloride is commercially produced by adding chlorine gas to a circulating mixture of hydrochloric acid and copper. From this solution, the dihydrate can be produced by evaporation. Although copper metal itself cannot be oxidized by hydrochloric acid, copper-containing bases such as the hydroxide, oxide, or copper(II) carbonate can react to form in an acid-base reaction which can subsequently be heated above to produce the anhydrous derivative. Once prepared, a solution of may be purified by crystallization. A standard method takes the solution mixed in hot dilute hydrochloric acid, and causes the crystals to form by cooling in a calcium chloride () ice bath. There are indirect and rarely used means of using copper ions in solution to form copper(II) chloride. Electrolysis of aqueous sodium chloride with copper electrodes produces (among other things) a blue-green foam that can be collected and converted to the hydrate. While this is not usually done due to the emission of toxic chlorine gas, and the prevalence of the more general chloralkali process, the electrolysis will convert the copper metal to copper ions in solution forming the compound. Indeed, any solution of copper ions can be mixed with hydrochloric acid and made into a copper chloride by removing any other ions. Uses Co-catalyst in Wacker process A major industrial application for copper(II) chloride is as a co-catalyst with palladium(II) chloride in the Wacker process. In this process, ethene (ethylene) is converted to ethanal (acetaldehyde) using water and air. During the reaction, reduced to Pd, and the serves to re-oxidize this back to . Air can then oxidize the resultant CuCl back to , completing the cycle. The overall process is: In organic synthesis Copper(II) chloride has some highly specialized applications in the synthesis of organic compounds. It affects the chlorination of aromatic hydrocarbons—this is often performed in the presence of aluminium oxide. It is able to chlorinate the alpha position of carbonyl compounds: This reaction is performed in a polar solvent such as dimethylformamide, often in the presence of lithium chloride, which accelerates the reaction. , in the presence of oxygen, can also oxidize phenols. The major product can be directed to give either a quinone or a coupled product from oxidative dimerization. The latter process provides a high-yield route to 1,1-binaphthol: Such compounds are intermediates in the synthesis of BINAP and its derivatives. Copper(II) chloride dihydrate promotes the hydrolysis of acetonides, i.e., for deprotection to regenerate diols or aminoalcohols, as in this example (where TBDPS = tert-butyldiphenylsilyl): also catalyses the free radical addition of sulfonyl chlorides to alkenes; the alpha-chlorosulfone may then undergo elimination with a base to give a vinyl sulfone product. Catalyst in production of chlorine Copper(II) chloride is used as a catalyst in a variety of processes that produce chlorine by oxychlorination. The Deacon process takes place at about 400 to 450 °C in the presence of a copper chloride: Copper(II) chloride catalyzes the chlorination in the production of vinyl chloride and dichloromethane. Copper(II) chloride is used in the copper–chlorine cycle where it reacts with steam into copper(II) oxide dichloride and hydrogen chloride and is later recovered in the cycle from the electrolysis of copper(I) chloride. Niche uses Copper(II) chloride is used in pyrotechnics as a blue/green coloring agent. 
In a flame test, copper chlorides, like all copper compounds, emit green-blue light. Cobalt-free humidity indicator cards (HICs) based on copper(II) chloride, which change from brown to azure, can be found on the market. In 1998, the European Community classified items containing 0.01 to 1% w/w cobalt(II) chloride as T (Toxic), with the corresponding risk phrase R49 (may cause cancer if inhaled). Consequently, new cobalt-free humidity indicator cards containing copper have been developed. Copper(II) chloride is used as a mordant in the textile industry, as a petroleum sweetening agent, as a wood preservative, and in water treatment. Copper(II) chloride is also used in high school chemistry demonstrations, such as reacting it with aluminum to create aluminum chloride and copper, and in exercises on measuring moles. Natural occurrence Copper(II) chloride occurs naturally as the very rare anhydrous mineral tolbachite and the dihydrate eriochalcite (Morris et al., Standard X-ray Diffraction Powder Patterns, National Bureau of Standards Monograph 25, Section 18, 1981, p. 33). Both are found near fumaroles and in some copper mines. Mixed oxyhydroxide-chlorides such as atacamite, Cu2Cl(OH)3, are more common, arising in the oxidation zones of copper ore beds in arid climates. Safety and biological impact Copper(II) chloride can be toxic. Only concentrations below 1.3 ppm of aqueous copper ions are allowed in drinking water by the US Environmental Protection Agency. If copper chloride is absorbed, it results in headache, diarrhea, a drop in blood pressure, and fever. Ingestion of large amounts may induce copper poisoning, CNS disorders, and haemolysis. Copper(II) chloride has been demonstrated to cause chromosomal aberrations and mitotic cycle disturbances within Allium cepa (onion) cells. Such cellular disturbances lead to genotoxicity. Copper(II) chloride has also been studied as a harmful environmental pollutant. Often present in irrigation-grade water, it can negatively affect water and soil microbes. Specifically, denitrifying bacteria were found to be very sensitive to the presence of copper(II) chloride. At a concentration of 0.95 mg/L, copper(II) chloride was found to cause a 50% inhibition (IC50) of the metabolic activity of denitrifying microbes.
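As a rough arithmetic illustration of the drinking-water limit quoted above, the copper contributed by a given amount of the dihydrate can be estimated from its molar-mass fraction of copper; the masses and volumes in the sketch below are arbitrary examples, and complete dissolution of the salt is assumed.

# Rough estimate: Cu(2+) concentration from dissolved CuCl2.2H2O,
# compared with the 1.3 ppm (mg/L) US EPA drinking-water limit.
# Assumes complete dissolution and that all copper stays in solution.
M_CU = 63.546          # g/mol, copper
M_DIHYDRATE = 170.48   # g/mol, CuCl2.2H2O
EPA_LIMIT_MG_PER_L = 1.3

def copper_ppm(mass_mg: float, volume_l: float) -> float:
    """Copper concentration (mg/L, i.e. ~ppm) from mass_mg of the dihydrate in volume_l of water."""
    return mass_mg * (M_CU / M_DIHYDRATE) / volume_l

# Example: 10 mg of the dihydrate in 1 L gives roughly 3.7 mg/L of copper,
# which would exceed the 1.3 ppm limit.
print(copper_ppm(10, 1.0), EPA_LIMIT_MG_PER_L)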
Physical sciences
Halide salts
Chemistry
1481886
https://en.wikipedia.org/wiki/Cannabis%20%28drug%29
Cannabis (drug)
Cannabis, commonly known as marijuana, weed, and pot, among other names, is a non-chemically uniform drug from the cannabis plant. Native to Central or South Asia, the cannabis plant has been used as a drug for both recreational and entheogenic purposes and in various traditional medicines for centuries. Tetrahydrocannabinol (THC) is the main psychoactive component of cannabis, which is one of the 483 known compounds in the plant, including at least 65 other cannabinoids, such as cannabidiol (CBD). Cannabis can be used by smoking, vaporizing, within food, or as an extract. Cannabis has various mental and physical effects, which include euphoria, altered states of mind and sense of time, difficulty concentrating, impaired short-term memory, impaired body movement (balance and fine psychomotor control), relaxation, and an increase in appetite. Onset of effects is felt within minutes when smoked, but may take up to 90 minutes when eaten (as orally consumed drugs must be digested and absorbed). The effects last for two to six hours, depending on the amount used. At high doses, mental effects can include anxiety, delusions (including ideas of reference), hallucinations, panic, paranoia, and psychosis. There is a strong relation between cannabis use and the risk of psychosis, though the direction of causality is debated. Physical effects include increased heart rate, difficulty breathing, nausea, and behavioral problems in children whose mothers used cannabis during pregnancy; short-term side effects may also include dry mouth and red eyes. Long-term adverse effects may include addiction, decreased mental ability in those who started regular use as adolescents, chronic coughing, susceptibility to respiratory infections, and cannabinoid hyperemesis syndrome. Cannabis is mostly used recreationally or as a medicinal drug, although it may also be used for spiritual purposes. In 2013, between 128 and 232 million people used cannabis (2.7% to 4.9% of the global population between the ages of 15 and 65). It is the most commonly used largely-illegal drug in the world, with the highest use among adults in Zambia, the United States, Canada, and Nigeria. Since the 1970s, the potency of illicit cannabis has increased, with THC levels rising and CBD levels dropping. Cannabis plants have been grown since at least the 3rd millennium BCE and there is evidence of it being smoked for its psychoactive effects around 500 BCE in the Pamir Mountains, Central Asia. Since the 14th century, cannabis has been subject to legal restrictions. The possession, use, and cultivation of cannabis have been illegal in most countries since the 20th century. In 2013, Uruguay became the first country to legalize recreational use of cannabis. Other countries to do so are Canada, Georgia, Germany, Luxembourg, Malta, South Africa, and Thailand. In the U.S., the recreational use of cannabis is legalized in 24 states, 3 territories, and the District of Columbia, though the drug remains federally illegal. In Australia, it is legalized only in the Australian Capital Territory. Etymology Cannabis is a Scythian word. The ancient Greeks learned of the use of cannabis by observing Scythian funerals, during which cannabis was consumed. In Akkadian, cannabis was known as qunubu. The word was adopted into Hebrew as qaneh bosem. 
Uses Medical Medical cannabis, or medical marijuana, refers to the use of cannabis to treat disease or improve symptoms; however, there is no single agreed-upon definition (e.g., cannabinoids derived from cannabis and synthetic cannabinoids are also used). The rigorous scientific study of cannabis as a medicine has been hampered by production restrictions and by the fact that it is classified as an illegal drug by many governments. There is some evidence suggesting cannabis can be used to reduce nausea and vomiting during chemotherapy, to improve appetite in people with HIV/AIDS, or to treat chronic pain and muscle spasms. Evidence for its use for other medical applications is insufficient for drawing conclusions about safety or efficacy. There is evidence supporting the use of cannabis or its derivatives in the treatment of chemotherapy-induced nausea and vomiting, neuropathic pain, and multiple sclerosis. Lower levels of evidence support its use for AIDS wasting syndrome, epilepsy, rheumatoid arthritis, and glaucoma. The medical use of cannabis is legal only in a limited number of territories, including Canada, Belgium, Australia, the Netherlands, New Zealand, Spain, and many U.S. states. This usage generally requires a prescription, and distribution is usually done within a framework defined by local laws. Recreational According to DEA Chief Administrative Law Judge, Francis Young, "cannabis is one of the safest therapeutically active substances known to man". Being under the effects of cannabis is usually referred to as being "high". Cannabis consumption has both psychoactive and physiological effects. The "high" experience can vary widely, based (among other things) on the user's prior experience with cannabis, and the type of cannabis consumed. When smoking cannabis, a euphoriant effect can occur within minutes of smoking. Aside from a subjective change in perception and mood, the most common short-term physical and neurological effects include increased heart rate, increased appetite, impairment of short-term and working memory, and impairment of psychomotor coordination. Additional desired effects from consuming cannabis include relaxation, a general alteration of conscious perception, increased awareness of sensation, increased libido and distortions in the perception of time and space. At higher doses, effects can include altered body image, auditory or visual illusions, pseudohallucinations and ataxia from selective impairment of polysynaptic reflexes. In some cases, cannabis can lead to dissociative states such as depersonalization and derealization. Spiritual Cannabis has held sacred status in several religions and has served as an entheogen – a chemical substance used in religious, shamanic, or spiritual contexts – in the Indian subcontinent since the Vedic period. The earliest known reports regarding the sacred status of cannabis in the Indian subcontinent come from the Atharva Veda, estimated to have been composed sometime around 1400 BCE. The Hindu god Shiva is described as a cannabis user, known as the "Lord of bhang". In modern culture, the spiritual use of cannabis has been spread by the disciples of the Rastafari movement who use cannabis as a sacrament and as an aid to meditation. 
Consumption Modes of consumption Many different ways to consume cannabis involve heat to decarboxylate THCA into THC; common modes include: Smoking, which involves burning and inhaling cannabinoids ("smoke") from small pipes, bongs (portable versions of hookahs with a water chamber), paper-wrapped joints, tobacco-leaf-wrapped blunts, or the like. Vaporizing, heating various forms of cannabis so that the active ingredients form a vapor without combustion of the plant material. Edibles, adding cannabis as an ingredient to a wide variety of foods, including butter and baked goods. In India it is commonly consumed as the beverage bhang. Cannabis tea, prepared with attention to the lipophilic quality of THC, which is only slightly water-soluble (2.8 mg per liter), often involving infusion of cannabis in a saturated fat. Tincture of cannabis, sometimes known as green dragon, is an alcoholic cannabis concentrate. Capsules, typically containing cannabis oil, and other dietary supplement products, for which some 220 were approved in Canada in 2018. Consumption by country In 2013, between 128 and 232 million people used cannabis (2.7% to 4.9% of the global population between the ages of 15 and 65). Cannabis is by far the most widely used illicit substance, with the highest use among adults in Zambia, the United States, Canada, and Nigeria. United States Between 1973 and 1978, eleven states decriminalized marijuana. In 2001, Nevada reduced marijuana possession to a misdemeanor and, since 2012, several other states have decriminalized and even legalized marijuana. In 2018, surveys indicated that almost half of the people in the United States had tried marijuana, 16% had used it in the past year, and 11% had used it in the past month. In 2014, surveys said daily marijuana use amongst US college students had reached its highest level since records began in 1980, rising from 3.5% in 2007 to 5.9% in 2014 and had surpassed daily cigarette use. In the US, men are over twice as likely to use marijuana as women, and 18–29-year-olds are six times more likely to use it than over-65-year-olds. In 2015, a record 44% of the US population had tried marijuana in their lifetime, an increase from 38% in 2013 and 33% in 1985. Marijuana use in the United States is three times above the global average, but in line with other Western democracies. Forty-four percent of American 12th graders have tried the drug at least once, and the typical age of first-use is 16, similar to the typical age of first-use for alcohol but lower than the first-use age for other illicit drugs. A 2022 Gallup poll concluded that Americans were smoking more marijuana than cigarettes for the first time. Adverse effects Short-term Acute negative effects may include anxiety and panic, impaired attention and memory, an increased risk of psychotic symptoms, the inability to think clearly, and an increased risk of accidents. Cannabis impairs a person's driving ability, and THC was the illicit drug most frequently found in the blood of drivers who have been involved in vehicle crashes. Those with THC in their system are from three to seven times more likely to be the cause of the accident than those who had not used either cannabis or alcohol, although its role is not necessarily causal because THC stays in the bloodstream for days to weeks after intoxication. 
Some immediate undesired side effects include a decrease in short-term memory, dry mouth, impaired motor skills, reddening of the eyes, dizziness, feeling tired and vomiting. Some users may experience an episode of acute psychosis, which usually abates after six hours, but in rare instances, heavy users may find the symptoms continuing for many days. Legalization has increased the rates at which children are exposed to cannabis, particularly from edibles. While the toxicity and lethality of THC in children is not known, they are at risk for encephalopathy, hypotension, respiratory depression severe enough to require ventilation, somnolence and coma. Fatality There is no clear evidence for a link between cannabis use and deaths from cardiovascular disease, but a 2019 review noted that it may be an under-reported, contributory factor or direct cause in cases of sudden death, due to the strain it can place on the cardiovascular system. Some deaths have also been attributed to cannabinoid hyperemesis syndrome. There is an association between cannabis use and suicide, particularly in younger users. A 16-month survey of Oregon and Alaska emergency departments found a report of the death of an adult who had been admitted for acute cannabis toxicity. Long-term Psychological effects A 2015 meta-analysis found that, although a longer period of abstinence was associated with smaller magnitudes of impairment, both retrospective and prospective memory were impaired in cannabis users. The authors concluded that some, but not all, of the deficits associated with cannabis use were reversible. A 2012 meta-analysis found that deficits in most domains of cognition persisted beyond the acute period of intoxication, but was not evident in studies where subjects were abstinent for more than 25 days. Few high quality studies have been performed on the long-term effects of cannabis on cognition, and the results were generally inconsistent. Furthermore, effect sizes of significant findings were generally small. One review concluded that, although most cognitive faculties were unimpaired by cannabis use, residual deficits occurred in executive functions. Impairments in executive functioning are most consistently found in older populations, which may reflect heavier cannabis exposure, or developmental effects associated with adolescent cannabis use. One review found three prospective cohort studies that examined the relationship between self-reported cannabis use and intelligence quotient (IQ). The study following the largest number of heavy cannabis users reported that IQ declined between ages 7–13 and age 38. Poorer school performance and increased incidence of leaving school early were both associated with cannabis use, although a causal relationship was not established. Cannabis users demonstrated increased activity in task-related brain regions, consistent with reduced processing efficiency. A reduced quality of life is associated with heavy cannabis use, although the relationship is inconsistent and weaker than for tobacco and other substances. The direction of cause and effect, however, is unclear. The long-term effects of cannabis are not clear. There are concerns surrounding memory and cognition problems, risk of addiction, and the risk of schizophrenia in young people. Neuroimaging Although global abnormalities in white matter and grey matter are not consistently associated with cannabis use, reduced hippocampal volume is consistently found. 
Amygdala abnormalities are sometimes reported, although findings are inconsistent. Cannabis use is associated with increased recruitment of task-related areas, such as the dorsolateral prefrontal cortex, which is thought to reflect compensatory activity due to reduced processing efficiency. Cannabis use is also associated with downregulation of CB1 receptors. The magnitude of downregulation is associated with cumulative cannabis exposure, and is reversed after one month of abstinence. There is limited evidence that chronic cannabis use can reduce levels of glutamate metabolites in the human brain. Cannabis dependence About 9% of those who experiment with marijuana eventually become dependent according to DSM-IV (1994) criteria. A 2013 review estimates daily use is associated with a 10–20% rate of dependence. The highest risk of cannabis dependence is found in those with a history of poor academic achievement, deviant behavior in childhood and adolescence, rebelliousness, poor parental relationships, or a parental history of drug and alcohol problems. Of daily users, about 50% experience withdrawal upon cessation of use (i.e. are dependent), characterized by sleep problems, irritability, dysphoria, and craving. Cannabis withdrawal is less severe than withdrawal from alcohol. According to DSM-V criteria, 9% of those who are exposed to cannabis develop cannabis use disorder, compared to 20% for cocaine, 23% for alcohol and 68% for nicotine. Cannabis use disorder in the DSM-V involves a combination of DSM-IV criteria for cannabis abuse and dependence, plus the addition of craving, without the criterion related to legal troubles. Psychiatric From a clinical perspective, two significant schools of thought exist for psychiatric conditions associated with cannabis (or cannabinoid) use: transient, non-persistent psychotic reactions, and longer-lasting, persistent disorders that resemble schizophrenia. The former is formally known as acute cannabis-associated psychotic symptoms (CAPS). At an epidemiological level, a dose–response relationship exists between cannabis use and increased risk of psychosis and earlier onset of psychosis. Although the epidemiological association is robust, evidence to prove a causal relationship is lacking. Cannabis may also increase the risk of depression, but insufficient research has been performed to draw a conclusion. Cannabis use is associated with increased risk of anxiety disorders, although causality has not been established. A review in 2019 found that research was insufficient to determine the safety and efficacy of using cannabis to treat schizophrenia, psychosis, or other mental disorders. Another found that cannabis use during adolescence was associated with an increased risk of developing depression and suicidal behavior later in life, while finding no effect on anxiety. Physical Heavy, long-term exposure to marijuana may have physical, mental, behavioral and social health consequences. It may be "associated with diseases of the liver (particularly with co-existing hepatitis C), lungs, heart, and vasculature". A 2014 review found that while cannabis use may be less harmful than alcohol use, the recommendation to substitute it for problematic drinking was premature without further study. 
Various surveys conducted between 2015 and 2019 found that many users of cannabis substitute it for prescription drugs (including opioids), alcohol, and tobacco; most of those who used it in place of alcohol or tobacco either reduced or stopped their intake of the latter substances. Cannabinoid hyperemesis syndrome (CHS) is a severe condition seen in some chronic cannabis users where they have repeated bouts of uncontrollable vomiting for 24–48 hours. Four cases of death have been reported as a result of CHS. A limited number of studies have examined the effects of cannabis smoking on the respiratory system. Chronic heavy marijuana smoking is associated with respiratory infections, coughing, production of sputum, wheezing, and other symptoms of chronic bronchitis. The available evidence does not support a causal relationship between cannabis use and chronic obstructive pulmonary disease. Short-term use of cannabis is associated with bronchodilation. Other side effects of cannabis use include cannabinoid hyperemesis syndrome (CHS), a condition which involves recurrent nausea, cramping abdominal pain, and vomiting. Cannabis smoke contains thousands of organic and inorganic chemical compounds. The resulting tar is chemically similar to that found in tobacco smoke, and over fifty known carcinogens have been identified in cannabis smoke, including nitrosamines, reactive aldehydes, and polycyclic aromatic hydrocarbons, including benz[a]pyrene. Cannabis smoke is also inhaled more deeply than tobacco smoke. There is no consensus regarding whether cannabis smoking is associated with an increased risk of cancer. Light and moderate use of cannabis is not believed to increase risk of lung or upper airway cancer. Evidence for causing these cancers is mixed concerning heavy, long-term use. In general there are far lower risks of pulmonary complications for regular cannabis smokers when compared with those of tobacco. A 2015 review found an association between cannabis use and the development of testicular germ cell tumors (TGCTs), particularly non-seminoma TGCTs. Another 2015 meta-analysis found no association between lifetime cannabis use and risk of head or neck cancer. Combustion products are not present when using a vaporizer, consuming THC in pill form, or consuming cannabis foods. There is concern that cannabis may contribute to cardiovascular disease, but evidence of this relationship was unclear. Research in these events is complicated because cannabis is often used in conjunction with tobacco, and with drugs such as alcohol and cocaine that are known to have cardiovascular risk factors. Smoking cannabis has also been shown to increase the risk of myocardial infarction by 4.8 times for the 60 minutes after consumption. There is preliminary evidence that cannabis interferes with the anticoagulant properties of prescription drugs used for treating blood clots. The mechanisms for the anti-inflammatory and possible pain-relieving effects of cannabis were not defined, and there were no governmental regulatory approvals or clinical practices for use of cannabis as a drug. Emergency department visits Emergency room (ER) admissions associated with cannabis use rose significantly from 2012 to 2016; adolescents aged 12–17 had the highest risk. At one Colorado medical center following legalization, approximately two percent of ER admissions were classified as cannabis users. 
The symptoms of one quarter of these users were partially attributed to cannabis (a total of 2567 out of 449,031 patients); other drugs were sometimes involved. Of these cannabis admissions, one quarter were for acute psychiatric effects, primarily suicidal ideation, depression, and anxiety. An additional third of the cases were for gastrointestinal issues including cannabinoid hyperemesis syndrome. According to the United States Department of Health and Human Services, there were 455,000 emergency room visits associated with cannabis use in 2011. These statistics include visits in which the patient was treated for a condition induced by or related to recent cannabis use. The drug use must be "implicated" in the emergency department visit, but does not need to be the direct cause of the visit. Most of the illicit drug emergency room visits involved multiple drugs. In 129,000 cases, cannabis was the only implicated drug. Reproductive health Pharmacology Mechanism of action THC is a weak partial agonist at CB1 receptors, while CBD is a CB1 receptor antagonist. The CB1 receptor is found primarily in the brain as well as in some peripheral tissues, and the CB2 receptor is found primarily in peripheral tissues, but is also expressed in neuroglial cells. THC appears to alter mood and cognition through its agonist actions on the CB1 receptors, which inhibit a secondary messenger system (adenylate cyclase) in a dose-dependent manner. Via CB1 receptor activation, THC indirectly increases dopamine release and produces psychotropic effects. CBD also acts as an allosteric modulator of the μ- and δ-opioid receptors. THC also potentiates the effects of the glycine receptors. It is unknown if or how these actions contribute to the effects of cannabis. Pharmacokinetics The high lipid-solubility of cannabinoids results in their persisting in the body for long periods of time. Even after a single administration of THC, detectable levels of THC can be found in the body for weeks or longer (depending on the amount administered and the sensitivity of the assessment method). Investigators have suggested that this is an important factor in marijuana's effects, perhaps because cannabinoids may accumulate in the body, particularly in the lipid membranes of neurons. Chemistry Chemical composition The main psychoactive component of cannabis is tetrahydrocannabinol (THC), which is formed via decarboxylation of tetrahydrocannabinolic acid (THCA) from the application of heat. Raw leaf is not psychoactive because the cannabinoids are in the form of carboxylic acids. THC is one of the 483 known compounds in the plant, including at least 65 other cannabinoids, such as cannabidiol (CBD). Detection in body fluids THC and its major (inactive) metabolite, THC-COOH, can be measured in blood, urine, hair, oral fluid or sweat using chromatographic techniques as part of a drug use testing program or a forensic investigation of a traffic or other criminal offense. The concentrations obtained from such analyses can often be helpful in distinguishing active use from passive exposure, elapsed time since use, and extent or duration of use. These tests cannot, however, distinguish authorized cannabis smoking for medical purposes from unauthorized recreational smoking. Commercial cannabinoid immunoassays, often employed as the initial screening method when testing physiological specimens for marijuana presence, have different degrees of cross-reactivity with THC and its metabolites. 
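The persistence described under Pharmacokinetics above can be illustrated with a deliberately simplified one-compartment, first-order elimination model; the half-life, starting concentration, and cutoff values used below are arbitrary placeholders, not measured THC or THC-COOH parameters.

# Very simplified first-order elimination: C(t) = C0 * exp(-k * t).
# Illustrates why a long effective half-life yields a long detection window.
# The numbers below are placeholders, not real cannabinoid pharmacokinetics.
import math

def detection_window_days(c0: float, cutoff: float, half_life_days: float) -> float:
    """Days until a starting concentration c0 decays below an assay cutoff."""
    k = math.log(2) / half_life_days
    return math.log(c0 / cutoff) / k

# Example: starting at 300 ng/mL with a 50 ng/mL screening cutoff and a
# 5-day effective half-life gives a window of roughly 13 days.
print(detection_window_days(300, 50, 5))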
Urine contains predominantly THC-COOH, while hair, oral fluid and sweat contain primarily THC. Blood may contain both substances, with the relative amounts dependent on the recency and extent of usage. The Duquenois–Levine test is commonly used as a screening test in the field, but it cannot definitively confirm the presence of cannabis, as a large range of substances have been shown to give false positives. Researchers at John Jay College of Criminal Justice reported that dietary zinc supplements can mask the presence of THC and other drugs in urine. However, a 2013 study conducted by researchers at the University of Utah School of Medicine refute the possibility of self-administered zinc producing false-negative urine drug tests. Varieties and strains CBD is a 5-HT1A receptor agonist, which is under laboratory research to determine if it has an anxiolytic effect. It is often claimed that sativa strains provide a more stimulating psychoactive high while indica strains are more sedating with a body high. However, this is disputed by researchers. A 2015 review found that the use of high CBD-to-THC strains of cannabis showed significantly fewer positive symptoms, such as delusions and hallucinations, better cognitive function and both lower risk for developing psychosis, as well as a later age of onset of the illness, compared to cannabis with low CBD-to-THC ratios. Psychoactive ingredients According to the United Nations Office on Drugs and Crime (UNODC), "the amount of THC present in a cannabis sample is generally used as a measure of cannabis potency." The three main forms of cannabis products are the flower/fruit, resin (hashish), and oil (hash oil). The UNODC states that cannabis often contains 5% THC content, resin "can contain up to 20% THC content", and that "Cannabis oil may contain more than 60% THC content." Studies have found that the potency of illicit cannabis has greatly increased since the 1970s, with THC levels rising and CBD levels dropping. It is unclear, however, whether the increase in THC content has caused people to consume more THC or if users adjust based on the potency of the cannabis. It is likely that the higher THC content allows people to ingest less tar. At the same time, CBD levels in seized samples have lowered, in part because of the desire to produce higher THC levels and because more illegal growers cultivate indoors using artificial lights. This helps avoid detection but reduces the CBD production of the plant. Australia's National Cannabis Prevention and Information Centre (NCPIC) states that the buds (infructescences) of the female cannabis plant contain the highest concentration of THC, followed by the leaves. The stalks and seeds have "much lower THC levels". The UN states that the leaves can contain ten times less THC than the buds, and the stalks 100 times less THC. After revisions to cannabis scheduling in the UK, the government moved cannabis back from a class C to a class B drug. A purported reason was the appearance of high potency cannabis. They believe skunk accounts for between 70% and 80% of samples seized by police (despite the fact that skunk can sometimes be incorrectly mistaken for all types of herbal cannabis). Extracts such as hashish and hash oil typically contain more THC than high potency cannabis infructescences. Laced cannabis and synthetic cannabinoids Hemp buds (or low-potency cannabis buds) laced with synthetic cannabinoids started to be sold as cannabis street drug in 2020. 
The short-term effects of cannabis can be altered if it has been laced with opioid drugs such as heroin or fentanyl. The added drugs are meant to enhance the psychoactive properties, add to its weight, and increase profitability, despite the increased danger of overdose. Preparations Marijuana Marijuana or marihuana (herbal cannabis) consists of the dried flowers and fruits and subtending leaves and stems of the female cannabis plant. This is the most widely consumed form, containing 3% to 20% THC, with reports of up to 33% THC. This is the stock material from which all other preparations are derived. Although herbal cannabis and industrial hemp derive from the same species and contain the psychoactive component (THC), they are distinct strains with unique biochemical compositions and uses. Hemp has lower concentrations of THC and higher concentrations of CBD, which gives it lesser psychoactive effects. Kief Kief is a powder, rich in trichomes, which can be sifted from the leaves, flowers and fruits of cannabis plants and either consumed in powder form or compressed to produce cakes of hashish. The word "kif" derives from a colloquial Arabic word meaning pleasure. Hashish Hashish (also spelled hasheesh, hashisha, or simply hash) is a concentrated resin cake or ball produced from pressed kief, the detached trichomes and fine material that falls off cannabis fruits, flowers and leaves, or from scraping the resin from the surface of the plants and rolling it into balls. It varies in color from black to golden brown depending upon purity and the variety of cultivar from which it was obtained. It can be consumed orally or smoked, and is also vaporized, or 'vaped'. The term "rosin hash" refers to a high quality solventless product obtained through heat and pressure. Tincture Cannabinoids can be extracted from cannabis plant matter using high-proof spirits (often grain alcohol) to create a tincture, often referred to as "green dragon". Nabiximols is a branded product name from a tincture manufacturing pharmaceutical company. Hash oil Hash oil is a resinous matrix of cannabinoids obtained from the cannabis plant by solvent extraction, formed into a hardened or viscous mass. Hash oil can be the most potent of the main cannabis products because of its high level of psychoactive compound per volume, which can vary depending on the plant's mix of essential oils and psychoactive compounds. Butane and supercritical carbon dioxide hash oil have become popular in recent years. Infusions There are many varieties of cannabis infusions owing to the variety of non-volatile solvents used. The plant material is mixed with the solvent and then pressed and filtered to express the oils of the plant into the solvent. Examples of solvents used in this process are cocoa butter, dairy butter, cooking oil, glycerine, and skin moisturizers. Depending on the solvent, these may be used in cannabis foods or applied topically. Marihuana prensada ('pressed marijuana') is a cannabis-derived product widespread among the lower classes of South America, especially since the 1990s. Locally it is known by names that allude to Paraguay, its main producer. The marijuana is dried and mixed with binding agents that make it toxic and highly harmful to health. It is cut into the shape of bricks (ladrillos) and sold for a low price in Argentina, Brazil, Chile, Peru, Venezuela, and even the United States. History Ancient history Cannabis is indigenous to Central or South Asia and its use for fabric and rope dates back to the Neolithic age in China and Japan. 
It is unclear when cannabis first became known for its psychoactive properties. The oldest archeological evidence for the burning of cannabis was found in Romanian kurgans dated to 3,500 BC, and scholars suggest that the drug was first used in ritual ceremonies by Proto-Indo-European tribes living in the Pontic-Caspian steppe during the Chalcolithic period, a custom they eventually spread throughout Western Eurasia during the Indo-European migrations. Some research suggests that the ancient Indo-Iranian drug soma, mentioned in the Vedas, sometimes contained cannabis. This is based on the discovery of a basin containing cannabis in a shrine of the second millennium BC in Turkmenistan. Cannabis was known to the ancient Assyrians, who discovered its psychoactive properties through the Iranians. Using it in some religious ceremonies, they called it qunubu (meaning "way to produce smoke"), a probable origin of the modern word cannabis. The Iranians also introduced cannabis to the Scythians, Thracians and Dacians, whose shamans (the kapnobatai, "those who walk on smoke/clouds") burned cannabis infructescences to induce trance. The plant was used in China before 2800 BC, and found therapeutic use in India by 1000 BC, where it was used in food and drink, including bhang. Cannabis has an ancient history of ritual use and has been used by religions around the world. It has been used as a drug for both recreational and entheogenic purposes and in various traditional medicines for centuries. The earliest evidence of cannabis smoking has been found in the 2,500-year-old tombs of Jirzankal Cemetery in the Pamir Mountains in Western China, where cannabis residue was found in burners with charred pebbles possibly used during funeral rituals. Hemp seeds discovered by archaeologists at Pazyryk suggest that early ceremonial practices, such as eating hemp seeds, occurred among the Scythians during the 5th to 2nd century BC, confirming previous historical reports by Herodotus. It was used by Muslims in various Sufi orders as early as the Mamluk period, for example by the Qalandars. Smoking pipes uncovered in Ethiopia and carbon-dated to around AD 1320 were found to have traces of cannabis. Modern history Cannabis was introduced to the New World by the Spaniards in 1530–1545. Following travels in North Africa and the Middle East in 1836–1840, French physician Jacques-Joseph Moreau wrote on the psychological effects of cannabis use; he founded the Club des Hashischins in Paris in 1844. In 1842, Irish physician William Brooke O'Shaughnessy, who had studied the drug while working as a medical officer in Bengal with the East India Company, brought a quantity of cannabis with him on his return to Britain, provoking renewed interest in the West. Examples of classic literature of the period featuring cannabis include Les paradis artificiels (1860) by Charles Baudelaire and The Hasheesh Eater (1857) by Fitz Hugh Ludlow. Cannabis was criminalized in some countries beginning in the 14th century and was illegal in most countries by the middle of the 20th century. The colonial government of Mauritius banned cannabis in 1840 over concerns about its effect on Indian indentured workers; the same occurred in Singapore in 1870. In the United States, the first restrictions on sale of cannabis came in 1906 (in the District of Columbia). Canada criminalized cannabis in The Opium and Narcotic Drug Act, 1923, before any reports of the use of the drug in Canada, but eventually legalized its consumption for recreational and medicinal purposes in 2018. 
In 1925, a compromise was made at an international conference in The Hague about the International Opium Convention that banned exportation of "Indian hemp" to countries that had prohibited its use, and required importing countries to issue certificates approving the importation and stating that the shipment was required "exclusively for medical or scientific purposes". It also required parties to "exercise an effective control of such a nature as to prevent the illicit international traffic in Indian hemp and especially in the resin". In the United States in 1937, the Marihuana Tax Act was passed, prohibiting the production of hemp in addition to cannabis. In 1972, the Dutch government divided drugs into more- and less-dangerous categories, with cannabis being in the lesser category. Accordingly, possession of small quantities was made a misdemeanor. Cannabis has been available for recreational use in coffee shops since 1976. Cannabis products are only sold openly in certain local "coffeeshops" and possession of small amounts for personal use is decriminalized; however, the police may still confiscate it, which often happens in car checks near the border. Other types of sales and transportation are not permitted, although the general approach toward cannabis was lenient even before official decriminalization. In Uruguay, President Jose Mujica signed legislation to legalize recreational cannabis in December 2013, making Uruguay the first country in the modern era to legalize cannabis. In August 2014, Uruguay legalized growing up to six plants at home, as well as the formation of growing clubs (Cannabis social club), and a state-controlled marijuana dispensary regime. When recreational use of cannabis was legalized in Canada, dietary supplements for human use and veterinary health products containing not more than 10 parts per million of THC extract were approved for marketing; Nabiximols (as Sativex) is used as a prescription drug in Canada. The United Nations' World Drug Report stated that cannabis "was the world's most widely produced, trafficked, and consumed drug in the world in 2010", and estimated between 128 million and 238 million users globally in 2015. Culture, legality and economics Culture Cannabis has been one of the most used psychoactive drugs in the world since the late 20th century, following only tobacco and alcohol in popularity. According to Vera Rubin, the use of cannabis has been encompassed by two major cultural complexes over time: a continuous, traditional folk stream, and a more circumscribed, contemporary configuration. The former involves both sacred and secular use, and is usually based on small-scale cultivation: the use of the plant for cordage, clothing, medicine, food, and a "general use as an euphoriant and symbol of fellowship." The second stream of expansion of cannabis use encompasses "the use of hemp for commercial manufacturers utilizing large-scale cultivation primarily as a fiber for mercantile purposes"; but it is also linked to the search for psychedelic experiences (which can be traced back to the formation of the Parisian Club des Hashischins). Legality Since the beginning of the 20th century, most countries have enacted laws against the cultivation, possession or transfer of cannabis. These laws have had an adverse effect on cannabis cultivation for non-recreational purposes, but there are many regions where handling of cannabis is legal or licensed. 
Many jurisdictions have lessened the penalties for possession of small quantities of cannabis so that it is punished by confiscation and sometimes a fine, rather than imprisonment, focusing more on those who traffic the drug on the black market. In some areas where cannabis use had been historically tolerated, new restrictions were instituted, such as the closing of cannabis coffee shops near the borders of the Netherlands, and closing of coffee shops near secondary schools in the Netherlands. In Copenhagen, Denmark in 2014, mayor Frank Jensen discussed possibilities for the city to legalize cannabis production and commerce. Some jurisdictions use free voluntary or mandatory treatment programs for frequent known users. Simple possession can carry long prison terms in some countries, particularly in East Asia, where the sale of cannabis may lead to a sentence of life in prison or even execution. Political parties, non-profit organizations, and causes based on the legalization of medical cannabis or legalizing the plant entirely (with some restrictions) have emerged in such countries as China and Thailand. In December 2012, the U.S. state of Washington became the first state to officially legalize cannabis in a state law (Washington Initiative 502) (but still illegal by federal law), with the state of Colorado following close behind (Colorado Amendment 64). On 1 January 2013, the first cannabis "club" for private marijuana smoking (no buying or selling, however) was allowed for the first time in Colorado. The California Supreme Court decided in May 2013 that local governments can ban medical cannabis dispensaries despite a state law in California that permits the use of cannabis for medical purposes. At least 180 cities across California have enacted bans in recent years. On 30 April 2024, the United States Department of Justice announced it would move to reclassify cannabis from a Schedule I to a Schedule III controlled substance. In December 2013, Uruguay became the first country to legalize growing, sale and use of cannabis. After a long delay in implementing the retail component of the law, in 2017 sixteen pharmacies were authorized to sell cannabis commercially. On 19 June 2018, the Canadian Senate passed a bill and the Prime Minister announced the effective legalization date as 17 October 2018. Canada is the second country to legalize the drug. In November 2015, Uttarakhand became the first state of India to legalize the cultivation of hemp for industrial purposes. Usage within the Hindu and Buddhist cultures of the Indian subcontinent is common, with many street vendors in India openly selling products infused with cannabis, and traditional medical practitioners in Sri Lanka selling products infused with cannabis for recreational purposes and well as for religious celebrations. Indian laws criminalizing cannabis date back to the colonial period. India and Sri Lanka have allowed cannabis to be taken in the context of traditional culture for recreational/celebratory purposes and also for medicinal purposes. On 17 October 2015, Australian health minister Sussan Ley presented a new law that will allow the cultivation of cannabis for scientific research and medical trials on patients. On 17 October 2018, Canada legalized cannabis for recreational adult use making it the second country in the world to do so after Uruguay and the first G7 nation. 
This legalization comes with regulation similar to that of alcohol in Canada, with age restrictions and limits on home production, distribution, consumption areas, and sale times. Laws around use vary from province to province, including age limits, retail structure, and growing at home. The Canadian Licensed Producer system aims to become the gold standard in the world for safe and secure cannabis production, including provisions for a robust craft cannabis industry where many expect opportunities for experimenting with different strains. As the drug has increasingly been seen as a health issue instead of criminal behavior, cannabis has also been legalized or decriminalized in the Czech Republic, Colombia, Ecuador, Portugal, South Africa, and Canada. Medical marijuana was legalized in Mexico in mid-2017 and legalized for recreational use in June 2021. Germany legalized cannabis for recreational use in April 2024. Legal status by country As of 2022, Uruguay and Canada are the only countries that have fully legalized the cultivation, consumption and bartering of recreational cannabis nationwide. In the United States, 24 states, 3 territories, and the District of Columbia have legalized the recreational use of cannabis – though the drug remains illegal at the federal level. Laws vary from state to state when it comes to the commercial sale. Court rulings in Georgia and South Africa have led to the legalization of cannabis consumption, but not legal sales. A policy of limited enforcement has also been adopted in many countries, in particular Spain and the Netherlands where the sale of cannabis is tolerated at licensed establishments. Contrary to popular belief, cannabis is not legal in the Netherlands, but it has been decriminalized since the 1970s. In 2021, Malta was the first European Union member to legalize the use of cannabis for recreational purposes. In Estonia, it is only legal to sell cannabis products with a THC content of less than 0.2%, although products may contain more cannabidiol. Lebanon has recently become the first Arab country to legalize the cultivation of cannabis for medical use. Penalties for illegal recreational use range from confiscation or small fines to jail time and even death. In some countries citizens can be punished if they have used the drug in another country, including Singapore and South Korea. Economics Production Sinsemilla (Spanish for "without seed") is the dried, seedless (i.e. parthenocarpic) infructescences of female cannabis plants. Because THC production drops off once pollination occurs, the male plants (which produce little THC themselves) are eliminated before they shed pollen to prevent pollination, thus inducing the development of parthenocarpic fruits gathered in dense infructescences. Advanced cultivation techniques such as hydroponics, cloning, high-intensity artificial lighting, and the sea of green method are frequently employed as a response (in part) to prohibition enforcement efforts that make outdoor cultivation more risky. "Skunk" refers to several named strains of potent cannabis, grown through selective breeding and sometimes hydroponics. It is a cross-breed of Cannabis sativa and C. indica (although other strains of this mix exist in abundance). Skunk cannabis potency usually ranges from 6% to 15%, and rarely reaches as high as 20%. The average THC level in coffee shops in the Netherlands is about 18–19%. The average levels of THC in cannabis sold in the United States rose dramatically between the 1970s and 2000. 
This is disputed for various reasons, and there is little consensus as to whether this is a fact or an artifact of poor testing methodologies. According to Daniel Forbes writing for slate.com, the relative strength of modern strains is likely skewed because undue weight is given to much more expensive and potent, but less prevalent, samples. Some suggest that results are skewed by older testing methods that included low-THC-content plant material such as leaves in the samples, which are excluded in contemporary tests. Others believe that modern strains actually are significantly more potent than older ones. The main producing countries of cannabis are Afghanistan, Canada, China, Colombia, India, Jamaica, Lebanon, Mexico, Morocco, the Netherlands, Pakistan, Paraguay, Spain, Thailand, Turkey, the United Kingdom, and the United States. Price The price or street value of cannabis varies widely depending on geographic area and potency. Prices and overall markets have also varied considerably over time. In 1997, cannabis was estimated to be overall the number four value crop in the US, and number one or two in many states, including California, New York, and Florida. This estimate is based on a value to growers of about 60% of retail value. In 2006, cannabis was estimated to have been a $36 billion market. This estimate has been challenged as exaggerated. The UN World Drug Report (2008) estimated that 2006 street prices in the US and Canada ranged from about US$8.8 to $25 per gram (approximately $250 to $700 per ounce), depending on quality. Typical U.S. retail prices were $10–15 per gram (approximately $280–420 per ounce). In 2017, the U.S. was estimated to constitute 90% of the worldwide $9.5 billion legal trade in cannabis. After some U.S. states legalized cannabis, street prices began to drop. In Colorado, the price of smokable buds (infructescences) dropped 40 percent between 2014 and 2019, from $200 per ounce to $120 per ounce ($7 per gram to $4.19 per gram). The European Monitoring Centre for Drugs and Drug Addiction reports that typical retail prices in Europe for cannabis varied from €2 to €20 per gram in 2008, with a majority of European countries reporting prices in the range €4–10. Cannabis as a gateway drug The gateway hypothesis states that cannabis use increases the probability of trying "harder" drugs. The hypothesis has been hotly debated as it is regarded by some as the primary rationale for the United States prohibition on cannabis use. A Pew Research Center poll found that political opposition to marijuana use was significantly associated with concerns about the health effects and whether legalization would increase cannabis use by children. Some studies state that while there is no proof for the gateway hypothesis, young cannabis users should still be considered as a risk group for intervention programs. Other findings indicate that hard drug users are likely to be poly-drug users, and that interventions must address the use of multiple drugs instead of a single hard drug. Almost two-thirds of the poly-drug users in the 2009–2010 Scottish Crime and Justice Survey used cannabis. The gateway effect may appear due to social factors involved in using any illegal drug. Because of the illegal status of cannabis, its consumers are likely to find themselves in situations allowing them to become acquainted with individuals using or selling other illegal drugs. 
Studies have shown that alcohol and tobacco may additionally be regarded as gateway drugs; however, a more parsimonious explanation could be that cannabis is simply more readily available (and at an earlier age) than illegal hard drugs. In turn, alcohol and tobacco are typically easier to obtain at an earlier age than is cannabis (though the reverse may be true in some areas), thus leading to the "gateway sequence" in those individuals, since they are most likely to experiment with any drug offered. A related alternative to the gateway hypothesis is the common liability to addiction (CLA) theory. It states that some individuals are, for various reasons, willing to try multiple recreational substances. The "gateway" drugs are merely those that are (usually) available at an earlier age than the harder drugs. Researchers have noted in an extensive review that it is dangerous to present the sequence of events described in gateway "theory" in causative terms as this hinders both research and intervention. In 2020, the National Institute on Drug Abuse released a study backing allegations that marijuana is a gateway to harder drugs, though not for the majority of marijuana users. The National Institute on Drug Abuse determined that marijuana use is "likely to precede use of other licit and illicit substances" and that "adults who reported marijuana use during the first wave of the survey were more likely than adults who did not use marijuana to develop an alcohol use disorder within 3 years; people who used marijuana and already had an alcohol use disorder at the outset were at greater risk of their alcohol use disorder worsening. Marijuana use is also linked to other substance use disorders including nicotine addiction." It also reported that "These findings are consistent with the idea of marijuana as a "gateway drug". However, the majority of people who use marijuana do not go on to use other, "harder" substances. Also, cross-sensitization is not unique to marijuana. Alcohol and nicotine also prime the brain for a heightened response to other drugs and are, like marijuana, also typically used before a person progresses to other, more harmful substances." Research Research on cannabis is challenging since the plant is illegal in most countries. Research-grade samples of the drug are difficult to obtain for research purposes, unless granted under authority of national regulatory agencies, such as the US Food and Drug Administration. There are also other difficulties in researching the effects of cannabis. Many people who smoke cannabis also smoke tobacco. This causes confounding factors, where questions arise as to whether the tobacco, the cannabis, or both that have caused a cancer. Another difficulty researchers have is in recruiting people who smoke cannabis into studies. Because cannabis is an illegal drug in many countries, people may be reluctant to take part in research, and if they do agree to take part, they may not say how much cannabis they actually smoke.
Biology and health sciences
Drugs and pharmacology
null
1483403
https://en.wikipedia.org/wiki/Carbon%20planet
Carbon planet
A carbon planet is a hypothetical type of planet that contains more carbon than oxygen. Carbon is the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Marc Kuchner and Sara Seager coined the term "carbon planet" in 2005 and investigated such planets following the suggestion of Katharina Lodders that Jupiter formed from a carbon-rich core. Prior investigations of planets with high carbon-to-oxygen ratios include Fegley & Cameron 1987. Carbon planets could form if protoplanetary discs are carbon-rich and oxygen-poor. They would develop differently from Earth, Mars, and Venus, which are composed mostly of silicon–oxygen compounds. Different planetary systems have different carbon-to-oxygen ratios, with the Solar System's terrestrial planets closer to being "oxygen planets" with a C/O molar ratio of 0.55. In 2020, a survey of 249 nearby solar analog stars found that 12% have C/O ratios above 0.65, making them candidates for hosting carbon-rich planetary systems. The exoplanet 55 Cancri e, orbiting a host star with a C/O molar ratio of 0.78, is a possible example of a carbon planet. Definition Such a planet would probably have an iron-rich core like the known terrestrial planets. Surrounding that would be molten silicon carbide and titanium carbide. Above that would be a layer of carbon in the form of graphite, possibly with a kilometers-thick substratum of diamond if there is sufficient pressure. During volcanic eruptions, it is possible that diamonds from the interior could come up to the surface, resulting in mountains of diamonds and silicon carbides. The surface would contain frozen or liquid hydrocarbons (e.g., tar and methane) and carbon monoxide. A weather cycle is hypothetically possible on carbon planets with an atmosphere, provided that the average surface temperature is below 77 °C. However, carbon planets will probably be devoid of water, which cannot form because any oxygen delivered by comets or asteroids will react with the carbon on the surface. The atmosphere on a relatively cool carbon planet would consist primarily of carbon dioxide or carbon monoxide with a significant amount of carbon smog. Composition Carbon planets are predicted to be of similar diameter to silicate and water planets of the same mass, potentially making them difficult to distinguish. The equivalents of geologic features on Earth may also be present, but with different compositions. For instance, the rivers might consist of oils. If the temperature is low enough (below 350 K), then gases may be able to photochemically synthesize into long-chain hydrocarbons, which could rain down onto the surface. In 2011, NASA cancelled a mission called the Terrestrial Planet Finder (TPF), which was to be an observatory much bigger than the Hubble Space Telescope that would have been able to detect such planets. The spectra of carbon planets would lack water, but show the presence of carbonaceous substances, such as carbon monoxide. Possible candidates Draugr, Poltergeist and Phobetor The pulsar planets Draugr, Poltergeist and Phobetor may be carbon planets that formed from the disruption of a carbon-producing star. Carbon planets might also be located near the Galactic Center or in globular clusters orbiting the galaxy, where stars have a higher carbon-to-oxygen ratio than the Sun. When old stars die, they spew out large quantities of carbon. As time passes and more and more generations of stars end, the concentration of carbon, and of carbon planets, will increase. 
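The carbon-to-oxygen thresholds quoted above lend themselves to a simple screening criterion; in the sketch below, the only values taken from this article are the Sun's ratio of about 0.55 and the 0.78 ratio of Janssen's host star, and the other entries are purely hypothetical placeholders.

# Flag stars whose photospheric C/O molar ratio exceeds the ~0.65 threshold
# mentioned above for carbon-rich planetary-system candidates.
# Entries other than the Sun and 55 Cancri are hypothetical examples.
CANDIDATE_THRESHOLD = 0.65

stars = {
    "Sun": 0.55,            # Solar System value quoted above
    "55 Cancri": 0.78,      # value quoted above for Janssen's host star
    "Example star A": 0.70, # hypothetical
    "Example star B": 0.48, # hypothetical
}

candidates = [name for name, c_o in stars.items() if c_o > CANDIDATE_THRESHOLD]
print(candidates)  # -> ['55 Cancri', 'Example star A']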
Janssen In October 2012, it was announced that Janssen showed evidence for being a carbon planet. It has eight times the mass of Earth and twice the radius. Research indicates that the planet is "covered in graphite and diamond rather than water and granite". It orbits the star Copernicus once every 18 hours. Other carbon-rich objects In August 2011, Matthew Bailes and colleagues from Swinburne University of Technology in Australia reported that the millisecond pulsar PSR J1719-1438 may have a binary companion star that has been crushed into a much smaller planet made largely of solid diamond. They deduced that a small companion planet must be orbiting the pulsar and causing a detectable gravitational pull. Further examination revealed that although the planet is relatively small (60,000 km in  diameter, or five times bigger than the Earth) its mass is slightly more than that of Jupiter. The high density of the planet gave the team a clue to its likely makeup of carbon and oxygen—and suggested the crystalline form of the elements. However, this "planet" is hypothesized to be the remains of an evaporated white dwarf companion, being only the remnant inner core. According to some definitions of planet, this would not qualify because it formed as a star. At a distance of pc (approximately 870 light-years), PSR J2222−0137 is a nearby intermediate-mass binary pulsar whose low-mass neutron star's companion is a white dwarf (PSR J2222−0137 B). The white dwarf has a relatively large mass of and a temperature less than 3,000 K, meaning it is likely crystallized, leading to this Earth-sized white dwarf being described as a "diamond-star". Brown dwarfs Planets around brown dwarfs are likely to be carbon planets depleted of water.
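As a rough check on the density argument for the PSR J1719-1438 companion described above, the bulk density implied by a Jupiter-scale mass packed into a body roughly 60,000 km across can be estimated in a few lines of Python. This is an illustrative back-of-the-envelope sketch using round-number constants, not a reproduction of the published analysis.

    import math

    # Rough inputs: a mass "slightly more than that of Jupiter" packed into a body
    # about 60,000 km in diameter (both values quoted above, treated as round numbers).
    jupiter_mass_kg = 1.898e27
    radius_m = 60_000e3 / 2

    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    density_g_cm3 = (jupiter_mass_kg / volume_m3) / 1000.0  # kg/m^3 -> g/cm^3

    # Comes out near 17 g/cm^3, denser than lead, which is why a crystalline-carbon
    # (diamond) composition was proposed for the companion.
    print(f"Implied bulk density: {density_g_cm3:.0f} g/cm^3")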
Physical sciences
Planetary science
Astronomy
3983697
https://en.wikipedia.org/wiki/Belt%20of%20Venus
Belt of Venus
The Belt of Venus, also called Venus's Girdle, the antitwilight arch, or antitwilight, is an atmospheric phenomenon visible shortly before sunrise or after sunset, during civil twilight. It is a pinkish glow that surrounds the observer, extending roughly 10–20° above the horizon. It appears opposite to the afterglow, which it also reflects. In a way, the Belt of Venus is actually alpenglow visible near the horizon during twilight, above the antisolar point. Like alpenglow, the backscatter of reddened sunlight also creates the Belt of Venus. Unlike alpenglow, however, the sunlight scattered by the fine particulates that cause the rosy arch of the Belt shines high in the atmosphere and lasts for a while after sunset or before sunrise. As twilight progresses, the arch is separated from the horizon by the dark band of Earth's shadow, or the "twilight wedge". The pinkish glow is due to the Rayleigh scattering of light from the rising or setting Sun, which is then backscattered by particulates. A similar effect can be seen on a "blood moon" during a total lunar eclipse. The zodiacal light and gegenschein, which are caused by the diffuse reflection of sunlight from interplanetary dust in the Solar System, are also similar phenomena. The Belt of Venus can be observed as having a more vivid pink color during the winter months, as opposed to the summer months, when it appears faded and dim above the yellowish-orange band near the horizon. The name of the phenomenon alludes to the cestus, a girdle or breast-band, of the Ancient Greek goddess Aphrodite, customarily equated with the Roman goddess Venus. Since the greatest elongation (angular separation between the Sun and a Solar System body) of Venus is only 45–48°, the inferior planet never appears opposite the Sun (a 180° difference in ecliptic longitude) as seen from Earth and is thus never located in the Belt of Venus.
Physical sciences
Celestial mechanics
Astronomy
3988315
https://en.wikipedia.org/wiki/Late%20Paleozoic%20icehouse
Late Paleozoic icehouse
The late Paleozoic icehouse, also known as the Late Paleozoic Ice Age (LPIA) and formerly known as the Karoo ice age, was an ice age that began in the Late Devonian and ended in the Late Permian, occurring from 360 to 255 million years ago (Mya), and large land-based ice sheets were then present on Earth's surface. It was the second major icehouse period of the Phanerozoic, after the Late Ordovician Andean-Saharan glaciation. Timeline Interpretations of the LPIA vary, with some researchers arguing it represented one continuous glacial event and others concluding that as many as twenty-five separate ice sheets across Gondwana developed, waxed, and waned independently and diachronously over the course of the Carboniferous and Permian, with the distribution of ice centres shifting as Gondwana drifted and its position relative to the South Pole changed. At the beginning of the LPIA, ice centres were concentrated in western South America; they later shifted eastward across Africa and by the end of the ice age were concentrated in Australia. Evidence from sedimentary basins suggests individual ice centres lasted for approximately 10 million years, with their peaks alternating with periods of low or absent permanent ice coverage. The first glacial episodes of the LPIA occurred during the late Famennian and the Tournaisian, with δ15N evidence showing that the transition from greenhouse to icehouse was a stepwise process and not an immediate change. These Early Mississippian glaciations were transient and minor, with them sometimes being considered discrete glaciations separate from and preceding the LPIA proper. Between 335 and 330 Mya, or sometime between the middle Viséan and earliest Serpukhovian, the LPIA proper began. A start in glacioeustatic sea level changes is recorded from Idaho at around this time. The first major glacial period occurred from the Serpukhovian to the Moscovian: ice sheets expanded from a core in southern Africa and South America. During the Bashkirian, a global eustatic sea level drop occurred, signifying the first major glacial maximum of the LPIA. The Lhasa terrane became glaciated during this stage of the Carboniferous. A relatively warm interglacial interval spanning the Kasimovian and Gzhelian, coinciding with the Alykaevo Climatic Optimum, occurred between this first major glacial period and the later second major glacial period. The Paraná Basin nonetheless experienced its final glaciation during the early Gzhelian. The second glacial period occurred from the late Gzhelian across the Carboniferous-Permian boundary to the early Sakmarian; ice sheets expanded from a core in Australia and India. This was the most intense interval of glaciation of the LPIA; in Australia, it is known as P1. An exceptionally intense cooling event occurred at 300 Ma. From the late Sakmarian onward, and especially following the Artinskian Warming Event (AWE), these ice sheets declined, as indicated by a negative δ18O excursion. Ice sheets retreated southward across Central Africa and in the Karoo Basin. A regional glaciation spanning the latest Sakmarian and the Artinskian, known as P2, occurred in Australia amidst this global pulse of net warming and deglaciation. This massive deglaciation during the late Sakmarian and Artinskian is sometimes considered to be the end of the LPIA proper, with the Artinskian-Kungurian boundary and the associated Kungurian Carbon Isotopic Excursion used as the boundary demarcating the ice age's end. 
Nonetheless, ice caps of a much lower volume and area remained in Australia. Another long regional interval, known as P3, was also limited to Australia and lasted from the middle Kungurian to the early Capitanian; unlike the previous glaciations, this one and the following P4 glaciation were largely limited to alpine glaciation. A final regional Australian interval, known as P4, lasted from the middle Capitanian to the late Wuchiapingian. As with P3, P4's ice sheets were primarily high-altitude glaciers. This glacial period was interrupted by a rapid warming interval corresponding to a surge in activity from the Emeishan Traps and the associated Capitanian mass extinction event. The final alpine glaciers of the LPIA melted in what is now eastern Australia around 255 Mya, during the late Wuchiapingian. The time intervals here referred to as glacial and interglacial periods represent spans of several million years corresponding to colder and warmer icehouse intervals, respectively. They were influenced by long-term variations in palaeogeography, greenhouse gas levels, and geological processes such as rates of volcanism and of silicate weathering, and should not be confused with the shorter-term cycles of glacials and interglacials that are driven by astronomical forcing caused by Milankovitch cycles. Geologic effects According to Eyles and Young, "Renewed Late Devonian glaciation is well documented in three large intracratonic basins in Brazil (Solimoes, Amazonas and Paranaiba basins) and in Bolivia. By the Early Carboniferous (c. 350 Ma) glacial strata were beginning to accumulate in sub-Andean basins of Bolivia, Argentina and Paraguay. By the mid-Carboniferous glaciation had spread to Antarctica, Australia, southern Africa, the Indian Subcontinent, Asia and the Arabian Peninsula. During the Late Carboniferous glacial accumulation (c. 300 Ma) a very large area of Gondwana land mass was experiencing glacial conditions. The thickest glacial deposits of Permo-Carboniferous age are the Dwyka Formation (1000 m thick) in the Karoo Basin in southern Africa, the Itararé Group of the Paraná Basin, Brazil (1400 m) and the Carnarvon Basin in eastern Australia. The Permo-Carboniferous glaciations are significant because of the marked glacio-eustatic changes in sea level that resulted and which are recorded in non-glacial basins. Late Paleozoic glaciation of Gondwana could be explained by the migration of the supercontinent across the South Pole." In northern Ethiopia, glacial landforms like striations, roches moutonnées and chatter marks can be found buried beneath Late Carboniferous-Early Permian glacial deposits (Edaga Arbi Glacials). Glaciofluvial sandstones, moraines, boulder beds, glacially striated pavements, and other glacially derived geologic structures and beds are also known throughout the southern part of the Arabian Peninsula. In southern Victoria Land, Antarctica, the Metschel Tillite, made up of reworked Devonian Beacon Supergroup sedimentary strata along with Cambrian and Ordovician granitoids and some Neoproterozoic metamorphic rocks, preserves glacial sediments indicating the presence of major ice sheets. Northern Victoria Land and Tasmania hosted a distinct ice sheet from the one in southern Victoria Land that flowed west-northwestward. The Sydney Basin of eastern Australia lay at a palaeolatitude of around 60°S to 70°S during the Early and Middle Permian, and its sedimentary successions preserve at least four phases of glaciation throughout this time.
Debate exists as to whether the Northern Hemisphere experienced glaciation like the Southern Hemisphere did, with most palaeoclimate models suggesting that ice sheets did exist in Northern Pangaea but that they were negligible in volume. Diamictites from the Atkan Formation of Magadan Oblast, Russia, have been interpreted as being glacigenic, although recent analyses have challenged this interpretation, suggesting that these diamictites formed during a Capitanian interglacial interval as a result of volcanogenic debris flows associated with the formation of the Okhotsk–Taigonos Volcanic Arc. The tropics experienced a cyclicity between wetter and drier periods that may have been related to changes between cold glacials and warm interglacials. In the Midland Basin of Texas, increased aeolian sedimentation reflective of heightened aridity occurred during warmer intervals, as it did in the Paradox Basin of Utah. Causes Greenhouse gas reduction The evolution of plants following the Silurian-Devonian Terrestrial Revolution and the subsequent adaptive radiation of vascular plants on land began a long-term increase in planetary oxygen levels. Large tree ferns, growing to high, were secondarily dominant to the large arborescent lycopods (30–40 m high) of the Carboniferous coal forests that flourished in equatorial swamps stretching from Appalachia to Poland, and later on the flanks of the Urals. The enhanced carbon sequestration raised the atmospheric oxygen levels to a peak of 35%, and lowered the carbon dioxide level below 300 parts per million (ppm), possibly as low as 180 ppm during the Kasimovian, a level that is today associated with glacial periods. This reduction in the greenhouse effect was coupled with burial of organic carbon as charcoal or coal, with lignin and cellulose (as tree trunks and other vegetation debris) accumulating and being buried in the great Carboniferous coal measures. The reduction of carbon dioxide levels in the atmosphere would be enough to begin the process of changing polar climates, leading to cooler summers which could not melt the previous winter's snow accumulations. The growth in snowfields to 6 m deep would create sufficient pressure to convert the lower levels to ice. Research indicates that changing carbon dioxide concentrations were the dominant driver of changes between colder and warmer intervals during the Early and Middle Permian portions of the LPIA. The tectonic assembly of the continents of Euramerica and Gondwana into Pangaea, in the Hercynian-Alleghany Orogeny, created a major continental land mass within the Antarctic region and increased carbon sequestration via silicate weathering, which led to progressive cooling of summers and to snowfields accumulating in winters, causing mountainous alpine glaciers to grow and then spread out of highland areas. This produced continental glaciers, which spread to cover much of Gondwana. Modelling evidence indicates that tectonically induced carbon dioxide removal via silicate weathering was sufficient to generate the ice age. The closure of the Rheic Ocean and Iapetus Ocean saw disruption of warm-water currents in the Panthalassa Ocean and Paleotethys Sea, which may have also been a factor in the development of the LPIA. The capture of CO2 through weathering of large igneous provinces emplaced during the Kungurian brought about the P3 glaciation. Topographic changes The Mississippian witnessed major uplift in southwestern Gondwana, where the earliest glaciations of the LPIA began.
The uplift, driven by mantle dynamics rather than by crustal tectonic processes, is evidenced by the increase in temperature of the southwestern Gondwanan crust as shown by changing compositions of granites formed at this time. Milankovitch cycles The LPIA, like the present Quaternary glaciation, saw glacial-interglacial cycles governed by Milankovitch cycles acting on timescales of tens of thousands to millions of years. Periods of low obliquity, which decreased annual insolation at the poles, were associated with high moisture flux from low latitudes and glacial expansion at high latitudes, while periods of high obliquity corresponded to warmer, interglacial periods. Data from Serpukhovian and Moscovian marine strata of South China point to glacioeustasy being driven primarily by long-period eccentricity, with a cyclicity of about 0.405 million years, and the modulation of the amplitude of Earth's obliquity, with a cyclicity of approximately 1.2 million years. This is most similar to the early part of the Late Cenozoic Ice Age, from the Oligocene to the Pliocene, before the formation of the Arctic ice cap, suggesting the climate of this episode of time was relatively warm for an icehouse period. Evidence from the Middle Permian Lucaogou Formation of Xinjiang, China, indicates that the climate of the time was particularly sensitive to the 1.2 million year long-period modulation cycle of obliquity. It also suggests that palaeolakes such as those found in the Junggar Basin likely played an important role as a carbon sink during the later stages of the LPIA, with their absorption and release of carbon dioxide acting as powerful feedback loops during glacial and interglacial transitions driven by Milankovitch cycles. Also during this time, unique sedimentary sequences called cyclothems were deposited. These were produced by the repeated alternations of marine and nonmarine environments resulting from glacioeustatic rises and falls of sea levels linked to Milankovitch cycles. Biotic effects The development of high-frequency, high-amplitude glacioeustasy at the beginning of the LPIA, which resulted in sea level changes of up to 120 metres between warmer and colder intervals, combined with the increased geographic separation of marine ecoregions and the decrease in ocean circulation it caused in conjunction with the closure of the Rheic Ocean, has been hypothesised to have been the cause of the Carboniferous-Earliest Permian Biodiversification Event. Milankovitch cycles had profound impacts on marine life at the height of the LPIA, with high-latitude species being more strongly affected by glacial-interglacial cycles than low-latitude species. At the beginning of the LPIA, the transition from a greenhouse to an icehouse climate, in conjunction with increases in atmospheric oxygen concentrations, reduced thermal stratification and increased the vertical extent of the mixed layer, which promoted higher rates of microbial nitrification as revealed by an increase in δ15Nbulk values. The rising levels of oxygen during the late Paleozoic icehouse had major effects upon evolution of plants and animals. Higher oxygen concentration (and accompanying higher atmospheric pressure) enabled energetic metabolic processes which encouraged the evolution of large land-dwelling arthropods and flight, exemplified by the dragonfly-like Meganeura, an aerial predator with a wingspan of 60 to 75 cm.
The herbivorous stocky-bodied and armoured millipede-like Arthropleura was long, and the semiterrestrial Hibbertopterid eurypterids were perhaps as large, and some scorpions reached . Termination Earth's increased planetary albedo produced by the expanding ice sheets would lead to positive feedback loops, spreading the ice sheets still further, until the process hit a limit. Falling global temperatures would eventually limit plant growth, and the rising levels of oxygen would increase the frequency of fire-storms because damp plant matter could burn. Both these effects return carbon dioxide to the atmosphere, reversing the "snowball" effect and forcing the greenhouse effect, with CO2 levels rising to 300 ppm in the following Permian period. Once these factors brought a halt and a small reversal in the spread of ice sheets, the lower planetary albedo resulting from the fall in size of the glaciated areas would have been enough for warmer summers and winters and thus limit the depth of snowfields in areas from which the glaciers expanded. Rising sea levels produced by global warming drowned the large areas of flatland where previously anoxic swamps assisted in burial and removal of carbon (as coal). With a smaller area for deposition of carbon, more carbon dioxide was returned to the atmosphere, further warming the planet. Over the course of the Early and Middle Permian, glacial periods became progressively shorter while warm interglacials became longer, gradually transitioning the world from an icehouse to a greenhouse as the Permian progressed. Obliquity nodes that triggered glacial expansion and increased tropical precipitation before 285.1 Mya became linked to intervals of marine anoxia and increased terrestrial aridification after this point, a turning point signifying the icehouse-greenhouse transition. Increased lacustrine methane emissions acted as a positive feedback enhancing warming. The LPIA finally ended for good around 255 Ma.
Physical sciences
Events
Earth science
23210152
https://en.wikipedia.org/wiki/Tarpon
Tarpon
Tarpon are fish of the genus Megalops. They are the only members of the family Megalopidae. Of the two species, one (M. atlanticus) is native to the Atlantic, and the other (M. cyprinoides) to the Indo-Pacific Oceans. Species and habitats The two species of tarpon are M. atlanticus (Atlantic tarpon) and M. cyprinoides (Indo-Pacific tarpon): M. atlanticus is found on the western Atlantic coast from Virginia to Brazil, throughout the Caribbean and the coast of the Gulf of Mexico. Tarpon are also found along the eastern Atlantic coast from Senegal to South Angola. M. cyprinoides is found along the eastern African coast, throughout Southeast Asia, Japan, Tahiti, and Australia. Both species are found in marine and freshwater habitats, usually ascending rivers to access freshwater marshes. They are able to survive in brackish water, waters of varying pH, and habitats with low dissolved content due to their swim bladders, which they use primarily to breathe. They can also rise to the surface and take gulps of air, giving them a short burst of energy. The habitats of tarpon vary greatly with their developmental stages. Stage-one larvae are usually found in clear, warm, oceanic waters, relatively close to the surface. Stage-two and -three larvae are found in salt marshes, tidal pools, creeks, and rivers. Their habitats are characteristically warm, shallow, dark bodies of water with sandy mud bottoms. Tarpon commonly ascend rivers into fresh water. As they progress from the juvenile stage to adulthood, they return to the ocean's open waters, though many remain in freshwater habitats. Fossil species Fossils of this genus go back to the Cretaceous during the Albian stage 113.0 million years ago (Mya). M. priscus (Woodward 1901): A species from the Ypresian stage of the Eocene, 56–47 Mya. M. oblongus (Woodward 1901): A species also from the Ypresian stage of the Eocene, 56–47 Mya. It lived in England along with M. priscus. M. vigilax (Jordan 1927): A fossil species from California dating to the Miocene. Physical characteristics Tarpon grow to about long and weigh . They have dorsal and anal soft rays and bluish or greenish backs. Tarpons possess shiny, silvery scales that cover most of their bodies, excluding the head. They have large eyes with adipose eyelids and broad mouths with prominent lower jaws that jut out farther than the rest of the face. Reproduction and lifecycle Tarpon breed offshore in warm, isolated areas. Females have high fecundity and can lay up to 12 million eggs at once. They reach sexual maturity once they are about in length. Spawning usually occurs in late spring to early summer. Their three distinct levels of development usually occur in varying habitats. Stage one, or the leptocephalus stage, is completed after 20–30 days. It occurs in clear, warm oceanic waters, usually within 10–20 m of the surface. The leptocephalus shrinks as it develops into a larva; the most shrunken larva, stage two, develops by day 70. This is due to a negative growth phase followed by a sluggish growth phase. By day 70, the juvenile growth phase (stage three) begins, and the fish grows rapidly until sexual maturity. Diet Stage-one developing tarpon do not forage for food but instead absorb nutrients from seawater using integumentary absorption. Stage-two and -three juveniles feed primarily on zooplankton, insects, and small fish. As they progress in juvenile development, especially those developing in freshwater environments, their consumption of insects, fish, crabs, and grass shrimp increases. 
Adults are strictly carnivorous and feed on midwater prey; they hunt nocturnally and swallow their food whole. Predation The main predators of Megalops during stage-one and early stage-two development are other fish, depending on their size. Juveniles are subject to predation by other juvenile Megalops and piscivorous birds. They are especially vulnerable to birds such as ospreys or other raptors when they come to the surface for air due to the rolling manner in which they move to take in air, as well as the silver scales lining their sides. Adults occasionally fall prey to sharks, porpoises, crocodiles, and alligators. Swim bladder One of the unique features of Megalops is the swim bladder, which, in addition to controlling the buoyancy, can be used as an accessory respiratory organ. It arises dorsally from the posterior pharynx, and the respiratory surface is coated with blood capillaries with a thin epithelium over the top. This is the basis of the alveolar tissue found in the swim bladder and is believed to be one of the primary methods by which Megalops "breathes". This trait is essential due to the mangrove and marsh ecosystems the fish use as nursery habitats, which often have stagnant waters low in oxygen. The young fish will also ride the water into remote semi-landlocked ponds during storms and king tides, where they will stay from one to three years. These ponds, some of which are brackish or freshwater, often become so low in oxygen that tarpons and snooks are the only fish able to survive in these environments. The juveniles therefore face fewer competitors and predators, but need to breathe atmospheric oxygen to survive. The ability to breathe air is retained in the adults. Even if they live in more oxygenated marine coastal habitats, they have high rates of aerobic metabolism and also occasionally occur in hypoxic waters. These fish are obligate air breathers and will die without sufficient access to the surface. Gas exchange occurs at the surface through a rolling motion commonly associated with tarpon sightings. This "breathing" is believed to be mediated by visual cues, and the frequency of breathing is inversely correlated to the dissolved content of the water in which they live. Megalops and humans Tarpon are considered some of the greatest saltwater game fishes, prized not only because of their great size but also because of their fight and spectacular leaping ability. After the International Game Fish Association took responsibility for fly fishing records in salt water (1978), fly fishing for tarpon became increasingly popular, despite declining populations (correlated with the decline of freshwater rivers flowing into the seas around Florida.) Tarpon meat is not desirable, so most are released after being caught. Numerous tournaments are focused on catching tarpon. The Atlantic tarpon adapts well to water bodies in urban and suburban environments due to their tolerance for boat traffic and low water quality. Around humans, Atlantic tarpon are primarily nocturnal. Geographical distribution and migration Since tarpon are not commercially valuable as a food fish, very little has been documented concerning their geographical distribution and migrations. They inhabit both sides of the Atlantic Ocean, and their range in the eastern Atlantic has been reliably established from Senegal to the Congo. Tarpon inhabiting the western Atlantic are principally found to populate warmer coastal waters, primarily in the Caribbean, Gulf of Mexico, Florida, and the West Indies. 
Nonetheless, tarpon are regularly caught by anglers at Cape Hatteras and as far north as Nova Scotia, Bermuda, and south to Argentina. Scientific studies indicate that schools of tarpon have routinely migrated through the Panama Canal from the Atlantic to the Pacific and back for over 70 years. However, they have not been found to breed in the Pacific Ocean. Nevertheless, anecdotal evidence from tarpon fishing guides and anglers would tend to validate this notion, as over the last 60 years, many small juvenile tarpon as well as mature giants have been caught and documented principally on the Pacific side of Panama at the Bayano River, the Gulf of San Miguel and its tributaries, Coiba Island in the Gulf of Chiriquí, and Piñas Bay in the Gulf of Panama. Since tarpon tolerate wide ranges of salinity throughout their lives and eat almost anything dead or alive, their migrations seemingly are only limited by water temperatures. Tarpon prefer water temperatures of ; below they become inactive, and temperatures under can be lethal.
Biology and health sciences
Anguilliformes
Animals
23215561
https://en.wikipedia.org/wiki/Gram%20per%20cubic%20centimetre
Gram per cubic centimetre
The gram per cubic centimetre is a unit of density in the CGS system, and is commonly used in chemistry. It is defined by dividing the CGS unit of mass, the gram, by the CGS unit of volume, the cubic centimetre. The official SI symbols are g/cm3, g·cm−3, or g cm−3. It is equivalent to the units gram per millilitre (g/mL) and kilogram per litre (kg/L). The density of water is about 1 g/cm3, since the gram was originally defined as the mass of one cubic centimetre of water at its maximum density at . Conversions 1 g/cm3 is equivalent to: = 1000 g/L (exactly) = 1000 kg/m3 (exactly) ≈ (approximately) ≈ (approximately) 1 kg/m3 = 0.001 g/cm3(exactly) 1 lb/cu ft ≈ (approximately) 1 oz/US gal ≈ (approximately)
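To make the conversion factors above concrete, here is a minimal Python sketch; the pound and foot values used are the standard international definitions, stated here as assumptions rather than taken from the table.

    # 1 g/cm^3 = 1000 kg/m^3 = 1 kg/L exactly.
    POUND_G = 453.59237            # grams per avoirdupois pound (exact definition)
    CUBIC_FOOT_CM3 = 30.48 ** 3    # cubic centimetres per cubic foot (exact definition)

    def g_per_cm3_to_kg_per_m3(rho_g_cm3):
        return rho_g_cm3 * 1000.0

    def lb_per_cuft_to_g_per_cm3(rho_lb_cuft):
        return rho_lb_cuft * POUND_G / CUBIC_FOOT_CM3

    print(g_per_cm3_to_kg_per_m3(1.0))    # 1000.0 kg/m^3
    print(lb_per_cuft_to_g_per_cm3(1.0))  # ~0.01602 g/cm^3 per lb/cu ft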
Physical sciences
Density
Basics and measurement
29320146
https://en.wikipedia.org/wiki/Event%20horizon
Event horizon
In astrophysics, an event horizon is a boundary beyond which events cannot affect an outside observer. Wolfgang Rindler coined the term in the 1950s. In 1784, John Michell proposed that gravity can be strong enough in the vicinity of massive compact objects that even light cannot escape. At that time, the Newtonian theory of gravitation and the so-called corpuscular theory of light were dominant. In these theories, if the escape velocity of the gravitational influence of a massive object exceeds the speed of light, then light originating inside or from it can escape temporarily but will return. In 1958, David Finkelstein used general relativity to introduce a stricter definition of a local black hole event horizon as a boundary beyond which events of any kind cannot affect an outside observer, leading to information and firewall paradoxes, encouraging the re-examination of the concept of local event horizons and the notion of black holes. Several theories were subsequently developed, some with and some without event horizons. One of the leading developers of theories to describe black holes, Stephen Hawking, suggested that an apparent horizon should be used instead of an event horizon, saying, "Gravitational collapse produces apparent horizons but no event horizons." He eventually concluded that "the absence of event horizons means that there are no black holes – in the sense of regimes from which light can't escape to infinity." Any object approaching the horizon from the observer's side appears to slow down, never quite crossing the horizon. Due to gravitational redshift, its image reddens over time as the object moves closer to the horizon. In an expanding universe, the speed of expansion reaches, and even exceeds, the speed of light, preventing signals from traveling to some regions. A cosmic event horizon is a real event horizon because it affects all kinds of signals, including gravitational waves, which travel at the speed of light. More specific horizon types include the related but distinct absolute and apparent horizons found around a black hole. Other distinct types include: The Cauchy and Killing horizons. The photon spheres and ergospheres of the Kerr solution. Particle and cosmological horizons relevant to cosmology. Isolated and dynamical horizons, which are important in current black hole research. Cosmic event horizon In cosmology, the event horizon of the observable universe is the largest comoving distance from which light emitted now can ever reach the observer in the future. This differs from the concept of the particle horizon, which represents the largest comoving distance from which light emitted in the past could reach the observer at a given time. For events that occur beyond that distance, light has not had enough time to reach our location, even if it was emitted at the time the universe began. The evolution of the particle horizon with time depends on the nature of the expansion of the universe. If the expansion has certain characteristics, parts of the universe will never be observable, no matter how long the observer waits for the light from those regions to arrive. The boundary beyond which events cannot ever be observed is an event horizon, and it represents the maximum extent of the particle horizon. The criterion for determining whether a particle horizon for the universe exists is as follows. Define a comoving distance dp as dp = c ∫ dt / a(t), with the integral taken from 0 to t0. In this equation, a is the scale factor, c is the speed of light, and t0 is the age of the Universe.
If dp grows without bound as t0 is taken to infinity (i.e., points arbitrarily far away can eventually be observed), then no event horizon exists. If dp instead converges to a finite value, a horizon is present. Examples of cosmological models without an event horizon are universes dominated by matter or by radiation. An example of a cosmological model with an event horizon is a universe dominated by the cosmological constant (a de Sitter universe). A calculation of the speeds of the cosmological event and particle horizons was given in a paper on the FLRW cosmological model, approximating the Universe as composed of non-interacting constituents, each one being a perfect fluid. Apparent horizon of an accelerated particle If a particle is moving at a constant velocity in a non-expanding universe free of gravitational fields, any event that occurs in that Universe will eventually be observable by the particle, because the forward light cones from these events intersect the particle's world line. On the other hand, if the particle is accelerating, in some situations light cones from some events never intersect the particle's world line. Under these conditions, an apparent horizon is present in the particle's (accelerating) reference frame, representing a boundary beyond which events are unobservable. For example, this occurs with a uniformly accelerated particle. A spacetime diagram of this situation is shown in the figure to the right. As the particle accelerates, it approaches, but never reaches, the speed of light with respect to its original reference frame. On the spacetime diagram, its path is a hyperbola, which asymptotically approaches a 45-degree line (the path of a light ray). An event whose light cone's edge is this asymptote or is farther away than this asymptote can never be observed by the accelerating particle. In the particle's reference frame, there is a boundary behind it from which no signals can escape (an apparent horizon). The distance to this boundary is given by c²/a, where a is the constant proper acceleration of the particle. While approximations of this type of situation can occur in the real world (in particle accelerators, for example), a true event horizon is never present, as this requires the particle to be accelerated indefinitely (requiring arbitrarily large amounts of energy and an arbitrarily large apparatus). Interacting with a cosmic horizon In the case of a horizon perceived by a uniformly accelerating observer in empty space, the horizon seems to remain a fixed distance from the observer no matter how its surroundings move. Varying the observer's acceleration may cause the horizon to appear to move over time or may prevent an event horizon from existing, depending on the acceleration function chosen. The observer never touches the horizon and never passes a location where it appeared to be. In the case of a horizon perceived by an occupant of a de Sitter universe, the horizon always appears to be a fixed distance away for a non-accelerating observer. It is never contacted, even by an accelerating observer. Event horizon of a black hole One of the best-known examples of an event horizon derives from general relativity's description of a black hole, a celestial object so dense that no nearby matter or radiation can escape its gravitational field. Often, this is described as the boundary within which the black hole's escape velocity is greater than the speed of light.
However, a more detailed description is that within this horizon, all lightlike paths (paths that light could take), and hence all paths in the forward light cones of particles within the horizon, are warped so as to fall farther into the hole. Once a particle is inside the horizon, moving into the hole is as inevitable as moving forward in time – no matter in what direction the particle is travelling – and can be thought of as equivalent to doing so, depending on the spacetime coordinate system used. The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body that fits inside this radius (although a rotating black hole operates slightly differently). The Schwarzschild radius of an object is proportional to its mass. Theoretically, any amount of matter will become a black hole if compressed into a space that fits within its corresponding Schwarzschild radius. For the mass of the Sun, this radius is approximately ; for Earth, it is about . In practice, however, neither Earth nor the Sun has the necessary mass (and, therefore, the necessary gravitational force) to overcome electron and neutron degeneracy pressure. The minimal mass required for a star to collapse beyond these pressures is the Tolman–Oppenheimer–Volkoff limit, which is approximately three solar masses. According to the fundamental gravitational collapse models, an event horizon forms before the singularity of a black hole. If all the stars in the Milky Way were gradually to aggregate towards the galactic center while keeping their proportionate distances from each other, they would all fall within their joint Schwarzschild radius long before they were forced to collide. Up until the collapse in the far future, observers in a galaxy surrounded by an event horizon would proceed with their lives normally. Black hole event horizons are widely misunderstood. Common, although erroneous, is the notion that black holes "vacuum up" material in their neighborhood, when in fact they are no more capable of seeking out material to consume than any other gravitational attractor. As with any mass in the universe, matter must come within its gravitational scope for the possibility to exist of capture or consolidation with any other mass. Equally common is the idea that matter can be observed falling into a black hole. This is not possible. Astronomers can detect only accretion disks around black holes, where material moves with such speed that friction creates high-energy radiation that can be detected (similarly, some matter from these accretion disks is forced out along the axis of spin of the black hole, creating visible jets when these streams interact with matter such as interstellar gas or when they happen to be aimed directly at Earth). Furthermore, a distant observer will never actually see something reach the horizon. Instead, while approaching the hole, the object will seem to go ever more slowly, while any light it emits will be further and further redshifted. Topologically, the event horizon is defined from the causal structure as the past null cone of future conformal timelike infinity. A black hole event horizon is teleological in nature, meaning that it is determined by future causes. More precisely, one would need to know the entire history of the universe and all the way into the infinite future to determine the presence of an event horizon, which is not possible for quasilocal observers (not even in principle).
In other words, there is no experiment and/or measurement that can be performed within a finite-size region of spacetime and within a finite time interval that answers the question of whether or not an event horizon exists. Because of the purely theoretical nature of the event horizon, the traveling object does not necessarily experience strange effects and does, in fact, pass through the calculated boundary in a finite amount of its proper time. Interacting with black hole horizons A misconception concerning event horizons, especially black hole event horizons, is that they represent an immutable surface that destroys objects that approach them. In practice, all event horizons appear to be some distance away from any observer, and objects sent towards an event horizon never appear to cross it from the sending observer's point of view (as the horizon-crossing event's light cone never intersects the observer's world line). Attempting to make an object near the horizon remain stationary with respect to an observer requires applying a force whose magnitude increases unboundedly (becoming infinite) the closer it gets. In the case of the horizon around a black hole, observers stationary with respect to a distant object will all agree on where the horizon is. While this seems to allow an observer lowered towards the hole on a rope (or rod) to contact the horizon, in practice this cannot be done. The proper distance to the horizon is finite, so the length of rope needed would be finite as well, but if the rope were lowered slowly (so that each point on the rope was approximately at rest in Schwarzschild coordinates), the proper acceleration (G-force) experienced by points on the rope closer and closer to the horizon would approach infinity, so the rope would be torn apart. If the rope is lowered quickly (perhaps even in freefall), then indeed the observer at the bottom of the rope can touch and even cross the event horizon. But once this happens it is impossible to pull the bottom of the rope back out of the event horizon, since if the rope is pulled taut, the forces along the rope increase without bound as they approach the event horizon and at some point the rope must break. Furthermore, the break must occur not at the event horizon, but at a point where the second observer can observe it. Assuming that the possible apparent horizon is far inside the event horizon, or there is none, observers crossing a black hole event horizon would not actually see or feel anything special happen at that moment. In terms of visual appearance, observers who fall into the hole perceive the eventual apparent horizon as a black impermeable area enclosing the singularity. Other objects that had entered the horizon area along the same radial path but at an earlier time would appear below the observer as long as they have not yet entered the apparent horizon, and they could exchange messages. Increasing tidal forces are also locally noticeable effects, with their strength depending on the mass of the black hole. In realistic stellar black holes, spaghettification occurs early: tidal forces tear materials apart well before the event horizon. However, in supermassive black holes, which are found in centers of galaxies, spaghettification occurs inside the event horizon. A human astronaut would survive the fall through an event horizon only in a black hole with a mass of approximately 10,000 solar masses or greater.
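The statement earlier in the article that the Schwarzschild radius is proportional to mass can be made concrete with a short calculation. The sketch below uses round-number physical constants (assumed values, not taken from the article) and reproduces the familiar figures of roughly 3 km for the Sun and 9 mm for Earth.

    # Schwarzschild radius r_s = 2 G M / c^2, linear in the mass M.
    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8       # speed of light, m/s

    def schwarzschild_radius_m(mass_kg):
        return 2.0 * G * mass_kg / C ** 2

    SUN_KG = 1.989e30
    EARTH_KG = 5.972e24
    print(f"Sun:   {schwarzschild_radius_m(SUN_KG) / 1000:.2f} km")    # ~2.95 km
    print(f"Earth: {schwarzschild_radius_m(EARTH_KG) * 1000:.2f} mm")  # ~8.9 mm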
Beyond general relativity A cosmic event horizon is commonly accepted as a real event horizon, whereas the description of a local black hole event horizon given by general relativity is found to be incomplete and controversial. When the conditions under which local event horizons occur are modeled using a more comprehensive picture of the way the Universe works, that includes both relativity and quantum mechanics, local event horizons are expected to have properties that are different from those predicted using general relativity alone. At present, it is expected by the Hawking radiation mechanism that the primary impact of quantum effects is for event horizons to possess a temperature and so emit radiation. For black holes, this manifests as Hawking radiation, and the larger question of how the black hole possesses a temperature is part of the topic of black hole thermodynamics. For accelerating particles, this manifests as the Unruh effect, which causes space around the particle to appear to be filled with matter and radiation. According to the controversial black hole firewall hypothesis, matter falling into a black hole would be burned to a crisp by a high energy "firewall" at the event horizon. An alternative is provided by the complementarity principle, according to which, in the chart of the far observer, infalling matter is thermalized at the horizon and reemitted as Hawking radiation, while in the chart of an infalling observer matter continues undisturbed through the inner region and is destroyed at the singularity. This hypothesis does not violate the no-cloning theorem as there is a single copy of the information according to any given observer. Black hole complementarity is actually suggested by the scaling laws of strings approaching the event horizon, suggesting that in the Schwarzschild chart they stretch to cover the horizon and thermalize into a Planck length-thick membrane. A complete description of local event horizons generated by gravity is expected to, at minimum, require a theory of quantum gravity. One such candidate theory is M-theory. Another such candidate theory is loop quantum gravity.
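Returning to the comoving-distance criterion from the cosmic event horizon section, the sketch below numerically evaluates the integral of c dt / a(t) for two toy expansion histories: a matter-dominated scale factor, for which the integral keeps growing (no event horizon), and a de Sitter scale factor, for which it converges (an event horizon exists). The scale factors, the units, and the choice H = 1 are illustrative assumptions, not values from the article.

    import math

    def comoving_integral(a_of_t, t_start, t_end, steps=50_000):
        """Midpoint approximation of the integral of dt / a(t), with c set to 1."""
        dt = (t_end - t_start) / steps
        return sum(dt / a_of_t(t_start + (i + 0.5) * dt) for i in range(steps))

    matter_dominated = lambda t: t ** (2.0 / 3.0)   # a(t) ~ t^(2/3): integral diverges
    de_sitter = lambda t: math.exp(t)               # a(t) ~ e^(H t), H = 1: integral converges

    for t_end in (10.0, 100.0, 1000.0):
        print(t_end,
              round(comoving_integral(matter_dominated, 1.0, t_end), 3),
              round(comoving_integral(de_sitter, 1.0, t_end), 3))
    # The first column of results keeps growing (roughly 3 * t_end**(1/3)), while the
    # second levels off near 1/e, signalling the presence of a cosmic event horizon.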
Physical sciences
Basics_3
null
29322627
https://en.wikipedia.org/wiki/Completeness%20of%20the%20real%20numbers
Completeness of the real numbers
Completeness is a property of the real numbers that, intuitively, implies that there are no "gaps" (in Dedekind's terminology) or "missing points" in the real number line. This contrasts with the rational numbers, whose corresponding number line has a "gap" at each irrational value. In the decimal number system, completeness is equivalent to the statement that any infinite string of decimal digits is actually a decimal representation for some real number. Depending on the construction of the real numbers used, completeness may take the form of an axiom (the completeness axiom), or may be a theorem proven from the construction. There are many equivalent forms of completeness, the most prominent being Dedekind completeness and Cauchy completeness (completeness as a metric space). Forms of completeness The real numbers can be defined synthetically as an ordered field satisfying some version of the completeness axiom. Different versions of this axiom are all equivalent in the sense that any ordered field that satisfies one form of completeness satisfies all of them, apart from Cauchy completeness and the nested intervals theorem, which are strictly weaker in that there are non-Archimedean ordered fields that are Cauchy complete. When the real numbers are instead constructed using a model, completeness becomes a theorem or collection of theorems. Least upper bound property The least-upper-bound property states that every nonempty subset of real numbers having an upper bound (or bounded above) must have a least upper bound (or supremum) in the set of real numbers. The rational number line Q does not have the least upper bound property. An example is the subset of rational numbers S = {x ∈ Q : x² < 2}. This set has an upper bound. However, this set has no least upper bound in Q: the least upper bound as a subset of the reals would be √2, but it does not exist in Q. For any upper bound x ∈ Q, there is another upper bound y ∈ Q with y < x. For instance, take x = 1.5; then x is certainly an upper bound of S, since x is positive and x² = 2.25 > 2; that is, no element of S is larger than x. However, we can choose a smaller upper bound, say y = 1.45; this is also an upper bound of S for the same reasons, but it is smaller than x, so x is not a least-upper-bound of S. We can proceed similarly to find an upper bound of S that is smaller than y, say z = 1.42, etc., such that we never find a least-upper-bound of S in Q. The least upper bound property can be generalized to the setting of partially ordered sets. See completeness (order theory). Dedekind completeness Dedekind completeness is the property that every Dedekind cut of the real numbers is generated by a real number. In a synthetic approach to the real numbers, this is the version of completeness that is most often included as an axiom. The rational number line Q is not Dedekind complete. An example is the Dedekind cut L = {x ∈ Q : x ≤ 0 or x² < 2}, R = {x ∈ Q : x > 0 and x² ≥ 2}. L does not have a maximum and R does not have a minimum, so this cut is not generated by a rational number. There is a construction of the real numbers based on the idea of using Dedekind cuts of rational numbers to name real numbers; e.g. the cut (L, R) described above would name √2. If one were to repeat the construction of real numbers with Dedekind cuts (i.e., "close" the set of real numbers by adding all possible Dedekind cuts), one would obtain no additional numbers because the real numbers are already Dedekind complete. Cauchy completeness Cauchy completeness is the statement that every Cauchy sequence of real numbers converges to a real number. The rational number line Q is not Cauchy complete.
An example is the following sequence of rational numbers: 3, 3.1, 3.14, 3.141, 3.1415, ... Here the nth term in the sequence is the nth decimal approximation for pi. Though this is a Cauchy sequence of rational numbers, it does not converge to any rational number. (In the real number line, this sequence converges to pi.) Cauchy completeness is related to the construction of the real numbers using Cauchy sequences. Essentially, this method defines a real number to be the limit of a Cauchy sequence of rational numbers. In mathematical analysis, Cauchy completeness can be generalized to a notion of completeness for any metric space. See complete metric space. For an ordered field, Cauchy completeness is weaker than the other forms of completeness on this page. But Cauchy completeness and the Archimedean property taken together are equivalent to the others. Nested intervals theorem The nested interval theorem is another form of completeness. Let I_n = [a_n, b_n] be a sequence of closed intervals, and suppose that these intervals are nested in the sense that I_1 ⊇ I_2 ⊇ I_3 ⊇ ⋯. Moreover, assume that b_n − a_n → 0 as n → ∞. The nested interval theorem states that the intersection of all of the intervals contains exactly one point. The rational number line does not satisfy the nested interval theorem. For example, the sequence [3, 4] ⊇ [3.1, 3.2] ⊇ [3.14, 3.15] ⊇ [3.141, 3.142] ⊇ ⋯ (whose terms are derived from the digits of pi in the suggested way) is a nested sequence of closed intervals in the rational numbers whose intersection is empty. (In the real numbers, the intersection of these intervals contains the number pi.) The nested intervals theorem shares the same logical status as Cauchy completeness in this spectrum of expressions of completeness. In other words, the nested intervals theorem by itself is weaker than other forms of completeness, although taken together with the Archimedean property, it is equivalent to the others. The open induction principle The open induction principle states that a non-empty open subset S of the interval (a, b] must be equal to the entire interval if, for any x ∈ (a, b], (a, x) ⊆ S implies (a, x] ⊆ S. The open induction principle can be shown to be equivalent to Dedekind completeness for arbitrary ordered sets under the order topology, using proofs by contradiction. In weaker foundations such as in constructive analysis, where the law of the excluded middle does not hold, the full form of the least upper bound property fails for the Dedekind reals, while the open induction property remains true in most models (following from Brouwer's bar theorem) and is strong enough to give short proofs of key theorems. Monotone convergence theorem The monotone convergence theorem (described as the fundamental axiom of analysis by Körner) states that every nondecreasing, bounded sequence of real numbers converges. This can be viewed as a special case of the least upper bound property, but it can also be used fairly directly to prove the Cauchy completeness of the real numbers. Bolzano–Weierstrass theorem The Bolzano–Weierstrass theorem states that every bounded sequence of real numbers has a convergent subsequence. Again, this theorem is equivalent to the other forms of completeness given above. The intermediate value theorem The intermediate value theorem states that every continuous function that attains both negative and positive values has a root. This is a consequence of the least upper bound property, but it can also be used to prove the least upper bound property if treated as an axiom.
(The definition of continuity does not depend on any form of completeness, so there is no circularity: what is meant is that the intermediate value theorem and the least upper bound property are equivalent statements.)
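The examples above (the shrinking rational upper bounds for the set bounded by √2, and the nested intervals built from the decimal expansion of pi) lend themselves to a short numerical illustration. The following Python sketch is purely illustrative: it uses exact rational arithmetic, and the truncated decimal expansion of pi is a fixed string chosen for the example.

    from fractions import Fraction

    # 1. Least upper bound: rational upper bounds of S = {x in Q : x^2 < 2} can be
    #    made smaller and smaller, but never reach a least one in Q.
    def is_upper_bound(q):
        return q > 0 and q * q >= 2          # q bounds S exactly when q >= sqrt(2)

    lower, upper = Fraction(1), Fraction(3, 2)
    for _ in range(6):
        mid = (lower + upper) / 2
        if is_upper_bound(mid):
            upper = mid                       # a strictly smaller rational upper bound
        else:
            lower = mid
        print("upper bound:", upper, float(upper))

    # 2. Cauchy sequence and nested intervals from the decimal expansion of pi.
    PI_DIGITS = "3.14159265358979"            # fixed truncated expansion, for illustration
    for n in range(6):
        a_n = Fraction(int(PI_DIGITS[:2 + n].replace(".", "")), 10 ** n)
        b_n = a_n + Fraction(1, 10 ** n)
        print("term:", a_n, "interval:", (a_n, b_n))
    # The terms 3, 3.1, 3.14, ... form a Cauchy sequence with no rational limit, and
    # the intervals [3, 4], [3.1, 3.2], ... have empty intersection within Q; in the
    # reals the sequence converges to pi and the intervals intersect exactly in pi.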
Mathematics
Real analysis
null
30826411
https://en.wikipedia.org/wiki/Walnut
Walnut
A walnut is the edible seed of any tree of the genus Juglans (family Juglandaceae), particularly the Persian or English walnut, Juglans regia. They are accessory fruit because the outer covering of the fruit is technically an involucre and thus not morphologically part of the carpel; this means it cannot be a drupe but is instead a drupe-like nut. After full ripening, the shell is discarded, and the kernel is eaten. Nuts of the eastern black walnut (Juglans nigra) and butternuts (Juglans cinerea) are less commonly consumed. Description Walnuts are the round, single-seed stone fruits of the walnut tree. They ripen between September and November in the northern hemisphere. The brown, wrinkly walnut shell is enclosed in a husk. Shells of walnuts available in commerce usually have two segments (but three or four-segment shells can also form). During the ripening process, the husk becomes brittle and the shell hard. The shell encloses the kernel or meat, which is usually in two halves separated by a membranous partition. The seed kernels – commonly available as shelled walnuts – are enclosed in a brown seed coat which contains antioxidants. The antioxidants protect the oil-rich seed from atmospheric oxygen, preventing rancidity. Walnut trees are late to grow leaves, typically not doing so until more than halfway through the spring. Chemistry Walnut hulls contain diverse phytochemicals, such as polyphenols, that stain hands and can cause skin irritation. Seven phenolic compounds, including ferulic acid, vanillic acid, coumaric acid, syringic acid, myricetin, and juglone, were identified in walnut husks; juglone had concentrations of 2-4% fresh weight. Walnuts also contain the ellagitannin, pedunculagin. Regiolone has been isolated with juglone, betulinic acid and sitosterol from the stem bark of J. regia. Species The three species of walnuts most commonly grown for their seeds are the Persian (or English) walnut (J. regia), originating from Iran, the black walnut (J. nigra) – native to eastern North America – and the Japanese walnut, also known as the heartnut (J. ailantifolia). Other species include J. californica, the California black walnut (often used as a rootstock for commercial propagation of J. regia), J. cinerea (butternuts), and J. major, the Arizona walnut. Other sources list J. californica californica as native to southern California, and Juglans californica hindsii, or just J. hindsii, as native to northern California; in at least one case, these are given as "geographic variants" instead of subspecies (Botanica). Numerous walnut cultivars have been developed commercially, which are nearly all hybrids of the English walnut. Cultivation History During the Byzantine era, the walnut was also known by the name "royal nut". An article on walnut tree cultivation in Spain is included in Ibn al-'Awwam's 12th-century Book on Agriculture. The wal element in the name is Germanic and means foreign, especially in the sense of Latin or non-Germanic. Compare, for example, Wales, Walloons, Wallachia. The wal element is present in other Germanic-language words for the same nut, such as: German , Dutch , Danish , and Swedish . Storage Walnuts, like other tree nuts, must be processed and stored properly. Poor storage makes walnuts susceptible to insect and fungal mold infestations; the latter produces aflatoxin – a potent carcinogen. A batch that contains mold-infested walnuts should be entirely discarded. 
The ideal temperature for the extended storage of walnuts is with low humidity for industrial and home storage. However, such refrigeration technologies are unavailable in developing countries where walnuts are produced in large quantities; walnuts are best stored below with low humidity. Temperatures above and humidity levels above 70 percent can lead to rapid and high spoilage losses. Above 75 percent humidity threshold, fungal molds that release aflatoxin can form. Cultivars Production In 2022, world production of walnuts (in shell) was 3.9 million tonnes, with China contributing 36% of the total (table). Other significant producers (in the order of decreasing harvest) were the United States, Iran, and Turkey. Nutrition English walnuts without shells are 4% water, 15% protein, 65% fat, and 14% carbohydrates, including 7% dietary fiber (table). In a reference amount of , walnuts provide and rich contents (20% or more of the Daily Value, DV) of several dietary minerals, particularly manganese at 148% DV, along with significant amounts of B vitamins (table). Unlike most nuts, which are high in monounsaturated fatty acids, walnut oil is composed largely of polyunsaturated fatty acids (72% of total fats), particularly alpha-linolenic acid (14%) and linoleic acid (58%), although it does contain oleic acid as 13% of total fats (table source). Health claims In 2004, the US Food and Drug Administration (FDA) provided a qualified health claim allowing products containing walnuts to state: "Supportive but not conclusive research shows that eating per day of walnuts, as part of a low saturated fat and low cholesterol diet and not resulting in increased caloric intake, may reduce the risk of coronary heart disease." At the same time, the agency refused to authorize the claim that "Diets including walnuts can reduce the risk of heart disease" and in 2010, it sent a warning letter to Diamond Foods stating there is "not sufficient evidence to identify a biologically active substance in walnuts that reduces the risk of coronary heart disease." In 2011, a scientific panel for the European Food Safety Authority recommended a health claim that "Walnuts contribute to the improvement of endothelium-dependent vasodilation" at a daily intake of ; it also found that a cause and effect relationship did not exist between consuming walnuts and reduction of blood LDL-cholesterol levels. The recommended health claim was later authorized by the European Commission. Research A 2020 systematic review assessing the effect of walnut supplementation on blood pressure found insufficient evidence to support walnut consumption as a blood pressure-lowering strategy. , the relationship between walnut consumption and cognitive health is inconclusive. Uses Culinary Walnut meats are available in two forms: in their shells or de-shelled. Due to processing, the meats may be whole, halved, or in smaller portions. All walnuts can be eaten on their own (raw, toasted, or pickled), or as part of a mix such as muesli, or as an ingredient of a dish: e.g. walnut soup, walnut pie, walnut coffee cake, banana cake, brownie, fudge. Walnuts are often candied or pickled. Pickled walnuts that are the whole fruit can be savory or sweet depending on the preserving solution. Walnuts may be used as an ingredient in other foodstuffs. 
Walnut is an important ingredient in baklava, Circassian chicken, potica (a traditional festive pastry from Slovenia), satsivi (chicken in walnut sauce), tarator (a summer soup in Bulgarian cuisine), and poultry or meatball stew from Iranian cuisine. Walnuts are also popular as an ice cream topping, and walnut pieces are used as a garnish on some foods. Nocino is a liqueur made from unripe green walnuts steeped in alcohol with syrup added. Walnut oil is available commercially and is chiefly used as a food ingredient, particularly in salad dressings. It has a low smoke point, which limits its use for frying. Inks and dyes Walnut husks can be used to make a durable ink for writing and drawing, thought to have been used by artists including Leonardo da Vinci and Rembrandt. Walnut husk pigments are used as a brown dye for fabric and were used in classical Rome and medieval Europe for dyeing hair. Cleaning The US Army once used ground walnut shells for abrasive blasting to clean aviation parts because of their low cost and low abrasiveness. However, an investigation of a fatal Boeing CH-47 Chinook helicopter crash (11 September 1982, in Mannheim, Germany) revealed that walnut shell grit had clogged an oil port, leading to the accident and to the discontinuation of walnut shells as a cleaning agent. Commercially, crushed walnut shells are still used outside of aviation for low-abrasive, less-toxic cleaning and blasting applications. In the oil and gas industry, deep-bed filters of ground walnut shell are used for "polishing" (filtering) oily contaminants from water. Cat litter At least two companies, LitterMaid and Naturally Fresh, make cat litter from ground walnut shells. Advantages cited over conventional clay litter include the environmental sustainability of using what would otherwise be a waste product, superior natural biodegradability, and odor control as good as or better than clay litter. Disadvantages include the possibility of allergic reactions among humans and cats. Folk medicine Walnuts have been listed as one of the 38 substances used to prepare Bach flower remedies, a herbal preparation promoted in folk medicine for its supposed effects on health. According to Cancer Research UK, "there is no scientific evidence to prove that flower remedies can control, cure or prevent any type of disease, including cancer". In culture Large, symmetrically shaped, and sometimes intricately carved walnut shells (mainly from J. hopeiensis) are valued collectibles in China, where they are rotated in the hand as a plaything or kept as decoration. They are also an investment and status symbol, with some carvings having high monetary value if unique. Pairs of walnuts are sometimes sold in their green husks for a form of gambling known as du qing pi.
Biology and health sciences
Fagales
null
27973777
https://en.wikipedia.org/wiki/ISCC%E2%80%93NBS%20system
ISCC–NBS system
The ISCC–NBS System of Color Designation is a system for naming colors based on a set of 13 basic color terms and a small set of adjective modifiers. It was first established in the 1930s by a joint effort of the Inter-Society Color Council (ISCC), made up of delegates from various American trade organizations, and the National Bureau of Standards (NBS), a US government agency. As suggested in 1932 by the first chairman of the ISCC, the system's goal is to be "a means of designating colors in the United States Pharmacopoeia, in the National Formulary, and in general literature ... such designation to be sufficiently standardized as to be acceptable and usable by science, sufficiently broad to be appreciated and used by science, art, and industry, and sufficiently commonplace to be understood, at least in a general way, by the whole public." The system aims to provide a basis on which color definitions in fields from fashion and printing to botany and geology can be systematized and regularized, so that each industry need not invent its own incompatible color system. In 1939, the system's approach was published in the Journal of Research of the National Bureau of Standards, and the ISCC formally approved the system, which consisted of a set of blocks within the color space defined by the Munsell color system as embodied by the Munsell Book of Color. Over the following decades, the ISCC–NBS system's boundaries were tweaked and its relation to various other color standards were defined, including for instance those for plastics, building materials, botany, paint, and soil. After the definition of the Munsell system was slightly altered by its 1943 renotations, the ISCC–NBS system was redefined in the 1950s in relation to the new Munsell coordinates. In 1955, the NBS published The Color Names Dictionary, which cross-referenced terms from several other color systems and dictionaries, relating them to the ISCC–NBS system and thereby to each other. In 1965, the NBS published Centroid Color Charts made up of color samples demonstrating the central color in each category, as a physical representation of the system usable by the public, and also published The Universal Color Language, a more general system for color designation with various degrees of precision from completely generic (13 broad categories) to extremely precise (numeric values from spectrophotometric measurement). In 1976, The Color Names Dictionary and The Universal Color Language were combined and updated with the publication of Color: Universal Language and Dictionary of Names, the definitive source on the ISCC–NBS system. Color categories The backbone of the ISCC–NBS system is a set of 13 basic color categories, made up of 10 hue names and three neutral categories. This includes the 11 basic color terms defined by Berlin and Kay, plus olive and yellow green: Between these lie a further 16 intermediate categories: These categories can be further subdivided into 267 named categories by combining a hue name with modifiers (the example centroids shown here are for the hue name "purple"): However, not all modifiers apply to every hue name. For example, there is no brilliant brown or very deep pink. 
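In software, a designation scheme of this kind is naturally represented as a lookup over Munsell-style coordinates, with each named category owning one or more hue/value/chroma intervals (the "blocks" described in the next paragraph). The sketch below is illustrative only: the two block boundaries are invented for the example and are not the published ISCC–NBS limits.

# Illustrative only: the block boundaries below are made up and are NOT the
# published ISCC-NBS limits. Each named category owns one or more "blocks",
# i.e. intervals in Munsell hue (given here as an angle 0-100), value, and chroma.
BLOCKS = [
    # (category name, (hue_min, hue_max), (value_min, value_max), (chroma_min, chroma_max))
    ("vivid purple", (70.0, 80.0), (2.0, 7.0), (11.0, 50.0)),   # hypothetical block
    ("light purple", (70.0, 80.0), (7.0, 8.5), (5.0, 11.0)),    # hypothetical block
]

def designate(hue, value, chroma):
    """Return the name of the first block containing the given Munsell point."""
    for name, (h0, h1), (v0, v1), (c0, c1) in BLOCKS:
        if h0 <= hue < h1 and v0 <= value < v1 and c0 <= chroma < c1:
            return name
    return None

print(designate(75.0, 4.0, 20.0))   # -> "vivid purple" under these toy boundaries

Because the real blocks tile the Munsell solid without overlapping, such a lookup returns exactly one name for any in-gamut color once all 267 categories are encoded.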
Each of the 267 ISCC–NBS categories is defined by one or more "blocks" within the color solid of the Munsell color system, where each block includes colors falling in a specific interval in hue, value, and chroma, resulting in a shape which "might be called a sector of a right cylindrical annulus (like a piece of pie with the point bitten off)". The blocks fill the color solid, and are non-overlapping, so that every point falls into exactly one block. The Color Names Dictionary One of the primary original goals of the ISCC–NBS system was to relate several other common color systems and charts to a common frame of reference. To that end, in the late 1940s, the creators of the ISCC–NBS system measured several other significant color standards and charts, either spectrophotometrically or visually with reference to the Munsell Book of Color. Bibliography In chronological order: Deane B. Judd and Kenneth L. Kelly (1939). "Method of designating colors and a dictionary". Journal of Research of the National Bureau of Standards 23, p 355. RP1239. Dorothy Nickerson and Sidney Newhall (1941). "Central notations for ISCC–NBS color names". Journal of the Optical Society of America 31, p 587. Dorothy Nickerson and Sidney Newhall (1943). "A psychological color solid". Journal of the Optical Society of America 33, p 419. Kenneth L. Kelly and Deane B. Judd (1955). The ISCC–NBS method of designating colors and a dictionary of color names. NBS Circ. 553. Washington DC: US Government Printing Office. Kenneth L. Kelly (1965) "A Universal Color Language". Color engineering. 3(16). Inter-Society Color Council, National Bureau of Standards (1965). ISCC-NBS Color-name Charts Illustrated with Centroid Colors: (Supplement to NBS Circ. 533). Standard sample No. 2106. Kenneth L. Kelly and Deane B. Judd (1976). Color: Universal Language and Dictionary of Names. NBS Special Publication 440. Washington DC: US Department of Commerce.
Physical sciences
Basics
Physics
21732545
https://en.wikipedia.org/wiki/Aerospace%20engineering
Aerospace engineering
Aerospace engineering is the primary field of engineering concerned with the development of aircraft and spacecraft. It has two major and overlapping branches: aeronautical engineering and astronautical engineering. Avionics engineering is similar, but deals with the electronics side of aerospace engineering. "Aeronautical engineering" was the original term for the field. As flight technology advanced to include vehicles operating in outer space, the broader term "aerospace engineering" has come into use. Aerospace engineering, particularly the astronautics branch, is often colloquially referred to as "rocket science". Overview Flight vehicles are subjected to demanding conditions such as those caused by changes in atmospheric pressure and temperature, with structural loads applied upon vehicle components. Consequently, they are usually the products of various technological and engineering disciplines including aerodynamics, air propulsion, avionics, materials science, structural analysis and manufacturing. The interaction between these technologies is known as aerospace engineering. Because of the complexity and number of disciplines involved, aerospace engineering is carried out by teams of engineers, each having their own specialized area of expertise. History The origin of aerospace engineering can be traced back to the aviation pioneers around the late 19th to early 20th centuries, although the work of Sir George Cayley dates from the last decade of the 18th to the mid-19th century. One of the most important people in the history of aeronautics and a pioneer in aeronautical engineering, Cayley is credited as the first person to separate the forces of lift and drag, which affect any atmospheric flight vehicle. Early knowledge of aeronautical engineering was largely empirical, with some concepts and skills imported from other branches of engineering. Some key elements, like fluid dynamics, were understood by 18th-century scientists. In December 1903, the Wright Brothers performed the first sustained, controlled flight of a powered, heavier-than-air aircraft, lasting 12 seconds. The 1910s saw the development of aeronautical engineering through the design of World War I military aircraft. World War I In 1914, Robert Goddard was granted two U.S. patents for rockets using solid fuel, liquid fuel, multiple propellant charges, and multi-stage designs. This would set the stage for future applications in multi-stage propulsion systems for outer space. On March 3, 1915, the U.S. Congress established the first aeronautical research administration, known then as the National Advisory Committee for Aeronautics, or NACA. It was the first government-sponsored organization to support aviation research. Though intended as an advisory board upon inception, the Langley Aeronautical Laboratory became its first sponsored research and testing facility in 1920. Between World Wars I and II, great leaps were made in the field, accelerated by the advent of mainstream civil aviation. Notable airplanes of this era include the Curtiss JN 4, Farman F.60 Goliath, and Fokker Trimotor. Notable military airplanes of this period include the Mitsubishi A6M Zero, Supermarine Spitfire and Messerschmitt Bf 109 from Japan, United Kingdom, and Germany respectively. A significant development came with the first operational Jet engine-powered airplane, the Messerschmitt Me 262 which entered service in 1944 towards the end of the Second World War. 
The first definition of aerospace engineering appeared in February 1958, considering the Earth's atmosphere and outer space as a single realm, thereby encompassing both aircraft (aero) and spacecraft (space) under the newly coined term aerospace. Cold War In response to the USSR launching the first satellite, Sputnik, into space on October 4, 1957, U.S. aerospace engineers launched the first American satellite on January 31, 1958. The National Aeronautics and Space Administration was founded in 1958 after the Sputnik crisis. In 1969, Apollo 11, the first human space mission to the Moon, took place. It saw three astronauts enter orbit around the Moon, with two, Neil Armstrong and Buzz Aldrin, visiting the lunar surface. The third astronaut, Michael Collins, stayed in orbit to rendezvous with Armstrong and Aldrin after their visit. An important innovation came on January 30, 1970, when the Boeing 747 made its first commercial flight from New York to London. This aircraft made history and became known as the "Jumbo Jet" or "Queen of the Skies" due to its ability to hold up to 480 passengers. 1976: First passenger supersonic aircraft Another significant development came in 1976, with the development of the first passenger supersonic aircraft, the Concorde. The development of this aircraft was agreed upon by the French and British on November 29, 1962. On December 21, 1988, the Antonov An-225 Mriya cargo aircraft commenced its first flight. It holds the records for the world's heaviest aircraft, heaviest airlifted cargo, and longest airlifted cargo of any aircraft in operational service. On October 25, 2007, the Airbus A380 made its maiden commercial flight from Singapore to Sydney, Australia. This aircraft was the first passenger plane to surpass the Boeing 747 in terms of passenger capacity, with a maximum of 853. Though development of this aircraft began in 1988 as a competitor to the 747, the A380 made its first test flight in April 2005. Elements Some of the elements of aerospace engineering are: Radar cross-section – the study of vehicle signature apparent to remote sensing by radar. Fluid mechanics – the study of fluid flow around objects. Specifically aerodynamics concerning the flow of air over bodies such as wings or through objects such as wind tunnels (see also lift and aeronautics). Astrodynamics – the study of orbital mechanics including prediction of orbital elements when given a select few variables. While few schools in the United States teach this at the undergraduate level, several have graduate programs covering this topic (usually in conjunction with the Physics department of said college or university). Statics and Dynamics (engineering mechanics) – the study of movement, forces, moments in mechanical systems. Mathematics – in particular, calculus, differential equations, and linear algebra. Electrotechnology – the study of electronics within engineering. Propulsion – the energy to move a vehicle through the air (or in outer space) is provided by internal combustion engines, jet engines and turbomachinery, or rockets (see also propeller and spacecraft propulsion). A more recent addition to this module is electric propulsion and ion propulsion. Control engineering – the study of mathematical modeling of the dynamic behavior of systems and designing them, usually using feedback signals, so that their dynamic behavior is desirable (stable, without large excursions, with minimum error). 
This applies to the dynamic behavior of aircraft, spacecraft, propulsion systems, and subsystems that exist on aerospace vehicles. Aircraft structures – design of the physical configuration of the craft to withstand the forces encountered during flight. Aerospace engineering aims to keep structures lightweight and low-cost while maintaining structural integrity. Materials science – related to structures, aerospace engineering also studies the materials of which the aerospace structures are to be built. New materials with very specific properties are invented, or existing ones are modified to improve their performance. Solid mechanics – Closely related to material science is solid mechanics which deals with stress and strain analysis of the components of the vehicle. Nowadays there are several Finite Element programs such as MSC Patran/Nastran which aid engineers in the analytical process. Aeroelasticity – the interaction of aerodynamic forces and structural flexibility, potentially causing flutter, divergence, etc. Avionics – the design and programming of computer systems on board an aircraft or spacecraft and the simulation of systems. Software – the specification, design, development, test, and implementation of computer software for aerospace applications, including flight software, ground control software, test & evaluation software, etc. Risk and reliability – the study of risk and reliability assessment techniques and the mathematics involved in the quantitative methods. Noise control – the study of the mechanics of sound transfer. Aeroacoustics – the study of noise generation via either turbulent fluid motion or aerodynamic forces interacting with surfaces. Flight testing – designing and executing flight test programs in order to gather and analyze performance and handling qualities data in order to determine if an aircraft meets its design and performance goals and certification requirements. The basis of most of these elements lies in theoretical physics, such as fluid dynamics for aerodynamics or the equations of motion for flight dynamics. There is also a large empirical component. Historically, this empirical component was derived from testing of scale models and prototypes, either in wind tunnels or in the free atmosphere. More recently, advances in computing have enabled the use of computational fluid dynamics to simulate the behavior of the fluid, reducing time and expense spent on wind-tunnel testing. Those studying hydrodynamics or hydroacoustics often obtain degrees in aerospace engineering. Additionally, aerospace engineering addresses the integration of all components that constitute an aerospace vehicle (subsystems including power, aerospace bearings, communications, thermal control, life support system, etc.) and its life cycle (design, temperature, pressure, radiation, velocity, lifetime). Degree programs Aerospace engineering may be studied at the advanced diploma, bachelor's, master's, and Ph.D. levels in aerospace engineering departments at many universities, and in mechanical engineering departments at others. A few departments offer degrees in space-focused astronautical engineering. Some institutions differentiate between aeronautical and astronautical engineering. Graduate degrees are offered in advanced or specialty areas for the aerospace industry. A background in chemistry, physics, computer science and mathematics is important for students pursuing an aerospace engineering degree. 
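As a concrete illustration of the kind of calculation that falls under the fluid mechanics element listed above, the sketch below evaluates the standard lift equation L = 0.5 * rho * v^2 * S * C_L; the numerical values (a small aircraft at sea level) are invented purely for the example.

# Illustrative only: the standard lift equation with made-up numbers
# for a small aircraft at sea level.
def lift_force(rho, v, wing_area, c_l):
    """Lift in newtons from air density (kg/m^3), true airspeed (m/s),
    wing area (m^2), and lift coefficient (dimensionless)."""
    return 0.5 * rho * v ** 2 * wing_area * c_l

rho_sea_level = 1.225   # kg/m^3, International Standard Atmosphere at sea level
print(lift_force(rho_sea_level, v=70.0, wing_area=16.0, c_l=0.5))  # about 24,000 N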
In popular culture The term "rocket scientist" is sometimes used to describe a person of great intelligence since rocket science is seen as a practice requiring great mental ability, especially technically and mathematically. The term is used ironically in the expression "It's not rocket science" to indicate that a task is simple. Strictly speaking, the use of "science" in "rocket science" is a misnomer since science is about understanding the origins, nature, and behavior of the universe; engineering is about using scientific and engineering principles to solve problems and develop new technology. The more etymologically correct version of this phrase would be "rocket engineer". However, "science" and "engineering" are often misused as synonyms.
Technology
Disciplines
null
5358570
https://en.wikipedia.org/wiki/Coxeter%20graph
Coxeter graph
In the mathematical field of graph theory, the Coxeter graph is a 3-regular graph with 28 vertices and 42 edges. It is one of the 13 known cubic distance-regular graphs. It is named after Harold Scott MacDonald Coxeter. Properties The Coxeter graph has chromatic number 3, chromatic index 3, radius 4, diameter 4 and girth 7. It is also a 3-vertex-connected graph and a 3-edge-connected graph. It has book thickness 3 and queue number 2. The Coxeter graph is hypohamiltonian: it does not itself have a Hamiltonian cycle, but every graph formed by removing a single vertex from it is Hamiltonian. It has rectilinear crossing number 11, and is the smallest cubic graph with that crossing number. Construction The simplest construction of the Coxeter graph is from the Fano plane. Take the 35 possible 3-element subsets of a set of 7 points. Discard the 7 triplets that correspond to the lines of the Fano plane, leaving 28 triplets. Link two triplets if they are disjoint. The result is the Coxeter graph. This construction exhibits the Coxeter graph as an induced subgraph of the odd graph O4, also known as the Kneser graph KG(7,3). The Coxeter graph may also be constructed from the smaller distance-regular Heawood graph by constructing a vertex for each 6-cycle in the Heawood graph and an edge for each disjoint pair of 6-cycles. The Coxeter graph may also be derived from the Hoffman–Singleton graph: take any vertex v in the Hoffman–Singleton graph; there is an independent set of size 15 that includes v; delete the 7 neighbors of v and the whole independent set including v, leaving behind the Coxeter graph. Algebraic properties The automorphism group of the Coxeter graph is a group of order 336. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore, the Coxeter graph is a symmetric graph: it has automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the Foster census, the Coxeter graph, referenced as F28A, is the only cubic symmetric graph on 28 vertices. The Coxeter graph is also uniquely determined by its graph spectrum, the set of eigenvalues of its adjacency matrix. As a finite connected vertex-transitive graph that contains no Hamiltonian cycle, the Coxeter graph is a counterexample to a variant of the Lovász conjecture, but the canonical formulation of the conjecture asks for a Hamiltonian path and is verified by the Coxeter graph. Only five examples of vertex-transitive graphs with no Hamiltonian cycle are known: the complete graph K2, the Petersen graph, the Coxeter graph, and two graphs derived from the Petersen and Coxeter graphs by replacing each vertex with a triangle. The characteristic polynomial of the Coxeter graph is (x - 3)(x - 2)^8(x + 1)^7(x^2 + 2x - 1)^6. It is the only graph with this characteristic polynomial, making it a graph determined by its spectrum. Gallery Layouts These are different representations of the Coxeter graph, using the same vertex labels. There are four colors, and seven vertices of each color. Each red, green or blue vertex is connected with two vertices of the same color (thin edges forming 7-cycles) and to one white vertex (thick edges).
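The Fano-plane construction described above is straightforward to verify computationally. The sketch below uses one standard labelling of the Fano plane on the points 0–6 (any valid labelling gives an isomorphic graph) and checks the vertex count, edge count, and 3-regularity:

from itertools import combinations

# One standard labelling of the 7 lines of the Fano plane on the points 0..6.
FANO_LINES = {frozenset(line) for line in
              [(0, 1, 2), (0, 3, 4), (0, 5, 6),
               (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]}

# Vertices: the 35 - 7 = 28 triples that are not lines of the Fano plane.
vertices = [frozenset(t) for t in combinations(range(7), 3)
            if frozenset(t) not in FANO_LINES]

# Edges: two triples are adjacent exactly when they are disjoint.
edges = [(u, v) for u, v in combinations(vertices, 2) if not (u & v)]

degree = {v: 0 for v in vertices}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

assert len(vertices) == 28 and len(edges) == 42
assert all(d == 3 for d in degree.values())   # cubic (3-regular)
print("28 vertices, 42 edges, 3-regular: consistent with the Coxeter graph")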
Mathematics
Graph theory
null
5363452
https://en.wikipedia.org/wiki/Nurse%20shark
Nurse shark
The nurse shark (Ginglymostoma cirratum) is an elasmobranch fish in the family Ginglymostomatidae. The conservation status of the nurse shark is globally assessed as Vulnerable in the IUCN List of Threatened Species. They are considered to be a species of least concern in the United States and in The Bahamas, but considered to be near threatened in the western Atlantic Ocean because of their vulnerable status in South America and reported threats throughout many areas of Central America and the Caribbean. They are directly targeted in some fisheries and considered by-catch in others. Nurse sharks are an important species for shark research. They are robust and able to tolerate capture, handling, and tagging extremely well. As inoffensive as nurse sharks may appear, they are ranked fourth in documented shark bites on humans, likely due to incautious behavior by divers on account of the nurse shark's calm, sedentary nature. Taxonomy The nurse shark genus Ginglymostoma is derived from Greek language meaning hinged mouth, whereas the species cirratum is derived from Latin meaning having curled ringlets. Based on morphological similarities, Ginglymostoma is believed to be the sister genus of Nebrius, with both being placed in a clade that also include species Pseudoginglymostoma brevicaudatum, Rhincodon typus, and Stegostoma fasciatum. The name "nurse" may have originated from antiquated spelling conventions. The Oxford English Dictionary notes that, in medieval times, the "n" of the word "an" was frequently transferred to a following word that began with a vowel. Huss, husse and hurse were antiquated names for dogfish and other sharks. Nurse survives and so does huss. Description The nurse shark has two rounded dorsal fins, rounded pectoral fins, an elongated caudal fin, and a broad head. Maximum adult length is currently documented as , whereas past reports of and corresponding weights of up to are likely to have been exaggerated. Adult nurse sharks are brownish in color. Newly born nurse sharks have a spotted coloration which fades with age and are about 30 cm in length when nascent. Distribution and habitat The nurse shark has a wide but patchy geographical distribution along tropical and subtropical coastal waters of the Eastern Atlantic, Western Atlantic, and Eastern Pacific. In the Eastern Atlantic it ranges from Cape Verde to Gabon (accidental north to France). In the Western Atlantic, including the Caribbean, it ranges from Rhode Island to southern Brazil, and in the East Pacific from Baja California to Peru. Nurse sharks are a typically inshore bottom-dwelling species. Juveniles are mostly found on the bottom of shallow coral reefs, seagrass flats, and around mangrove islands, whereas older individuals typically reside in and around deeper reefs and rocky areas, where they tend to seek shelter in crevices and under ledges during the day and leave their shelter at night to feed on the seabed in shallower areas. Nurse sharks are also subject to piebaldism, a genetic condition that results in a partial lack of body pigmentation and a speckled body. Biology and ecology Nurse sharks are opportunistic predators that feed primarily on small fish (e.g. stingrays) and some invertebrates (e.g. crustaceans, molluscs, tunicates). They are typically solitary nocturnal animals, rifling through bottom sediments in search of food at night, but are often gregarious during the day forming large sedentary groups. 
Nurse sharks are obligate suction feeders capable of generating suction forces that are among the highest recorded for any aquatic vertebrate to date. Although their small mouths may limit the size of prey, they can exhibit a suck-and-spit behavior and/or shake their head violently to reduce the size of food items. Nurse sharks are exceptionally sedentary unlike most other shark species. Nurse sharks show strong site fidelity (typical of reef sharks), and it is one of the few shark species known to exhibit mating site fidelity, as they will return to the same breeding grounds time and time again. American alligators (Alligator mississippiensis) and American crocodiles may occasionally prey on nurse sharks in some coastal habitats. Photographic evidence and historical accounts suggest that encounters between species are commonplace in their shared habitats. Reproduction Nurse sharks are ovoviviparous, with fertilized eggs hatching inside the female. The mating cycle of nurse sharks is biennial, with females taking up to 18 months to produce a new batch of eggs. The mating season runs from late June to the end of July, with a gestation period of six months and a typical litter of 21–29 pups. The young nurse sharks are born fully developed at about 30 cm long. Nurse sharks engage in multiple paternity during mating season. A study conducted over a ten-year span found that a brood of nurse sharks had more genotypes than broods with one father. 14 separate genotypes were found in the brood examined, which suggests that more than one father fertilized the mother's eggs. Engagement in multiple paternity promotes genetic variation.
Biology and health sciences
Sharks
null
23218956
https://en.wikipedia.org/wiki/Vector%20%28mathematics%20and%20physics%29
Vector (mathematics and physics)
In mathematics and physics, vector is a term that refers to quantities that cannot be expressed by a single number (a scalar), or to elements of some vector spaces. They have to be expressed by both magnitude and direction. Historically, vectors were introduced in geometry and physics (typically in mechanics) for quantities that have both a magnitude and a direction, such as displacements, forces and velocity. Such quantities are represented by geometric vectors in the same way as distances, masses and time are represented by real numbers. The term vector is also used, in some contexts, for tuples, which are finite sequences (of numbers or other objects) of a fixed length. Both geometric vectors and tuples can be added and scaled, and these vector operations led to the concept of a vector space, which is a set equipped with a vector addition and a scalar multiplication that satisfy some axioms generalizing the main properties of operations on the above sorts of vectors. A vector space formed by geometric vectors is called a Euclidean vector space, and a vector space formed by tuples is called a coordinate vector space. Many vector spaces are considered in mathematics, such as extension fields, polynomial rings, algebras and function spaces. The term vector is generally not used for elements of these vector spaces, and is generally reserved for geometric vectors, tuples, and elements of unspecified vector spaces (for example, when discussing general properties of vector spaces). Vectors in Euclidean geometry Vector quantities Vector spaces Vectors in algebra Every algebra over a field is a vector space, but elements of an algebra are generally not called vectors. However, in some cases, they are called vectors, mainly due to historical reasons. Vector quaternion, a quaternion with a zero real part Multivector or -vector, an element of the exterior algebra of a vector space. Spinors, also called spin vectors, have been introduced for extending the notion of rotation vector. In fact, rotation vectors represent well rotations locally, but not globally, because a closed loop in the space of rotation vectors may induce a curve in the space of rotations that is not a loop. Also, the manifold of rotation vectors is orientable, while the manifold of rotations is not. Spinors are elements of a vector subspace of some Clifford algebra. Witt vector, an infinite sequence of elements of a commutative ring, which belongs to an algebra over this ring, and has been introduced for handling carry propagation in the operations on p-adic numbers. Data represented by vectors The set of tuples of real numbers has a natural structure of vector space defined by component-wise addition and scalar multiplication. It is common to call these tuples vectors, even in contexts where vector-space operations do not apply. More generally, when some data can be represented naturally by vectors, they are often called vectors even when addition and scalar multiplication of vectors are not valid operations on these data. Here are some examples. Rotation vector, a Euclidean vector whose direction is that of the axis of a rotation and magnitude is the angle of the rotation. Burgers vector, a vector that represents the magnitude and direction of the lattice distortion of dislocation in a crystal lattice Interval vector, in musical set theory, an array that expresses the intervallic content of a pitch-class set Probability vector, in statistics, a vector with non-negative entries that sum to one. 
Random vector or multivariate random variable, in statistics, a set of real-valued random variables that may be correlated. However, a random vector may also refer to a random variable that takes its values in a vector space. Logical vector, a vector of 0s and 1s (Booleans). Vectors in calculus Calculus serves as a foundational mathematical tool in the realm of vectors, offering a framework for the analysis and manipulation of vector quantities in diverse scientific disciplines, notably physics and engineering. Vector-valued functions, where the output is a vector, are scrutinized using calculus to derive essential insights into motion within three-dimensional space. Vector calculus extends traditional calculus principles to vector fields, introducing operations like gradient, divergence, and curl, which find applications in physics and engineering contexts. Line integrals, crucial for calculating work along a path within force fields, and surface integrals, employed to determine quantities like flux, illustrate the practical utility of calculus in vector analysis. Volume integrals, essential for computations involving scalar or vector fields over three-dimensional regions, contribute to understanding mass distribution, charge density, and fluid flow rates.
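As a small illustration of the tuple viewpoint and of vector calculus, the sketch below performs component-wise addition, scalar multiplication, and a dot product on coordinate vectors, and numerically estimates the gradient of a simple scalar field (using NumPy; the field f(x, y) = x^2 + y^2 is chosen only for the example):

import numpy as np

# Coordinate vectors: component-wise addition and scalar multiplication.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])
print(u + v)          # component-wise addition  -> [5.  1.  3.5]
print(2.5 * u)        # scalar multiplication    -> [2.5 5.  7.5]
print(np.dot(u, v))   # dot product (a scalar)   -> 3.5

# Vector calculus: numerically estimate the gradient of f(x, y) = x**2 + y**2
# on a grid; the exact gradient is (2x, 2y).
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
F = X ** 2 + Y ** 2
dF_dx, dF_dy = np.gradient(F, x, y)   # partial derivatives along each axis
print(dF_dx[150, 100])                # about 1.0, i.e. 2 * x[150] = 2 * 0.5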
Mathematics
Linear algebra
null
3991810
https://en.wikipedia.org/wiki/Berry%20%28botany%29
Berry (botany)
In botany, a berry is a fleshy fruit without a stone (pit) produced from a single flower containing one ovary. Berries so defined include grapes, currants, and tomatoes, as well as cucumbers, eggplants (aubergines), persimmons and bananas, but exclude certain fruits that meet the culinary definition of berries, such as strawberries and raspberries. The berry is the most common type of fleshy fruit in which the entire outer layer of the ovary wall ripens into a potentially edible "pericarp". Berries may be formed from one or more carpels from the same flower (i.e. from a simple or a compound ovary). The seeds are usually embedded in the fleshy interior of the ovary, but there are some non-fleshy exceptions, such as Capsicum species, with air rather than pulp around their seeds. Many berries are edible, but others, such as the fruits of the potato and the deadly nightshade, are poisonous to humans. A plant that bears berries is said to be bacciferous or baccate (from Latin ). In everyday English, a "berry" is any small edible fruit. Berries are usually juicy, round, brightly coloured, sweet or sour, and do not have a stone or pit, although many small seeds may be present. Botanical berries In botanical language, a berry is a simple fruit having seeds and fleshy pulp (the pericarp) produced from the ovary of a single flower. The ovary can be inferior or superior. It is indehiscent, i.e. it does not have a special "line of weakness" along which it splits to release the seeds when ripe. The pericarp is divided into three layers. The outer layer is called the "exocarp" or "epicarp"; the middle layer, the "mesocarp" or "sarcocarp"; the inner layer, the "endocarp". Botanists have not applied these terms consistently. Exocarp and endocarp may be restricted to more-or-less single-layered "skins", or may include tissues adjacent to them; thus on one view, the exocarp extends inwards to the layer of vascular bundles ("veins"). The inconsistency in usage has been described as "a source of confusion". The nature of the endocarp distinguishes a berry from a drupe, which has a hardened or stony endocarp (see also below). The two kinds of fruit intergrade, depending on the state of the endocarp. Some sources have attempted to quantify the difference, e.g. requiring the endocarp to be less than 2 mm thick in a berry. Examples of botanical berries include: Avocado contains a single large seed surrounded by an imperceptible endocarp. Avocados are, however, also sometimes classified as drupes. Banana Barberry (Berberis), Oregon-grape (Berberis aquifolium) and mayapple (Podophyllum spp.) (Berberidaceae) Strawberry tree (Arbutus unedo) (not to be confused with the strawberry (Fragaria), which is an accessory fruit), bearberry (Arctostaphylos spp.), bilberry, blueberry, cranberry, lingonberry/cowberry (Vaccinium vitis-idaea), crowberry (Empetrum spp.) 
(family Ericaceae) Coffee berries (Rubiaceae) (also described as drupes) Gooseberry and currant (Ribes spp.; Grossulariaceae), red, black, and white types Aubergine/Eggplant, tomato, goji berries (wolfberry) and other species of the family Solanaceae Elderberry (Sambucus niger; Adoxaceae) Indian gooseberry (Phyllanthus emblica) (Phyllanthaceae) Garcinia gummi-gutta, Garcinia mangostana (mangosteen) and Garcinia indica in the family Clusiaceae Sapodilla (Manilkara zapota), Sapotaceae Grape, Vitis vinifera in the family Vitaceae Honeysuckle: the berries of some species are edible and are called honeyberries, but others are poisonous (Lonicera spp.; Caprifoliaceae) Persimmon (Ebenaceae) Pumpkin, cucumber and watermelon in the family Cucurbitaceae Modified berries "True berries", or "baccae", may also be required to have a thin outer skin, not self-supporting when removed from the berry. This distinguishes, for example, a Vaccinium or Solanum berry from an Adansonia (baobab) amphisarca, which has a dry, more rigid and self-supporting skin. The fruit of citrus, such as the orange, kumquat and lemon, is a berry with a thick rind and a very juicy interior divided into segments by septa, that is given the special name "hesperidium". A specialized term, pepo, is also used for fruits of the gourd family, Cucurbitaceae, which are modified to have a hard outer rind, but are not internally divided by septa. The fruits of Passiflora (passion fruit) and Carica (papaya) are sometimes also considered pepos. Berries that develop from an inferior ovary are sometimes termed epigynous berries or false berries, as opposed to true berries, which develop from a superior ovary. In epigynous berries, the berry includes tissue derived from parts of the flower other than the ovary. The floral tube, formed from the basal part of the sepals, petals and stamens, can become fleshy at maturity and is united with the ovary to form the fruit. Common fruits that are sometimes classified as epigynous berries include bananas, coffee, members of the genus Vaccinium (e.g., cranberries and blueberries), and members of the family Cucurbitaceae (gourds, cucumbers, melons and squash). Berry-like fruits Many fruits which are berries in the culinary definition are not berries in the botanic sense, but fall into one of the following categories: Drupes Drupes are varyingly distinguished from botanical berries. Drupes are fleshy fruits produced from a (usually) single-seeded ovary with a hard woody layer (called the endocarp) surrounding the seed. Familiar examples include the stonefruits of the genus Prunus (peaches, plums and cherries), olives, coconut, dates, bayberry and Persea species. Some definitions make the mere presence of an internally differentiated endocarp the defining feature of a drupe; others qualify the nature of the endocarp required in a drupe, e.g. defining berries to have endocarp less than 2 mm thick. The term "drupaceous" is used of fruits that have the general structure and texture of a drupe, without necessarily meeting the full definition. Other drupe-like fruits with a single seed that lack the stony endocarp include sea-buckthorn (Hippophae rhamnoides, Elaeagnaceae), which is an achene, surrounded by a swollen hypanthium that provides the fleshy layer. Fruits of Coffea species are described as either drupes or berries. 
Pomes The pome fruits produced by plants in subtribe Pyrinae of family Rosaceae, such as apples and pears, have a structure (the core) in which tough tissue clearly separates the seeds from the outer softer pericarp. Although pomes are not botanical berries, Amelanchier pomes become soft at maturity, resembling a blueberry, and are commonly called Juneberries, serviceberries or Saskatoon berries. Aggregate fruits Aggregate or compound fruits contain seeds from different ovaries of a single flower, with the individual "fruitlets" joined at maturity to form the complete fruit. Examples of aggregate fruits commonly called "berries" include members of the genus Rubus, such as blackberry and raspberry. Botanically, these are not berries. Other large aggregate fruits, such as soursop (Annona muricata), are not usually called "berries", although some sources do use this term. Multiple fruits Multiple fruits are not botanical berries. Multiple fruits are the fruits of two or more multiple flowers that are merged or packed closely together. The mulberry is a berry-like example of a multiple fruit; it develops from a cluster of tiny separate flowers that become compressed as they develop into fruit. Accessory fruits Accessory fruits are not botanical berries. In accessory fruits, the edible part is not generated by the ovary. Berry-like examples include: Strawberry – the non-fleshy aggregate of seed-like achenes on its exterior is actually the "fruit", derived from an aggregate of ovaries; the fleshy part develops instead from the receptacle. Mock strawberry (Duchesnea indica) – structured just like a strawberry. Sea grape (Coccoloba uvifera; Polygonaceae) – the fruit is a dry capsule surrounded by fleshy calyx. Berry-like conifer seed cones The female seed cones of some conifers have fleshy and merged scales, giving them a berry-like appearance. Juniper "berries" (family Cupressaceae), in particular those of Juniperus communis, are used to flavour gin. The seed cones of species in the families Podocarpaceae and Taxaceae have a bright colour when fully developed, increasing the resemblance to true berries. The "berries" of yews (Taxus species) consist of a female seed cone with which develops a fleshy red aril partially enclosing the poisonous seed. History of terminology The Latin word or (plural ) was originally used for "any small round fruit". Andrea Caesalpinus (1519–1603) classified plants into trees and herbs, further dividing them by properties of their flowers and fruit. He did not make the modern distinction between "fruits" and "seeds", calling hard structures like nuts or seeds. A fleshy fruit was called a . For Caesalpinus, a true or berry was a derived from a flower with a superior ovary; one derived from a flower with an inferior ovary was called a . In 1751, Carl Linnaeus wrote Philosophia Botanica, considered to be the first textbook of descriptive systematic botany. He used eight different terms for fruits, one of which was or berry, distinguished from other types of fruit such as (drupe) and (pome). A was defined as "", meaning "unvalved solid pericarp, containing otherwise naked seeds". The adjective "" here has the sense of "solid with tissue softer than the outside; stuffed". A berry or was distinguished from a drupe and a pome, both of which also had an unvalved solid pericarp; a drupe also contained a nut () and a pome a capsule (), rather than the berry's naked seeds. Linnaeus' use of and was thus significantly different from that of Caesalpinus. 
Botanists continue to differ on how fruit should be classified. Joseph Gaertner published a two-volume work, De Fructibus et Seminibus Plantarum (on the fruits and seeds of plants) between 1788 and 1792. In addition to Linnaeus' eight terms, he introduced seven more, including for the berry-like fruits of cucurbits. A pepo was distinguished by being a fleshy berry with the seeds distant from the axis, and so nearer the fruit wall (i.e. by having "parietal placentation" in modern terminology). Nicaise Auguste Desvaux in 1813 used the terms and as further subdivisions of berries. A hesperidium, called by others (berry with a cortex), had separate internal compartments ("" in the original French) and a separable membraneous epicarp or skin. An amphisarca was described as woody on the outside and fleshy on the inside. "Hesperidium" remains in general use, but "amphisarca" is rarely used. There remains no universally agreed system of classification for fruits, and there continues to be "confusion over classification of fruit types and the definitions given to fruit terms". Evolution and phylogenetic significance By definition, berries have a fleshy, indehiscent pericarp, as opposed to a dry, dehiscent pericarp. Fossils show that early flowering plants had dry fruits; fleshy fruits, such as berries or drupes, appeared only towards the end of the Cretaceous Period or the beginning of the Paleogene Period, about . The increasing importance of seed dispersal by fruit-eating vertebrates, both mammals and birds, may have driven the evolution of fleshy fruits. Alternatively, the causal direction may be the other way round. Large fleshy fruits are associated with moist habitats with closed tree canopies, where wind dispersal of dry fruits is less effective. Such habitats were increasingly common in the Paleogene and the associated change in fruit type may have led to the evolution of fruit eating in mammals and birds. Fruit type has been considered to be a useful character in classification and in understanding the phylogeny of plants. The evolution of fruits with a berry-like pericarp has been studied in a wide range of flowering plant families. Repeated transitions between fleshy and dry pericarps have been demonstrated regularly. One well-studied family is the Solanaceae, because of the commercial importance of fruit such as tomatoes, bell peppers, and eggplants or aubergines. Capsules, which are dry dehiscent fruits, appear to be the original form of the fruit in the earliest diverging members of the family. Berries have then evolved at least three times: in Cestrum, Duboisia, and in the subfamily Solanoideae. Detailed anatomical and developmental studies have shown that the berries of Cestrum and those of the Solanoideae are significantly different; for example, expansion of the fruit during development involves cell divisions in the mesocarp in Solanoideae berries, but not in Cestrum berries. When fruits described as berries were studied in the family Melastomaceae, they were found to be highly variable in structure, some being soft with an endocarp that soon broke down, others having a hard, persistent endocarp, even woody in some species. Fruits classified as berries are thus not necessarily homologous, with the fleshy part being derived from different parts of the ovary, and with other structural and developmental differences. The presence or absence of berries is not a reliable guide to phylogeny. 
Indeed, fruit type in general has proved to be an unreliable guide to flowering plant relationships. Uses Culinary Berries, defined loosely, have been valuable as a food source to humans since before the start of agriculture, and remain among the primary food sources of other primates. Botanically defined berries with culinary uses include: Berries in the strictest sense: including bananas and plantains, blueberries, cranberries, coffee berries, gooseberries, red-, black- and white currants, tomatoes, grapes and peppers (Capsicum fruits) Hesperidia: citrus fruits, including oranges, lemons and limes Pepos: cucurbits, including squashes, cucumbers, melons and watermelons Some berries are brightly coloured, due to plant pigments such as anthocyanins and other flavonoids. These pigments are localized mainly in the outer surface and the seeds. Such pigments have antioxidant properties in vitro, but there is no reliable evidence that they have antioxidant or any other useful functions within the human body. Consequently, it is not permitted to claim that foods containing plant pigments have antioxidant health value on product labels in the United States or Europe. Some spices are prepared from berries. Allspice is made from the dried berries of Pimenta dioica. The fruits (berries) of different cultivars of Capsicum annuum are used to make paprika (mildly hot), chili pepper (hot) and cayenne pepper (very hot). Others Pepos, characterized by a hard outer rind, have also been used as containers by removing the inner flesh and seeds and then drying the remaining exocarp. The English name of Lagenaria siceraria, "bottle gourd", reflects its use as a liquid container. Some true berries have also been used as a source of dyes. In Hawaii, these included berries from a species of Dianella, used to produce blue, and berries from black nightshade (Solanum americanum), used to produce green. History Cucurbit berries or pepos, particularly from Cucurbita and Lagenaria, are the earliest plants known to be domesticated – before 9,000–10,000 BP in the Americas, and probably by 12,000–13,000 BP in Asia. Peppers were domesticated in Mesoamerica by 8,000 BP. Many other early cultivated plants were also berries by the strict botanical definition, including grapes, domesticated by 8,000 BP and known to have been used in wine production by 6,000 BP. Bananas were first domesticated in Papua New Guinea and Southeast Asia. Archaeological and palaeoenvironmental evidence at Kuk Swamp in the Western Highlands Province of Papua New Guinea suggests that banana cultivation there goes back to at least 7,000 BP, and possibly to 10,000 BP. The history of cultivated citrus fruit remains unclear, although some recent research suggests a possible origin in Papuasia rather than continental southeast Asia. Chinese documents show that mandarins and pomelos were established in cultivation there by around 4,200 BP. Commercial production According to FAOSTAT data, in 2013 four of the five top fruit crops in terms of world production by weight were botanical berries. The other was a pome (apples). According to FAOSTAT, in 2001, bananas (including plantains) and citrus comprised over 25% by value of the world's exported fruits and vegetables, citrus fruits being more valuable than bananas. Export quantities of fruit are not entirely comparable with production quantities, since slightly different categories are used. The top five fruit exports by weight in 2012 are shown in the table below. 
The top two places are again occupied by bananas and citrus.
Biology and health sciences
Plant: General
null
3992446
https://en.wikipedia.org/wiki/Effigia
Effigia
Effigia is an extinct genus of shuvosaurid known from the Late Triassic of New Mexico, south-western USA. With a bipedal stance, long neck, and a toothless beaked skull, Effigia and other shuvosaurids bore a resemblance to the ornithomimid dinosaurs of the Cretaceous Period. However, shuvosaurids were not dinosaurs, but were instead a specialized family of poposauroid pseudosuchians, meaning that their closest living relatives are crocodilians. Discovery The 2-meter (6 ft 7 in) holotype fossil was collected by Edwin H. Colbert in 1947. At this time Colbert led an excavation to collect blocks of rock from the Whitaker Quarry of Ghost Ranch, near Abiquiu, New Mexico. Colbert's expedition intended to recover abundant fossils of the basal theropod dinosaur Coelophysis, and he believed that no other large vertebrates were present in the quarry. As a result, his crew did not even open the plaster jackets of most of the blocks that were shipped to the American Museum of Natural History. The plaster jacket containing the holotype of Effigia started to be prepared in 2004, and the specimen was uncovered by graduate student Sterling Nesbitt at the AMNH. Nesbitt was opening jackets of blocks in order to find new specimens of Coelophysis. Upon finding the remains of Effigia, he instantly recognized this was not a dinosaur and proceeded to track down the rest of the blocks from that area of the quarry. Nesbitt and Mark Norell, curator at the museum, named it Effigia okeeffeae in January 2006 after Georgia O'Keeffe, who spent many years at Ghost Ranch. Convergence Effigia is noted for its remarkable similarity to ornithomimid dinosaurs. In 2007, Nesbitt's description demonstrated that Effigia was very similar to Shuvosaurus, and is definitely a member of the archosaur subgroup Pseudosuchia (the line leading towards modern crocodilians). Its similarity to ornithomimids represents a case of "extreme" convergent evolution. Nesbitt also demonstrated that Shuvosaurus was the same animal as Chatterjeea, and that it belonged to an exclusive clade containing closely related suchians such as Shuvosaurus and Poposaurus (Poposauroidea). Within this group, Effigia forms an even more exclusive clade with Shuvosaurus and the South American Sillosuchus (Shuvosauridae). In 2007, Lucas and others suggested that "Effigia" was synonymous with "Shuvosaurus" and used the new combination "Shuvosaurus okeeffeae" for the animal. This proposal has not been accepted by Nesbitt. Paleobiology Examination of Effigia's jaws suggest it nipped and sheared off vegetation when feeding, due to its weak jaws and sharp beak. It was previously suggested that it pecked for food like ostriches or other ratites, but biomechanical studies have estimated that its skull could not withstand such forces.
Biology and health sciences
Other prehistoric archosaurs
Animals
3994623
https://en.wikipedia.org/wiki/Medulloblastoma
Medulloblastoma
Medulloblastoma is a common type of primary brain cancer in children. It originates in the part of the brain that is towards the back and the bottom, on the floor of the skull, in the cerebellum, or posterior fossa. The brain is divided into two main parts, the larger cerebrum on top and the smaller cerebellum below towards the back. They are separated by a membrane called the tentorium. Tumors that originate in the cerebellum or the surrounding region below the tentorium are, therefore, called infratentorial. Historically, medulloblastomas have been classified as a primitive neuroectodermal tumor (PNET), but it is now known that medulloblastoma is distinct from supratentorial PNETs and they are no longer considered similar entities. Medulloblastomas are invasive, rapidly growing tumors that, unlike most brain tumors, spread through the cerebrospinal fluid and frequently metastasize to different locations along the surface of the brain and spinal cord. Metastasis all the way down to the cauda equina at the base of the spinal cord is termed "drop metastasis". The cumulative relative survival rate for all age groups and histology follow-up was 60%, 52%, and 47% at 5 years, 10 years, and 20 years, respectively, with children doing better than adults. Signs and symptoms Signs and symptoms are mainly due to secondary increased intracranial pressure due to blockage of the fourth ventricle and tumors are usually present for 1 to 5 months before diagnosis is made. The child typically becomes listless, with repeated episodes of vomiting, and a morning headache, which may lead to a misdiagnosis of gastrointestinal disease or migraine. Soon after, the child will develop a stumbling gait, truncal ataxia, frequent falls, diplopia, papilledema, and sixth cranial nerve palsy. Positional vertigo and nystagmus are also frequent, and facial sensory loss or motor weakness may be present. Decerebrate attacks appear late in the disease. Extraneural metastasis to the rest of the body is rare, and when it occurs, it is in the setting of relapse, more commonly in the era prior to routine chemotherapy. Pathogenesis Medulloblastomas are usually found in the vicinity of the fourth ventricle, between the brainstem and the cerebellum. Tumors with similar appearance and characteristics originate in other parts of the brain, but they are not identical to medulloblastoma. Although medulloblastomas are thought to originate from immature or embryonal cells at their earliest stage of development, the cell of origin depends on the subgroup of medulloblastoma. WNT tumors originate from the lower rhombic lip of the brainstem, while SHH tumors originate from the external granular layer. Currently, medulloblastomas are thought to arise from cerebellar stem cells that have been prevented from dividing and differentiating into their normal cell types. This accounts for the histologic variants seen on biopsy. Both perivascular pseudorosette and Homer Wright pseudorosette formations are highly characteristic of medulloblastomas and are seen in up to half of cases. The classic rosette with tumor cells around a central lumen can be seen. In the past, medulloblastoma was classified using histology, but integrated genomic studies have revealed that medulloblastoma is composed of four distinct molecular and clinical variants termed WNT/β-catenin, Sonic Hedgehog, Group 3, and Group 4. Of these subgroups, WNT patients have an excellent prognosis and group 3 patients have a poor prognosis. 
Also, a subgroup-specific alternative splicing further confirms the existence of distinct subgroups and highlights the transcriptional heterogeneity between subgroups. Amplification of the Sonic Hedgehog pathway is the best characterized subgroup, with 25% of human tumors having mutations in Patched, Sufu (Suppressor of Fused Homolog), Smoothened, or other genes in this pathway. Medulloblastomas are also seen in Gorlin syndrome as well as Turcot syndrome. Recurrent mutations in the genes CTNNB1, PTCH1, MLL2, SMARCA4, DDX3X, CTDNEP1, KDM6A, and TBR1 were identified in individuals with medulloblastoma. Additional pathways disrupted in some medulloblastomas include MYC, Notch, BMP, and TGF-β signaling pathways. Diagnosis The tumor is distinctive on T1- and T2-weighted MRI with heterogeneous enhancement and a typical location adjacent to and extension into the fourth ventricle. Histologically, the tumor is solid, pink-gray in color, and is well circumscribed. The tumor is very cellular, with high mitotic activity, little cytoplasm, and a tendency to form clusters and rosettes. The Chang staging system can be used in making the diagnosis. DNA methylation profiling of medulloblastoma allows robust sub-classification and improved outcome prediction using formalin-fixed biopsies. Correct diagnosis of medulloblastoma may require ruling out atypical teratoid rhabdoid tumor. Treatment Treatment begins with maximal surgical removal of the tumor. The addition of radiation to the entire neuraxis and chemotherapy may increase the disease-free survival. This combination may permit a 5-year survival in more than 80% of cases. Some evidence indicates that proton beam irradiation reduces the impact of radiation on the cochlear and cardiovascular areas and reduces the cognitive late effects of cranial irradiation. The presence of desmoplastic features such as connective tissue formation offers a better prognosis. Prognosis is worse if the child is less than 3 years old, degree of resection is inadequate, or if any CSF, spinal, supratentorial, or systemic spread occurs. Dementia after radiotherapy and chemotherapy is a common outcome appearing two to four years following treatment. Side effects from radiation treatment can include cognitive impairment, psychiatric illness, bone growth retardation, hearing loss, and endocrine disruption. Increased intracranial pressure may be controlled with corticosteroids or a ventriculoperitoneal shunt. An approach to monitor tumor development and treatment response by liquid biopsy is promising, but remains challenging. Chemotherapy Chemotherapy is often used as part of treatment. Evidence of benefit, however, is not clear as of 2013. A few different chemotherapeutic regimens for medulloblastoma are used; most involve a combination of lomustine, cisplatin, carboplatin, vincristine, or cyclophosphamide. In younger patients (less than 3–4 years of age), chemotherapy can delay, or in some cases possibly even eliminate, the need for radiotherapy. However, both chemotherapy and radiotherapy often have long-term toxicity effects, including delays in physical and cognitive development, higher risk of second cancers, and increased cardiac disease risks. 
Outcomes
Array-based karyotyping of 260 medulloblastomas resulted in the following clinical subgroups based on cytogenetic profiles:
Poor prognosis: gain of 6q or amplification of MYC or MYCN
Intermediate: gain of 17q or an i(17q) without gain of 6q or amplification of MYC or MYCN
Excellent prognosis: 6q and 17q balanced or 6q deletion
Transcriptional profiling shows the existence of four main subgroups (Wnt, Shh, Group 3, and Group 4):
Very good prognosis: WNT group, CTNNB1 mutation
Infants good prognosis, others intermediate: SHH group, PTCH1/SMO/SUFU mutation, GLI2 amplification, or MYCN amplification
Poor prognosis: Group 3, MYC amplification, photoreceptor/GABAergic gene expression
Intermediate prognosis: Group 4, neuronal/glutamatergic gene expression, CDK6 amplification, MYCN amplification
Survival
The historical cumulative relative survival rate for all age groups and histology follow-up was 60%, 52%, and 47% at 5 years, 10 years, and 20 years, respectively. Patients diagnosed with a medulloblastoma or PNET are 50 times more likely to die than a matched member of the general population. A population-based (SEER) analysis indicated 5-year relative survival rates of 69% overall: 72% in children (1–9 years) and 67% in adults (20+ years). The 20-year survival rate is 51% in children. Children and adults have different survival profiles, with adults faring worse than children only after the fourth year after diagnosis (after controlling for increased background mortality). Before the fourth year, survival probabilities are nearly identical. Long-term sequelae of standard treatment include hypothalamic-pituitary and thyroid dysfunction and intellectual impairment. The hormonal and intellectual deficits created by these therapies cause significant impairment of the survivors. In current clinical studies, patients are divided into low-, standard- and high-risk groups: Depending on the study, cure rates of up to 100% are achieved in the low-risk group (usually WNT-activated). Current efforts are therefore directed at reducing the intensity of therapy, and thus its negative long-term consequences, while maintaining the high cure rates. In the HIT-SIOP PNET 4 study, in which 340 children and adolescents of the standard-risk group between the ages of four and 21 from several European countries participated, the 5-year survival rate was between 85% and 87% depending on the randomization. Around 78% of the patients remained without relapse for 5 years and are therefore considered to be cured. After a relapse, the prognosis was very poor: despite intensive treatment, only four of 66 patients were still alive 5 years after a relapse. A US study involved 161 patients between the ages of three and 21 with a high-risk profile. Depending on the randomization, half of the patients additionally received carboplatin daily during the radiation. The 5-year survival rate of patients with carboplatin was 82%, versus 68% for those without. The European SIOP PNET 5 study, which is currently underway and will run until April 2024, attempts to confirm the promising results with carboplatin during irradiation in the standard-risk group.
Epidemiology
Medulloblastomas affect just under two people per million per year, and affect children 10 times more than adults. Medulloblastoma is the second-most frequent brain tumor in children after pilocytic astrocytoma and the most common malignant brain tumor in children, comprising 14.5% of newly diagnosed brain tumors.
In adults, medulloblastoma is rare, comprising fewer than 2% of CNS malignancies. The rate of new cases of childhood medulloblastoma is higher in males (62%) than females (38%), a feature that is not seen in adults. Medulloblastoma and other PNETs are more prevalent in younger children than older children. About 40% of medulloblastoma patients are diagnosed before the age of five, 31% are between the ages of 5 and 9, 18.3% are between the ages of 10 and 14, and 12.7% are between the ages of 15 and 19.
Research models
Using gene transfer of SV40 large T-antigen in neuronal precursor cells of rats, a brain tumor model was established. The PNETs were histologically indistinguishable from the human counterparts and have been used to identify new genes involved in human brain tumor carcinogenesis. The model was used to confirm p53 as one of the genes involved in human medulloblastomas, but since only about 10% of the human tumors showed mutations in that gene, the model can be used to identify binding partners of SV40 large T-antigen other than p53. In a mouse model, high medulloblastoma frequency appears to be caused by the downregulation of Cxcl3, with Cxcl3 being induced by Tis21. Consistently, treatment with Cxcl3 completely prevents the growth of medulloblastoma lesions in a Shh-type mouse model of medulloblastoma. Thus, CXCL3 is a potential target for medulloblastoma therapy.
Biology and health sciences
Cancer
Health
3998129
https://en.wikipedia.org/wiki/Grey%20junglefowl
Grey junglefowl
The gray junglefowl (Gallus sonneratii), also known as Sonnerat's junglefowl, is one of the wild ancestors of the domestic chicken together with the red junglefowl and other junglefowls. The species epithet commemorates the French explorer Pierre Sonnerat. Local names include Komri in Rajasthan, Geera kur or Parda komri in Gondi, Jangli Murghi in Hindi, Raan kombdi in Marathi, Kattu Kozhi in Tamil and Malayalam, Kaadu koli in Kannada and Tella adavi kodi in Telugu.
Description
The male has a black cape with ochre spots, and the body plumage on a grey ground colour is finely patterned. The elongated neck feathers are dark and end in a small, hard, yellowish plate; this peculiar structure makes them popular for making high-grade artificial flies. The male has red wattles and combs, but these are not as strongly developed as in the red junglefowl. The legs of males are red and bear spurs, while the yellow legs of females usually lack spurs. The central tail feathers are long and sickle shaped. Males have an eclipse plumage in which they moult their colourful neck feathers in summer during or after the breeding season. The female is duller and has black and white streaking on the underparts and yellow legs.
Distribution and habitat
This species is endemic to India, and even today it is found mainly in peninsular India, extending towards its northern boundary. They are found in thickets, on the forest floor and in open scrub. The species occurs mainly in the Indian Peninsula, but extends into Gujarat, Madhya Pradesh and southern Rajasthan. The red junglefowl is found more along the foothills of the Himalayas; although the two ranges are largely non-overlapping, a region of overlap occurs in the Aravalli range.
Disputed subspecies
The population from the region of Mount Abu in Rajasthan, named as the subspecies wangyeli, is usually not recognized, although the calls of the cock from this region are said to differ from those of birds from southern India, and the plumage is much paler.
Behaviour
Their calls of Ku-kayak-kyuk-kyuk are loud and distinctive, and can be heard in the early mornings and at dusk. Unlike the red junglefowl, the male does not flap its wings before uttering the call. They breed from February to May. They lay 4 to 7 pale creamy eggs in a scrape. Eggs hatch in about 21 days. Although mostly seen on the ground, grey junglefowl fly into trees to escape predators and to roost. They forage in small mixed or single-sex groups. They feed on grains including bamboo seeds, berries, insects and termites, and are hunted for meat and for the long neck hackle feathers that are sought after for making fishing lures.
Relationships
Gray junglefowl have been bred domestically in England since 1862 and their feathers have been commercially supplied from domestic U.K. stocks for fly tying since 1978. A gene from the gray junglefowl is responsible for the yellow pigment in the legs and different body parts of all domestic chicken breeds. A more recent study revealed that multiple gray junglefowl genomic regions have introgressed into the genome of the domestic chicken, with evidence of some domestic chicken genes also found in the gray junglefowl. The gray junglefowl will sometimes hybridize in the wild with the red junglefowl. It also hybridizes readily in captivity and sometimes with free-range domestic chickens kept in habitations close to forests. The gray junglefowl and red junglefowl diverged about 2.6 million years ago.
The species has been isolated by a variety of mechanisms, including behavioural differences and genic incompatibility, but hybridization is not unknown. Some phylogenetic studies of the gray junglefowl show that this species is more closely related to the Sri Lankan junglefowl, Gallus lafayetii, than to the red junglefowl, Gallus gallus, but another study shows a more ambiguous position due to hybridization. However, the time of divergence between the gray junglefowl and the Sri Lankan junglefowl, around 1.8 million years ago, is more recent than the 2.6 million years calculated between the gray junglefowl and the red junglefowl. This divergence time supports a sister relationship between the gray junglefowl and the Sri Lankan junglefowl. An endogenous retroviral DNA sequence of the EAV-HP group noted in domestic chickens is also found in the genome of this species, pointing to the early integration of the virus DNA into the genome of Gallus.
Biology and health sciences
Galliformes
Animals
24706093
https://en.wikipedia.org/wiki/Photonic%20metamaterial
Photonic metamaterial
A photonic metamaterial (PM), also known as an optical metamaterial, is a type of electromagnetic metamaterial that interacts with light, covering terahertz (THz), infrared (IR) or visible wavelengths. The materials employ a periodic, cellular structure. The subwavelength periodicity distinguishes photonic metamaterials from photonic band gap or photonic crystal structures. The cells are on a scale that is orders of magnitude larger than the atom, yet much smaller than the radiated wavelength; they are on the order of nanometers. In a conventional material, the response to electric and magnetic fields, and hence to light, is determined by atoms. In metamaterials, cells take the role of atoms in a material that is homogeneous at scales larger than the cells, yielding an effective medium model. Some photonic metamaterials exhibit magnetism at high frequencies, resulting in strong magnetic coupling. This can produce a negative index of refraction in the optical range. Potential applications include cloaking and transformation optics. Photonic crystals differ from PMs in that the size and periodicity of their scattering elements are larger, on the order of the wavelength. Also, a photonic crystal is not homogeneous, so it is not possible to define values of ε (permittivity) or μ (permeability).
History
While researching whether or not matter interacts with the magnetic component of light, Victor Veselago (1967) envisioned the possibility of refraction with a negative sign, according to Maxwell's equations. A refractive index with a negative sign is the result of permittivity ε < 0 (less than zero) and magnetic permeability μ < 0 (less than zero). Veselago's analysis has been cited in over 1500 peer-reviewed articles and many books. In the mid-1990s, metamaterials were first seen as potential technologies for applications such as nanometer-scale imaging and cloaking objects. For example, in 1995, Guerra fabricated a transparent grating with 50 nm lines and spaces, and then coupled this (what would later be called) photonic metamaterial with an immersion objective to resolve a silicon grating having 50 nm lines and spaces, far beyond the diffraction limit for the 650 nm wavelength illumination in air. In 2002, Guerra et al. published their demonstration of subwavelength nano-optics (photonic metamaterials) for optical data storage at densities well above the diffraction limit. As of 2015, metamaterial antennas were commercially available. Negative permeability was achieved with a split-ring resonator (SRR) as part of the subwavelength cell. The SRR achieved negative permeability within a narrow frequency range. This was combined with a symmetrically positioned electric conducting post, which created the first negative index metamaterial, operating in the microwave band. Experiments and simulations demonstrated the presence of a left-handed propagation band, a left-handed material. The first experimental confirmation of negative index of refraction occurred soon after, also at microwave frequencies.
Negative permeability and negative permittivity
Natural materials, such as precious metals, can achieve ε < 0 up to the visible frequencies. However, at terahertz, infrared and visible frequencies, natural materials have a very weak magnetic coupling component, or permeability. In other words, susceptibility to the magnetic component of radiated light can be considered negligible.
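As a worked restatement of the sign convention just discussed (a sketch in standard notation, not an expression taken from the source): for a lossless medium in which both the relative permittivity and the relative permeability are negative, the causal choice of square-root branch gives a negative refractive index,
$$ n = -\sqrt{\varepsilon_r \mu_r}, \qquad \varepsilon_r < 0, \; \mu_r < 0, $$
so the idealized case $\varepsilon_r = \mu_r = -1$ analyzed by Veselago yields $n = -1$.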
Negative index metamaterials behave contrary to the conventional "right-handed" interaction of light found in conventional optical materials. Hence, these are dubbed left-handed materials or negative index materials (NIMs), among other nomenclatures. Only fabricated NIMs exhibit this capability. Photonic crystals, like many other known systems, can exhibit unusual propagation behavior such as reversal of phase and group velocities. However, negative refraction does not occur in these systems. Naturally occurring ferromagnetic and antiferromagnetic materials can achieve magnetic resonance, but with significant losses. In natural materials such as natural magnets and ferrites, resonance for the electric (coupling) response and magnetic (coupling) response do not occur at the same frequency. Optical frequency Photonic metamaterial SRRs have reached scales below 100 nanometers, using electron beam and nanolithography. One nanoscale SRR cell has three small metallic rods that are physically connected. This is configured as a U shape and functions as a nano-inductor. The gap between the tips of the U-shape function as a nano-capacitor. Hence, it is an optical nano-LC resonator. These "inclusions" create local electric and magnetic fields when externally excited. These inclusions are usually ten times smaller than the vacuum wavelength of the light c0 at the resonance frequency. The inclusions can then be evaluated by using an effective medium approximation. PMs display a magnetic response with useful magnitude at optical frequencies. This includes negative permeability, despite the absence of magnetic materials. Analogous to ordinary optical material, PMs can be treated as an effective medium that is characterized by effective medium parameters ε(ω) and μ(ω), or similarly, εeff and μeff. The negative refractive index of PMs in the optical frequency range was experimentally demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) and by Brueck et al. (at λ = 2 μm) at nearly the same time. Effective medium model An effective (transmission) medium approximation describes material slabs that, when reacting to an external excitation, are "effectively" homogeneous, with corresponding "effective" parameters that include "effective" ε and μ and apply to the slab as a whole. Individual inclusions or cells may have values different from the slab. However, there are cases where the effective medium approximation does not hold and one needs to be aware of its applicability. Coupling magnetism Negative magnetic permeability was originally achieved in a left-handed medium at microwave frequencies by using arrays of split-ring resonators. In most natural materials, the magnetically coupled response starts to taper off at frequencies in the gigahertz range, which implies that significant magnetism does not occur at optical frequencies. The effective permeability of such materials is unity, μeff = 1. Hence, the magnetic component of a radiated electromagnetic field has virtually no effect on natural occurring materials at optical frequencies. In metamaterials the cell acts as a meta-atom, a larger scale magnetic dipole, analogous to the picometer-sized atom. For meta-atoms constructed from gold, μ < 0 can be achieved at telecommunication frequencies but not at visible frequencies. The visible frequency has been elusive because the plasma frequency of metals is the ultimate limiting condition. 
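The nano-LC resonator picture above can be made concrete with a back-of-the-envelope sketch. The inductance and capacitance below are assumed round numbers chosen only for illustration (they are not values given in the source); the point is simply that nanoscale L and C push the resonance $f_0 = 1/(2\pi\sqrt{LC})$ into the optical range.

```python
import math

# Assumed, illustrative values for a nanoscale LC resonator (not from the source).
L = 1e-13  # inductance of the U-shaped "nano-inductor", in henries
C = 1e-18  # capacitance of the gap "nano-capacitor", in farads

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # LC resonance frequency
wavelength_nm = 3e8 / f0 * 1e9                 # corresponding free-space wavelength

print(f"resonance frequency ~ {f0:.2e} Hz")
print(f"free-space wavelength ~ {wavelength_nm:.0f} nm")
```

With these particular numbers the resonance lands near 500 THz, a free-space wavelength of roughly 600 nm, which is why shrinking the SRR geometry pushes the magnetic response toward visible frequencies.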
Design and fabrication Optical wavelengths are much shorter than microwaves, making subwavelength optical metamaterials more difficult to realize. Microwave metamaterials can be fabricated from circuit board materials, while lithography techniques must be employed to produce PMs. Successful experiments used a periodic arrangement of short wires or metallic pieces with varied shapes. In a different study the whole slab was electrically connected. Fabrication techniques include electron beam lithography, nanostructuring with a focused ion beam and interference lithography. In 2014 a polarization-insensitive metamaterial prototype was demonstrated to absorb energy over a broad band (a super-octave) of infrared wavelengths. The material displayed greater than 98% measured average absorptivity that it maintained over a wide ±45° field-of-view for mid-infrared wavelengths between 1.77 and 4.81 μm. One use is to conceal objects from infrared sensors. Palladium provided greater bandwidth than silver or gold. A genetic algorithm randomly modified an initial candidate pattern, testing and eliminating all but the best. The process was repeated over multiple generations until the design became effective. The metamaterial is made of four layers on a silicon substrate. The first layer is palladium, covered by polyimide (plastic) and a palladium screen on top. The screen has sub-wavelength cutouts that block the various wavelengths. A polyimide layer caps the whole absorber. It can absorb 90 percent of infrared radiation at up to a 55 degree angle to the screen. The layers do not need accurate alignment. The polyimide cap protects the screen and helps reduce any impedance mismatch that might occur when the wave crosses from the air into the device. Research One-way transmission In 2015 visible light joined microwave and infrared NIMs in propagating light in only one direction. ("mirrors" instead reduce light transmission in the reverse direction, requiring low light levels behind the mirror to work.) The material combined two optical nanostructures: a multi-layered block of alternating silver and glass sheets and metal grates. The silver-glass structure is a "hyperbolic" metamaterial, which treats light differently depending on which direction the waves are traveling. Each layer is tens of nanometers thick—much thinner than visible light's 400 to 700 nm wavelengths, making the block opaque to visible light, although light entering at certain angles can propagate inside the material. Adding chromium grates with sub-wavelength spacings bent incoming red or green light waves enough that they could enter and propagate inside the block. On the opposite side of the block, another set of grates allowed light to exit, angled away from its original direction. The spacing of the exit grates was different from that of the entrance grates, bending incident light so that external light could not enter the block from that side. Around 30 times more light passed through in the forward direction than in reverse. The intervening blocks reduced the need for precise alignment of the two grates with respect to each other. Such structures hold potential for applications in optical communication—for instance, they could be integrated into photonic computer chips that split or combine signals carried by light waves. Other potential applications include biosensing using nanoscale particles to deflect light to angles steep enough to travel through the hyperbolic material and out the other side. 
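The genetic-algorithm design loop described above can be illustrated with a toy sketch. Everything here is an assumption made for illustration: the absorber pattern is reduced to a short bit string and the fitness function is a simple stand-in, whereas the actual work scored candidate screen patterns with full electromagnetic absorptivity simulations.

```python
import random

random.seed(0)

PATTERN_BITS = 32      # toy encoding of a screen cutout pattern (assumed)
POP_SIZE = 40
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(pattern):
    """Stand-in objective: reward patterns with a target fraction of open cells.
    A real design loop would score broadband absorptivity from an EM solver."""
    open_fraction = sum(pattern) / len(pattern)
    return 1.0 - abs(open_fraction - 0.6)

def mutate(pattern):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in pattern]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(PATTERN_BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # keep the best half, then refill the population with mutated offspring
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 3))
```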
Lumped circuit elements By employing a combination of plasmonic and non-plasmonic nanoparticles, lumped circuit element nanocircuits at infrared and optical frequencies appear to be possible. Conventional lumped circuit elements are not available in a conventional way. Subwavelength lumped circuit elements proved workable in the microwave and radio frequency (RF) domain. The lumped element concept allowed for element simplification and circuit modularization. Nanoscale fabrication techniques exist to accomplish subwavelength geometries. Cell design Metals such as gold, silver, aluminum and copper conduct currents at RF and microwave frequencies. At optical frequencies characteristics of some noble metals are altered. Rather than normal current flow, plasmonic resonances occur as the real part of the complex permittivity becomes negative. Therefore, the main current flow is actually the electric displacement current density ∂D / ∂t, and can be termed as the “flowing optical current". At subwavelength scales the cell's impedance becomes dependent on shape, size, material and the optical frequency illumination. The particle's orientation with the optical electric field may also help determine the impedance. Conventional silicon dielectrics have the real permittivity component εreal > 0 at optical frequencies, causing the nanoparticle to act as a capacitive impedance, a nanocapacitor. Conversely, if the material is a noble metal such as gold or silver, with εreal < 0, then it takes on inductive characteristics, becoming a nanoinductor. Material loss is represented as a nano-resistor. Tunability The most commonly applied scheme to achieve a tunable index of refraction is electro-optical tuning. Here the change in refractive index is proportional to either the applied electric field, or is proportional to the square modulus of the electric field. These are the Pockels effect and Kerr effects, respectively. An alternative is to employ a nonlinear optical material and depend on the optical field intensity to modify the refractive index or magnetic parameters. Layering Stacking layers produces NIMs at optical frequencies. However, the surface configuration (non-planar, bulk) of the SRR normally prevents stacking. Although a single-layer SRR structure can be constructed on a dielectric surface, it is relatively difficult to stack these bulk structures due to alignment tolerance requirements. A stacking technique for SRRs was published in 2007 that uses dielectric spacers to apply a planarization procedure to flatten the SRR layer. It appears that arbitrary many layers can be made this way, including any chosen number of unit cells and variant spatial arrangements of individual layers. Frequency doubling In 2014 researchers announced a 400 nanometer thick frequency-doubling non-linear mirror that can be tuned to work at near-infrared to mid-infrared to terahertz frequencies. The material operates with much lower intensity light than traditional approaches. For a given input light intensity and structure thickness, the metamaterial produced approximately one million times higher intensity output. The mirrors do not require matching the phase velocities of the input and output waves. It can produce giant nonlinear response for multiple nonlinear optical processes, such as second harmonic, sum- and difference-frequency generation, as well a variety of four-wave mixing processes. The demonstration device converted light with a wavelength of 8000 to 4000 nanometers. 
The device is made of a stack of thin layers of indium, gallium and arsenic or aluminum, indium and arsenic. One hundred of these layers, each between one and twelve nanometers thick, form coupled quantum wells; the stack is faced on top by a pattern of asymmetrical, crossed gold nanostructures and on the bottom by a layer of gold. Potential applications include remote sensing and medical applications that call for compact laser systems.
Other
Dyakonov surface waves (DSWs) relate to birefringence associated with photonic crystals and metamaterial anisotropy. Photonic metamaterials have recently been operated at near-infrared wavelengths of 780 nm, 813 nm and 772 nm.
Physical sciences
Basics_3
Physics
34880278
https://en.wikipedia.org/wiki/Separation%20process
Separation process
A separation process is a method that converts a mixture or a solution of chemical substances into two or more distinct product mixtures; it is a scientific process of separating two or more substances in order to obtain purity. At least one product mixture from the separation is enriched in one or more of the source mixture's constituents. In some cases, a separation may fully divide the mixture into pure constituents. Separations exploit differences in chemical properties or physical properties (such as size, shape, charge, mass, density, or chemical affinity) between the constituents of a mixture. Processes are often classified according to the particular properties they exploit to achieve separation. If no single difference can be used to accomplish the desired separation, multiple operations can often be combined to achieve the desired end. With a few exceptions, elements or compounds exist in nature in an impure state. Often these raw materials must go through a separation before they can be put to productive use, making separation techniques essential for the modern industrial economy. The purpose of separation may be:
analytical: to identify how much of a mixture is attributable to each component, without attempting to harvest the fractions.
preparative: to "prepare" fractions for input into processes that benefit when components are separated.
Separations may be performed on a small scale, as in a laboratory for analytical purposes, or on a large scale, as in a chemical plant.
Complete and incomplete separation
Some types of separation require complete purification of a certain component. An example is the production of aluminum metal from bauxite ore through electrolysis refining. In contrast, an incomplete separation process may specify an output to consist of a mixture instead of a single pure component. A good example of an incomplete separation technique is oil refining. Crude oil occurs naturally as a mixture of various hydrocarbons and impurities. The refining process splits this mixture into other, more valuable mixtures such as natural gas, gasoline and chemical feedstocks, none of which are pure substances, but each of which must be separated from the raw crude. In both complete and incomplete separation, a series or cascade of separations may be necessary to obtain the desired end products. In the case of oil refining, crude is subjected to a long series of individual distillation steps, each of which produces a different product or intermediate.
List of separation techniques
Centrifugation and cyclonic separation, separates based on density differences
Chelation
Chromatography, separates dissolved substances by different interaction with (i.e., travel through) a material
High-performance liquid chromatography (HPLC)
Thin-layer chromatography (TLC)
Countercurrent chromatography (CCC)
Droplet countercurrent chromatography (DCC)
Paper chromatography
Ion chromatography
Size-exclusion chromatography (SEC)
Affinity chromatography
Centrifugal partition chromatography
Gas chromatography and inverse gas chromatography
Crystallization
Decantation
Demister (vapor), removes liquid droplets from gas streams
Distillation, used for mixtures of liquids with different boiling points
Drying, removes liquid from a solid by vaporization or evaporation
Electrophoresis, separates organic molecules based on their different interaction with a gel under an electric potential (i.e., different travel)
Capillary electrophoresis
Electrostatic separation, works on the principle of corona discharge, where two plates are placed close together and high voltage is applied; this high voltage is used to separate the ionized particles
Elutriation
Evaporation
Extraction
Leaching
Liquid–liquid extraction
Solid-phase extraction
Supercritical fluid extraction
Subcritical fluid extraction
Field flow fractionation
Filtration – mesh, bag and paper filters are used to remove large particulates suspended in fluids (e.g., fly ash), while membrane processes including microfiltration, ultrafiltration, nanofiltration, reverse osmosis, and dialysis (biochemistry), utilising synthetic membranes, separate micrometre-sized or smaller species
Flocculation, separates a solid from a liquid in a colloid by use of a flocculant, which promotes the solid clumping into flocs
Fractional distillation
Fractional freezing
Magnetic separation
Oil–water separation, gravimetrically separates suspended oil droplets from waste water in oil refineries, petrochemical and chemical plants, natural gas processing plants and similar industries
Precipitation
Recrystallization
Scrubbing, separation of particulates (solids) or gases from a gas stream using liquid
Sedimentation, separates using density differences
Gravity separation
Sieving
Adsorption, adhesion of atoms, ions or molecules of gas, liquid, or dissolved solids to a surface
Stripping
Sublimation
Vapor–liquid separation, separates by gravity, based on the Souders–Brown equation (see the equation after this list)
Winnowing
Zone refining
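For the vapor–liquid separation entry above, the Souders–Brown equation it cites is commonly written as
$$ v_{\max} = k \sqrt{\frac{\rho_L - \rho_V}{\rho_V}}, $$
where $v_{\max}$ is the maximum allowable vapor velocity, $\rho_L$ and $\rho_V$ are the liquid and vapor densities, and $k$ is an empirically determined vapor velocity (capacity) factor. This is the standard textbook form, quoted here for orientation rather than taken from the source, which does not supply values for $k$.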
Physical sciences
Separation processes
Chemistry
34888669
https://en.wikipedia.org/wiki/Endospore%20staining
Endospore staining
Endospore staining is a technique used in bacteriology to identify the presence of endospores in a bacterial sample. Within bacteria, endospores are protective structures used to survive extreme conditions, including high temperatures; they are also highly resistant to chemicals. Endospores contain little or no ATP, which reflects how dormant they can be. Endospores have a tough outer coating made up of keratin which protects their DNA, along with other protective adaptations. Endospores are able to germinate back into vegetative cells, and their protective nature makes them difficult to stain using normal techniques such as simple staining and Gram staining. Special techniques for endospore staining include the Schaeffer–Fulton stain and the Moeller stain.
History
Endospores were first studied in 1876 by scientists Cohn and Koch. It was found that endospores could not be stained using simple stains such as methylene blue, safranin, and carbol fuchsin. These scientists, along with a few others, found out that spores were dormant and resistant to heat. In the early 1900s, researchers were trying to find alternative methods to prevent disease and infection from these endospores. In 1922, Dorner published a method for staining endospores. He found a differential staining technique where endospores appear green and vegetative cells appear pinkish red. Dorner used heat as a step in the process, but it was time-consuming, so in 1933, Schaeffer and Fulton modified his method. Schaeffer and Fulton made the heating process a lot faster by using a Bunsen burner. Although this method was not the most beneficial, it was a lot more convenient than Dorner's method. This improved method provided a quicker and easier test and allowed for the spores to be more susceptible to the dyes. To this day, the Schaeffer–Fulton stain is still performed to help identify bacteria.
Examples
Endospores can last for decades in many harsh conditions, such as drying and freezing. This is because the DNA inside the endospore can survive over a long period. Most bacteria are unable to form these highly resistant endospores, but some common endospore-forming species belong to the genera Bacillus (over 100 species) and Clostridium (over 160 species).
Bacillus anthracis, which causes anthrax
Bacillus cereus, which can cause two types of food poisoning: emetic and diarrheal
Bacillus subtilis, found in soil
Clostridium tetani, whose spores cause lockjaw (tetanus) and rigid paralysis
Clostridium botulinum, whose spores are found in foods that have not been canned properly; its toxin, sold commercially as Botox, blocks nerve transmission
Clostridioides difficile, which causes inflammation of the colon, most often after treatment with other antibiotics; symptoms include diarrhea, belly pain, and fever
Shape and location
Types of endospores that can be identified include free endospores, central endospores (middle of the cell), subterminal endospores (between the end and middle of the cell), and terminal endospores (end of the cell). There can also be a combination of terminal and subterminal endospores. Endospores can be differentiated based on shape, either spherical or elliptical (oval), size relative to the cell, and whether they cause the cell to look swollen or not.
Staining mechanism
In the Schaeffer–Fulton staining method, a primary stain containing malachite green is forced into the spore by steaming the bacteria. Malachite green can be left on the slide for 15 minutes or more to stain the spores.
It takes a long time for the spores to stain due to their density, so heat acts as the mordant when performing this differential stain. Malachite green is water-soluble, so vegetative cells and spore mother cells can be decolorized with distilled water and counterstained with 0.5% safranin. In the end, a proper smear would show the endospore as a green dot within either a red or pink-colored cell. Mycobacterium is one obstacle faced with this type of staining process because it will still stain green even though it does not produce any endospores. This is due to its waxy cell wall, which retains the malachite green dye even after the decolorizing process. A different type of staining called the acid-fast stain has to be done in order to get further information about this particular type of bacterium.
Staining procedure
Using aseptic technique, prepare an air-dried, heat-fixed slide with the desired organism.
Prepare a boiling water bath. Cover the slide with a piece of paper towel and place it on a staining rack over the water bath.
Flood the paper towel on the slide with malachite green (primary stain).
Steam the slide for 5 to 7 minutes (mordant).
After the time is up, carefully remove the slide from the water bath using forceps and take off the paper towel.
Let the slide cool down, and then, using the forceps over the staining rack, gently rinse with distilled water until the runoff is clear (decolorizer).
Pour off any excess water, place the slide on the staining rack, and flood with safranin (counterstain) for one minute.
Rinse off any excess safranin gently with distilled water and carefully blot both sides dry.
When the slide is dry, view it under the microscope with the oil immersion objective (100X).
Biology and health sciences
Basics_3
Biology
41597450
https://en.wikipedia.org/wiki/Poisson%20point%20process
Poisson point process
In probability theory, statistics and related fields, a Poisson point process (also known as: Poisson random measure, Poisson random point field and Poisson point field) is a type of mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another. The process's name derives from the fact that the number of points in any given finite region follows a Poisson distribution. The process and the distribution are named after French mathematician Siméon Denis Poisson. The process itself was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and actuarial science. This point process is used as a mathematical model for seemingly random processes in numerous disciplines including astronomy, biology, ecology,geology, seismology, physics, economics, image processing, and telecommunications. The Poisson point process is often defined on the real number line, where it can be considered a stochastic process. It is used, for example, in queueing theory to model random events distributed in time, such as the arrival of customers at a store, phone calls at an exchange or occurrence of earthquakes. In the plane, the point process, also known as a spatial Poisson process, can represent the locations of scattered objects such as transmitters in a wireless network, particles colliding into a detector or trees in a forest. The process is often used in mathematical models and in the related fields of spatial point processes, stochastic geometry, spatial statistics and continuum percolation theory. The point process depends on a single mathematical object, which, depending on the context, may be a constant, a locally integrable function or, in more general settings, a Radon measure. In the first case, the constant, known as the rate or intensity, is the average density of the points in the Poisson process located in some region of space. The resulting point process is called a homogeneous or stationary Poisson point process. In the second case, the point process is called an inhomogeneous or nonhomogeneous Poisson point process, and the average density of points depend on the location of the underlying space of the Poisson point process. The word point is often omitted, but there are other Poisson processes of objects, which, instead of points, consist of more complicated mathematical objects such as lines and polygons, and such processes can be based on the Poisson point process. Both the homogeneous and nonhomogeneous Poisson point processes are particular cases of the generalized renewal process. Overview of definitions Depending on the setting, the process has several equivalent definitions as well as definitions of varying generality owing to its many applications and characterizations. The Poisson point process can be defined, studied and used in one dimension, for example, on the real line, where it can be interpreted as a counting process or part of a queueing model; in higher dimensions such as the plane where it plays a role in stochastic geometry and spatial statistics; or on more general mathematical spaces. Consequently, the notation, terminology and level of mathematical rigour used to define and study the Poisson point process and points processes in general vary according to the context. 
Despite all this, the Poisson point process has two key properties—the Poisson property and the independence property—that play an essential role in all settings where the Poisson point process is used. The two properties are not logically independent; indeed, the Poisson distribution of point counts implies the independence property, while the converse implication requires the additional assumptions that (i) the point process is simple, (ii) it has no fixed atoms, and (iii) it is a.s. boundedly finite.
Poisson distribution of point counts
A Poisson point process is characterized via the Poisson distribution. The Poisson distribution is the probability distribution of a random variable $N$ (called a Poisson random variable) such that the probability that $N$ equals $n$ is given by:
$$ \Pr\{N = n\} = \frac{\Lambda^{n}}{n!} e^{-\Lambda}, $$
where $n!$ denotes factorial and the parameter $\Lambda$ determines the shape of the distribution. (In fact, $\Lambda$ equals the expected value of $N$.) By definition, a Poisson point process has the property that the number of points in a bounded region of the process's underlying space is a Poisson-distributed random variable.
Complete independence
Consider a collection of disjoint and bounded subregions of the underlying space. By definition, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others. This property is known under several names such as complete randomness, complete independence, or independent scattering and is common to all Poisson point processes. In other words, there is a lack of interaction between different regions and the points in general, which motivates the Poisson process being sometimes called a purely or completely random process.
Homogeneous Poisson point process
If a Poisson point process has a parameter of the form $\Lambda = \nu\lambda$, where $\nu$ is Lebesgue measure (that is, it assigns length, area, or volume to sets) and $\lambda$ is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter $\lambda$, called rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region, where rate is usually used when the underlying space has one dimension. The parameter $\lambda$ can be interpreted as the average number of points per some unit of extent such as length, area, volume, or time, depending on the underlying mathematical space, and it is also called the mean density or mean rate; see Terminology.
Interpreted as a counting process
The homogeneous Poisson point process, when considered on the positive half-line, can be defined as a counting process, a type of stochastic process, which can be denoted as $\{N(t), t \ge 0\}$. A counting process represents the total number of occurrences or events that have happened up to and including time $t$. A counting process $\{N(t), t \ge 0\}$ is a homogeneous Poisson counting process with rate $\lambda > 0$ if it has the following three properties: $N(0) = 0$; it has independent increments; and the number of events (or points) in any interval of length $t$ is a Poisson random variable with parameter (or mean) $\lambda t$. The last property implies:
$$ \operatorname{E}[N(t)] = \lambda t. $$
In other words, the probability of the random variable $N(t)$ being equal to $n$ is given by:
$$ \Pr\{N(t) = n\} = \frac{(\lambda t)^{n}}{n!} e^{-\lambda t}. $$
The Poisson counting process can also be defined by stating that the time differences between events of the counting process are exponential variables with mean $1/\lambda$. The time differences between the events or arrivals are known as interarrival or interoccurence times.
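A minimal simulation sketch of the interarrival-time characterization just stated: arrival times on $[0, t_{\max}]$ are built by accumulating independent exponential gaps with mean $1/\lambda$, so the number of arrivals in the window is Poisson with mean $\lambda t_{\max}$. The rate and horizon below are arbitrary illustrative values, and NumPy is assumed only for its random number generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def homogeneous_poisson_arrivals(rate, t_max):
    """Arrival times of a homogeneous Poisson process on [0, t_max], generated
    by summing independent Exponential(mean = 1/rate) interarrival times."""
    times = []
    t = rng.exponential(1.0 / rate)
    while t <= t_max:
        times.append(t)
        t += rng.exponential(1.0 / rate)
    return np.array(times)

# Illustrative parameters: rate 2 arrivals per unit time over a horizon of 10.
arrivals = homogeneous_poisson_arrivals(rate=2.0, t_max=10.0)
print(f"{len(arrivals)} arrivals; expected count = {2.0 * 10.0}")
```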
Interpreted as a point process on the real line Interpreted as a point process, a Poisson point process can be defined on the real line by considering the number of points of the process in the interval . For the homogeneous Poisson point process on the real line with parameter , the probability of this random number of points, written here as , being equal to some counting number is given by: For some positive integer , the homogeneous Poisson point process has the finite-dimensional distribution given by: where the real numbers . In other words, is a Poisson random variable with mean , where . Furthermore, the number of points in any two disjoint intervals, say, and are independent of each other, and this extends to any finite number of disjoint intervals. In the queueing theory context, one can consider a point existing (in an interval) as an event, but this is different to the word event in the probability theory sense. It follows that is the expected number of arrivals that occur per unit of time. Key properties The previous definition has two important features shared by Poisson point processes in general: the number of arrivals in each finite interval has a Poisson distribution; the number of arrivals in disjoint intervals are independent random variables. Furthermore, it has a third feature related to just the homogeneous Poisson point process: the Poisson distribution of the number of arrivals in each interval only depends on the interval's length . In other words, for any finite , the random variable is independent of , so it is also called a stationary Poisson process. Law of large numbers The quantity can be interpreted as the expected or average number of points occurring in the interval , namely: where denotes the expectation operator. In other words, the parameter of the Poisson process coincides with the density of points. Furthermore, the homogeneous Poisson point process adheres to its own form of the (strong) law of large numbers. More specifically, with probability one: where denotes the limit of a function, and is expected number of arrivals occurred per unit of time. Memoryless property The distance between two consecutive points of a point process on the real line will be an exponential random variable with parameter (or equivalently, mean ). This implies that the points have the memoryless property: the existence of one point existing in a finite interval does not affect the probability (distribution) of other points existing, but this property has no natural equivalence when the Poisson process is defined on a space with higher dimensions. Orderliness and simplicity A point process with stationary increments is sometimes said to be orderly or regular if: where little-o notation is being used. A point process is called a simple point process when the probability of any of its two points coinciding in the same position, on the underlying space, is zero. For point processes in general on the real line, the property of orderliness implies that the process is simple, which is the case for the homogeneous Poisson point process. Martingale characterization On the real line, the homogeneous Poisson point process has a connection to the theory of martingales via the following characterization: a point process is the homogeneous Poisson point process if and only if is a martingale. 
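The formulas referenced in the passage above can be written out in standard notation. This is a reconstruction of the usual textbook forms (with $N(a,b]$ the number of points in $(a,b]$ and $\lambda > 0$ the rate), not expressions recovered verbatim from the source:
$$ \Pr\{N(a,b] = n\} = \frac{[\lambda(b-a)]^{n}}{n!} e^{-\lambda(b-a)}, $$
$$ \Pr\{N(a_i,b_i] = n_i,\ i = 1,\dots,k\} = \prod_{i=1}^{k} \frac{[\lambda(b_i-a_i)]^{n_i}}{n_i!} e^{-\lambda(b_i-a_i)}, \qquad a_1 < b_1 \le a_2 < b_2 \le \cdots \le a_k < b_k, $$
$$ \operatorname{E}[N(0,t]] = \lambda t, \qquad \frac{N(0,t]}{t} \xrightarrow{\ \text{a.s.}\ } \lambda \ \text{ as } t \to \infty, \qquad \Pr\{N(t,t+\delta] > 1\} = o(\delta), $$
and the martingale characterization refers to the compensated process $M(t) = N(0,t] - \lambda t$.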
Relationship to other processes On the real line, the Poisson process is a type of continuous-time Markov process known as a birth process, a special case of the birth–death process (with just births and zero deaths). More complicated processes with the Markov property, such as Markov arrival processes, have been defined where the Poisson process is a special case. Restricted to the half-line If the homogeneous Poisson process is considered just on the half-line , which can be the case when represents time then the resulting process is not truly invariant under translation. In that case the Poisson process is no longer stationary, according to some definitions of stationarity. Applications There have been many applications of the homogeneous Poisson process on the real line in an attempt to model seemingly random and independent events occurring. It has a fundamental role in queueing theory, which is the probability field of developing suitable stochastic models to represent the random arrival and departure of certain phenomena. For example, customers arriving and being served or phone calls arriving at a phone exchange can be both studied with techniques from queueing theory. Generalizations The homogeneous Poisson process on the real line is considered one of the simplest stochastic processes for counting random numbers of points. This process can be generalized in a number of ways. One possible generalization is to extend the distribution of interarrival times from the exponential distribution to other distributions, which introduces the stochastic process known as a renewal process. Another generalization is to define the Poisson point process on higher dimensional spaces such as the plane. Spatial Poisson point process A spatial Poisson process is a Poisson point process defined in the plane . For its mathematical definition, one first considers a bounded, open or closed (or more precisely, Borel measurable) region of the plane. The number of points of a point process existing in this region is a random variable, denoted by . If the points belong to a homogeneous Poisson process with parameter , then the probability of points existing in is given by: where denotes the area of . For some finite integer , we can give the finite-dimensional distribution of the homogeneous Poisson point process by first considering a collection of disjoint, bounded Borel (measurable) sets . The number of points of the point process existing in can be written as . Then the homogeneous Poisson point process with parameter has the finite-dimensional distribution: Applications The spatial Poisson point process features prominently in spatial statistics, stochastic geometry, and continuum percolation theory. This point process is applied in various physical sciences such as a model developed for alpha particles being detected. In recent years, it has been frequently used to model seemingly disordered spatial configurations of certain wireless communication networks. For example, models for cellular or mobile phone networks have been developed where it is assumed the phone network transmitters, known as base stations, are positioned according to a homogeneous Poisson point process. Defined in higher dimensions The previous homogeneous Poisson point process immediately extends to higher dimensions by replacing the notion of area with (high dimensional) volume. 
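The planar probability referenced above has the standard form below, reconstructed in textbook notation ($|B|$ for the area of a bounded region $B$, $\lambda$ for the intensity) rather than copied from the source:
$$ \Pr\{N(B) = n\} = \frac{(\lambda |B|)^{n}}{n!} e^{-\lambda |B|}, $$
and for disjoint bounded sets $B_1, \dots, B_k$ the counts $N(B_1), \dots, N(B_k)$ are independent, so the finite-dimensional distribution is the product of such Poisson terms.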
For some bounded region of Euclidean space , if the points form a homogeneous Poisson process with parameter , then the probability of points existing in is given by: where now denotes the -dimensional volume of . Furthermore, for a collection of disjoint, bounded Borel sets , let denote the number of points of existing in . Then the corresponding homogeneous Poisson point process with parameter has the finite-dimensional distribution: Homogeneous Poisson point processes do not depend on the position of the underlying space through its parameter , which implies it is both a stationary process (invariant to translation) and an isotropic (invariant to rotation) stochastic process. Similarly to the one-dimensional case, the homogeneous point process is restricted to some bounded subset of , then depending on some definitions of stationarity, the process is no longer stationary. Points are uniformly distributed If the homogeneous point process is defined on the real line as a mathematical model for occurrences of some phenomenon, then it has the characteristic that the positions of these occurrences or events on the real line (often interpreted as time) will be uniformly distributed. More specifically, if an event occurs (according to this process) in an interval where , then its location will be a uniform random variable defined on that interval. Furthermore, the homogeneous point process is sometimes called the uniform Poisson point process (see Terminology). This uniformity property extends to higher dimensions in the Cartesian coordinate, but not in, for example, polar coordinates. Inhomogeneous Poisson point process The inhomogeneous or nonhomogeneous Poisson point process (see Terminology) is a Poisson point process with a Poisson parameter set as some location-dependent function in the underlying space on which the Poisson process is defined. For Euclidean space , this is achieved by introducing a locally integrable positive function , such that for every bounded region the (-dimensional) volume integral of over region is finite. In other words, if this integral, denoted by , is: where is a (-dimensional) volume element, then for every collection of disjoint bounded Borel measurable sets , an inhomogeneous Poisson process with (intensity) function has the finite-dimensional distribution: Furthermore, has the interpretation of being the expected number of points of the Poisson process located in the bounded region , namely Defined on the real line On the real line, the inhomogeneous or non-homogeneous Poisson point process has mean measure given by a one-dimensional integral. For two real numbers and , where , denote by the number points of an inhomogeneous Poisson process with intensity function occurring in the interval . The probability of points existing in the above interval is given by: where the mean or intensity measure is: which means that the random variable is a Poisson random variable with mean . A feature of the one-dimension setting, is that an inhomogeneous Poisson process can be transformed into a homogeneous by a monotone transformation or mapping, which is achieved with the inverse of . Counting process interpretation The inhomogeneous Poisson point process, when considered on the positive half-line, is also sometimes defined as a counting process. With this interpretation, the process, which is sometimes written as , represents the total number of occurrences or events that have happened up to and including time . 
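The inhomogeneous formulas referenced above take the following standard form, again a textbook reconstruction rather than the source's own expressions, with intensity function $\lambda(\cdot)$ and mean measure $\Lambda$:
$$ \Lambda(B) = \int_{B} \lambda(x)\,\mathrm{d}x, \qquad \Pr\{N(B) = n\} = \frac{[\Lambda(B)]^{n}}{n!} e^{-\Lambda(B)}, $$
and on the real line, for $a < b$,
$$ \Lambda(a,b) = \int_{a}^{b} \lambda(t)\,\mathrm{d}t, \qquad \Pr\{N(a,b] = n\} = \frac{[\Lambda(a,b)]^{n}}{n!} e^{-\Lambda(a,b)}. $$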
A counting process is said to be an inhomogeneous Poisson counting process if it has the four properties: has independent increments; and where is asymptotic or little-o notation for as . In the case of point processes with refractoriness (e.g., neural spike trains) a stronger version of property 4 applies: . The above properties imply that is a Poisson random variable with the parameter (or mean) which implies Spatial Poisson process An inhomogeneous Poisson process defined in the plane is called a spatial Poisson process It is defined with intensity function and its intensity measure is obtained performing a surface integral of its intensity function over some region. For example, its intensity function (as a function of Cartesian coordinates and ) can be so the corresponding intensity measure is given by the surface integral where is some bounded region in the plane . In higher dimensions In the plane, corresponds to a surface integral while in the integral becomes a (-dimensional) volume integral. Applications When the real line is interpreted as time, the inhomogeneous process is used in the fields of counting processes and in queueing theory. Examples of phenomena which have been represented by or appear as an inhomogeneous Poisson point process include: Goals being scored in a soccer game. Defects in a circuit board In the plane, the Poisson point process is important in the related disciplines of stochastic geometry and spatial statistics. The intensity measure of this point process is dependent on the location of underlying space, which means it can be used to model phenomena with a density that varies over some region. In other words, the phenomena can be represented as points that have a location-dependent density. This processes has been used in various disciplines and uses include the study of salmon and sea lice in the oceans, forestry, and search problems. Interpretation of the intensity function The Poisson intensity function has an interpretation, considered intuitive, with the volume element in the infinitesimal sense: is the infinitesimal probability of a point of a Poisson point process existing in a region of space with volume located at . For example, given a homogeneous Poisson point process on the real line, the probability of finding a single point of the process in a small interval of width is approximately . In fact, such intuition is how the Poisson point process is sometimes introduced and its distribution derived. Simple point process If a Poisson point process has an intensity measure that is a locally finite and diffuse (or non-atomic), then it is a simple point process. For a simple point process, the probability of a point existing at a single point or location in the underlying (state) space is either zero or one. This implies that, with probability one, no two (or more) points of a Poisson point process coincide in location in the underlying space. Simulation Simulating a Poisson point process on a computer is usually done in a bounded region of space, known as a simulation window, and requires two steps: appropriately creating a random number of points and then suitably placing the points in a random manner. Both these two steps depend on the specific Poisson point process that is being simulated. Step 1: Number of points The number of points in the window, denoted here by , needs to be simulated, which is done by using a (pseudo)-random number generating function capable of simulating Poisson random variables. 
Homogeneous case For the homogeneous case with the constant , the mean of the Poisson random variable is set to where is the length, area or (-dimensional) volume of . Inhomogeneous case For the inhomogeneous case, is replaced with the (-dimensional) volume integral Step 2: Positioning of points The second stage requires randomly placing the points in the window . Homogeneous case For the homogeneous case in one dimension, all points are uniformly and independently placed in the window or interval . For higher dimensions in a Cartesian coordinate system, each coordinate is uniformly and independently placed in the window . If the window is not a subspace of Cartesian space (for example, inside a unit sphere or on the surface of a unit sphere), then the points will not be uniformly placed in , and suitable change of coordinates (from Cartesian) are needed. Inhomogeneous case For the inhomogeneous case, a couple of different methods can be used depending on the nature of the intensity function . If the intensity function is sufficiently simple, then independent and random non-uniform (Cartesian or other) coordinates of the points can be generated. For example, simulating a Poisson point process on a circular window can be done for an isotropic intensity function (in polar coordinates and ), implying it is rotationally variant or independent of but dependent on , by a change of variable in if the intensity function is sufficiently simple. For more complicated intensity functions, one can use an acceptance-rejection method, which consists of using (or 'accepting') only certain random points and not using (or 'rejecting') the other points, based on the ratio:. where is the point under consideration for acceptance or rejection. That is, a location is uniformly randomly selected for consideration, then to determine whether to place a sample at that location a uniformly randomly drawn number in is compared to the probability density function , accepting if it is smaller than the probability density function, and repeating until the previously chosen number of samples have been drawn. General Poisson point process In measure theory, the Poisson point process can be further generalized to what is sometimes known as the general Poisson point process or general Poisson process by using a Radon measure , which is a locally finite measure. In general, this Radon measure can be atomic, which means multiple points of the Poisson point process can exist in the same location of the underlying space. In this situation, the number of points at is a Poisson random variable with mean . But sometimes the converse is assumed, so the Radon measure is diffuse or non-atomic. A point process is a general Poisson point process with intensity if it has the two following properties: the number of points in a bounded Borel set is a Poisson random variable with mean . In other words, denote the total number of points located in by , then the probability of random variable being equal to is given by: the number of points in disjoint Borel sets forms independent random variables. The Radon measure maintains its previous interpretation of being the expected number of points of located in the bounded region , namely Furthermore, if is absolutely continuous such that it has a density (which is the Radon–Nikodym density or derivative) with respect to the Lebesgue measure, then for all Borel sets it can be written as: where the density is known, among other terms, as the intensity function. 
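A minimal sketch of the two simulation steps described above, for a rectangular window. The homogeneous case draws a Poisson number of points and places them uniformly; the inhomogeneous case uses the acceptance–rejection (thinning) idea sketched above, simulating at a bounding rate and keeping each candidate point with probability proportional to the intensity there. Window size, rates and the example intensity function are illustrative assumptions, not values from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def homogeneous_poisson(rate, width, height):
    """Step 1: draw a Poisson(rate * area) point count; Step 2: place points uniformly."""
    n = rng.poisson(rate * width * height)
    return np.column_stack([rng.uniform(0, width, n), rng.uniform(0, height, n)])

def inhomogeneous_poisson(intensity, rate_max, width, height):
    """Thinning: simulate at the bounding rate rate_max, then keep a point at (x, y)
    with probability intensity(x, y) / rate_max."""
    candidates = homogeneous_poisson(rate_max, width, height)
    if len(candidates) == 0:
        return candidates
    keep = rng.uniform(size=len(candidates)) < intensity(candidates[:, 0], candidates[:, 1]) / rate_max
    return candidates[keep]

# Illustrative example: intensity decaying away from the window corner at the origin.
intensity = lambda x, y: 80.0 * np.exp(-(x**2 + y**2))
pts_hom = homogeneous_poisson(rate=20.0, width=2.0, height=1.5)
pts_inhom = inhomogeneous_poisson(intensity, rate_max=80.0, width=2.0, height=1.5)
print(len(pts_hom), "homogeneous points (expected 60);", len(pts_inhom), "inhomogeneous points")
```

Thinning is valid as long as the bounding rate dominates the intensity everywhere in the window; here the example intensity peaks at 80 at the origin, so a bounding rate of 80 suffices.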
History Poisson distribution Despite its name, the Poisson point process was neither discovered nor studied by its namesake. It is cited as an example of Stigler's law of eponymy. The name arises from the process's inherent relation to the Poisson distribution, derived by Poisson as a limiting case of the binomial distribution. It describes the probability of the sum of Bernoulli trials with probability , often likened to the number of heads (or tails) after biased coin flips with the probability of a head (or tail) occurring being . For some positive constant , as increases towards infinity and decreases towards zero such that the product is fixed, the Poisson distribution more closely approximates that of the binomial. Poisson derived the Poisson distribution, published in 1841, by examining the binomial distribution in the limit of (to zero) and (to infinity). It only appears once in all of Poisson's work, and the result was not well known during his time. Over the following years others used the distribution without citing Poisson, including Philipp Ludwig von Seidel and Ernst Abbe. At the end of the 19th century, Ladislaus Bortkiewicz studied the distribution, citing Poisson, using real data on the number of deaths from horse kicks in the Prussian army. Discovery There are a number of claims for early uses or discoveries of the Poisson point process. For example, John Michell in 1767, a decade before Poisson was born, was interested in the probability a star being within a certain region of another star under the erroneous assumption that the stars were "scattered by mere chance", and studied an example consisting of the six brightest stars in the Pleiades, without deriving the Poisson distribution. This work inspired Simon Newcomb to study the problem and to calculate the Poisson distribution as an approximation for the binomial distribution in 1860. At the beginning of the 20th century the Poisson process (in one dimension) would arise independently in different situations. In Sweden 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, where he proposed to model insurance claims with a homogeneous Poisson process. In Denmark A.K. Erlang derived the Poisson distribution in 1909 when developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang unaware of Poisson's earlier work and assumed that the number phone calls arriving in each interval of time were independent of each other. He then found the limiting case, which is effectively recasting the Poisson distribution as a limit of the binomial distribution. In 1910 Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Their experimental work had mathematical contributions from Harry Bateman, who derived Poisson probabilities as a solution to a family of differential equations, though the solution had been derived earlier, resulting in the independent discovery of the Poisson process. After this time, there were many studies and applications of the Poisson process, but its early history is complicated, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and various physical scientists. 
Early applications The years after 1909 led to a number of studies and applications of the Poisson point process; however, its early history is complex, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and others working in the physical sciences. The early results were published in different languages and in different settings, with no standard terminology and notation used. For example, in 1922 Swedish chemist and Nobel Laureate Theodor Svedberg proposed a model in which a spatial Poisson point process is the underlying process to study how plants are distributed in plant communities. A number of mathematicians started studying the process in the early 1930s, and important contributions were made by Andrey Kolmogorov, William Feller and Aleksandr Khinchin, among others. In the field of teletraffic engineering, mathematicians and statisticians studied and used Poisson and other point processes. History of terms The Swede Conny Palm in his 1943 dissertation studied the Poisson and other point processes in the one-dimensional setting by examining them in terms of the statistical or stochastic dependence between the points in time. His work contains the first known recorded use of the term point processes, as Punktprozesse in German. It is believed that William Feller was the first in print to refer to it as the Poisson process in a 1940 paper. Although the Swede Ove Lundberg used the term Poisson process in his 1940 PhD dissertation, in which Feller was acknowledged as an influence, it has been claimed that Feller coined the term before 1940. It has been remarked that both Feller and Lundberg used the term as though it were well-known, implying it was already in spoken use by then. Feller worked from 1936 to 1939 alongside Harald Cramér at Stockholm University, where Lundberg was a PhD student under Cramér. Cramér did not use the term Poisson process in a book he finished in 1936, but did in subsequent editions, which has led to the speculation that the term was coined sometime between 1936 and 1939 at Stockholm University. Terminology The terminology of point process theory in general has been criticized for being too varied. In addition to the word point often being omitted, the homogeneous Poisson (point) process is also called a stationary Poisson (point) process, as well as a uniform Poisson (point) process. The inhomogeneous Poisson point process, as well as being called nonhomogeneous, is also referred to as the non-stationary Poisson process. The term point process has been criticized, as the word process can suggest evolution over time or space, so the term random point field is used instead, resulting in the terms Poisson random point field or Poisson point field also being used. A point process is considered, and sometimes called, a random counting measure, hence the Poisson point process is also referred to as a Poisson random measure, a term used in the study of Lévy processes, but some choose to use the two terms for Poisson point processes defined on two different underlying spaces. The underlying mathematical space of the Poisson point process is called a carrier space, or state space, though the latter term has a different meaning in the context of stochastic processes. In the context of point processes, the term "state space" can mean the space on which the point process is defined, such as the real line, which corresponds to the index set or parameter set in stochastic process terminology.
The measure is called the intensity measure, mean measure, or parameter measure, as there are no standard terms. If has a derivative or density, denoted by , is called the intensity function of the Poisson point process. For the homogeneous Poisson point process, the derivative of the intensity measure is simply a constant , which can be referred to as the rate, usually when the underlying space is the real line, or the intensity. It is also called the mean rate or the mean density or rate . For , the corresponding process is sometimes referred to as the standard Poisson (point) process. The extent of the Poisson point process is sometimes called the exposure. Notation The notation of the Poisson point process depends on its setting and the field it is being applied in. For example, on the real line, the Poisson process, both homogeneous or inhomogeneous, is sometimes interpreted as a counting process, and the notation is used to represent the Poisson process. Another reason for varying notation is due to the theory of point processes, which has a couple of mathematical interpretations. For example, a simple Poisson point process may be considered as a random set, which suggests the notation , implying that is a random point belonging to or being an element of the Poisson point process . Another, more general, interpretation is to consider a Poisson or any other point process as a random counting measure, so one can write the number of points of a Poisson point process being found or located in some (Borel measurable) region as , which is a random variable. These different interpretations results in notation being used from mathematical fields such as measure theory and set theory. For general point processes, sometimes a subscript on the point symbol, for example , is included so one writes (with set notation) instead of , and can be used for the bound variable in integral expressions such as Campbell's theorem, instead of denoting random points. Sometimes an uppercase letter denotes the point process, while a lowercase denotes a point from the process, so, for example, the point or belongs to or is a point of the point process , and be written with set notation as or . Furthermore, the set theory and integral or measure theory notation can be used interchangeably. For example, for a point process defined on the Euclidean state space and a (measurable) function on , the expression demonstrates two different ways to write a summation over a point process (see also Campbell's theorem (probability)). More specifically, the integral notation on the left-hand side is interpreting the point process as a random counting measure while the sum on the right-hand side suggests a random set interpretation. Functionals and moment measures In probability theory, operations are applied to random variables for different purposes. Sometimes these operations are regular expectations that produce the average or variance of a random variable. Others, such as characteristic functions (or Laplace transforms) of a random variable can be used to uniquely identify or characterize random variables and prove results like the central limit theorem. In the theory of point processes there exist analogous mathematical tools which usually exist in the forms of measures and functionals instead of moments and functions respectively. 
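For reference, the two equivalent ways of writing a summation over a point process mentioned in the notation discussion above are usually displayed as follows. This is the standard measure-theoretic identity rather than a reconstruction of the exact symbols originally shown; here N is a point process on d-dimensional Euclidean space and f a measurable function.

```latex
\int_{\mathbb{R}^d} f(x)\, N(\mathrm{d}x) \;=\; \sum_{x \in N} f(x)
```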
Laplace functionals For a Poisson point process with intensity measure on some space , the Laplace functional is given by: One version of Campbell's theorem involves the Laplace functional of the Poisson point process. Probability generating functionals The probability generating function of a non-negative integer-valued random variable leads to the probability generating functional being defined analogously with respect to any non-negative bounded function on such that . For a point process the probability generating functional is defined as: where the product is performed for all the points in . If the intensity measure of is locally finite, then the probability generating functional is well-defined for any measurable function on . For a Poisson point process with intensity measure the generating functional is given by: which in the homogeneous case reduces to Moment measure For a general Poisson point process with intensity measure the first moment measure is its intensity measure: which for a homogeneous Poisson point process with constant intensity means: where is the length, area or volume (or more generally, the Lebesgue measure) of . The Mecke equation The Mecke equation characterizes the Poisson point process. Let be the space of all -finite measures on some general space . A point process with intensity on is a Poisson point process if and only if a certain identity, the Mecke equation, holds for all measurable functions; its standard form is included in the formulas below. For further details, see the references. Factorial moment measure For a general Poisson point process with intensity measure the -th factorial moment measure is given by the expression: where is the intensity measure or first moment measure of , which for some Borel set is given by For a homogeneous Poisson point process the -th factorial moment measure is simply: where is the length, area, or volume (or more generally, the Lebesgue measure) of . Furthermore, the -th factorial moment density is: Avoidance function The avoidance function or void probability of a point process is defined in relation to some set , which is a subset of the underlying space , as the probability of no points of existing in . More precisely, for a test set , the avoidance function is given by: For a general Poisson point process with intensity measure , its avoidance function is given by: Rényi's theorem Simple point processes are completely characterized by their void probabilities. In other words, complete information of a simple point process is captured entirely in its void probabilities, and two simple point processes have the same void probabilities if and only if they are the same point process. The case for the Poisson process is sometimes known as Rényi's theorem, named after Alfréd Rényi, who discovered the result for the case of a homogeneous Poisson process in one dimension. In one form, Rényi's theorem says that, for a diffuse (or non-atomic) Radon measure and for sets that are finite unions of rectangles (so not all Borel sets need to be checked), if a countable subset of the underlying space has the corresponding void probabilities, then it is a Poisson point process with that intensity measure.
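The displayed formulas in the functionals discussion above were lost from the text. In standard notation, writing N for a Poisson point process with intensity measure Λ on a space E, f for a non-negative measurable function, v for a function with values in [0, 1], and B for a bounded Borel set (notation introduced here, not taken from the original), the main ones read as follows. These are the textbook forms and should be taken as a hedged reconstruction rather than the exact expressions originally shown.

```latex
% Laplace functional:
L_N(f) = \mathbb{E}\!\left[\exp\!\Big(-\sum_{x \in N} f(x)\Big)\right]
       = \exp\!\left(-\int_E \big(1 - e^{-f(x)}\big)\,\Lambda(\mathrm{d}x)\right)

% Probability generating functional:
G_N(v) = \mathbb{E}\!\left[\prod_{x \in N} v(x)\right]
       = \exp\!\left(-\int_E \big(1 - v(x)\big)\,\Lambda(\mathrm{d}x)\right)

% Void (avoidance) probability:
\Pr\big(N(B) = 0\big) = e^{-\Lambda(B)}

% Mecke equation (characterizes the Poisson point process):
\mathbb{E}\!\left[\sum_{x \in N} f(x, N)\right]
       = \int_E \mathbb{E}\big[f(x, N + \delta_x)\big]\,\Lambda(\mathrm{d}x)
```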
Point process operations Mathematical operations can be performed on point processes to get new point processes and develop new mathematical models for the locations of certain objects. One example of an operation is known as thinning, which entails deleting or removing the points of some point process according to a rule, creating a new process with the remaining points (the deleted points also form a point process). Thinning For the Poisson process, the independent -thinning operation results in another Poisson point process. More specifically, a -thinning operation applied to a Poisson point process with intensity measure gives a point process of removed points that is also a Poisson point process with intensity measure , which for a bounded Borel set is given by: This thinning result of the Poisson point process is sometimes known as Prékopa's theorem. Furthermore, after randomly thinning a Poisson point process, the kept or remaining points also form a Poisson point process, which has the corresponding thinned intensity measure. The two separate Poisson point processes formed respectively from the removed and kept points are stochastically independent of each other. In other words, if a region is known to contain kept points (from the original Poisson point process), then this will have no influence on the random number of removed points in the same region. This ability to randomly create two independent Poisson point processes from one is sometimes known as splitting the Poisson point process. Superposition If there is a countable collection of point processes , then their superposition, or, in set theory language, their union, also forms a point process. In other words, any points located in any of the point processes will also be located in the superposition of these point processes. Superposition theorem The superposition theorem of the Poisson point process says that the superposition of independent Poisson point processes with mean measures will also be a Poisson point process with mean measure equal to the sum of the individual mean measures. In other words, the union of two (or countably more) Poisson processes is another Poisson process. If a point is sampled from a countable union of Poisson processes, then the probability that the point belongs to the th Poisson process is given by: For two homogeneous Poisson processes with intensities , the two previous expressions reduce to and Clustering The operation clustering is performed when each point of some point process is replaced by another (possibly different) point process. If the original process is a Poisson point process, then the resulting process is called a Poisson cluster point process. Random displacement A mathematical model may require randomly moving points of a point process to other locations on the underlying mathematical space, which gives rise to a point process operation known as displacement or translation. The Poisson point process has been used to model, for example, the movement of plants between generations, owing to the displacement theorem, which loosely says that the random independent displacement of points of a Poisson point process (on the same underlying space) forms another Poisson point process. Displacement theorem One version of the displacement theorem involves a Poisson point process on with intensity function . It is then assumed that the points of are randomly displaced somewhere else in , so that each point's displacement is independent and that the displacement of a point formerly at is a random vector with a probability density . Then the new point process is also a Poisson point process with intensity function If the Poisson process is homogeneous and the displacement density depends only on the difference between the new and old locations, then the displaced process is again a homogeneous Poisson point process with the same intensity. In other words, after each random and independent displacement of points, the original Poisson point process still exists. The displacement theorem can be extended such that the Poisson points are randomly displaced from one Euclidean space to another Euclidean space whose dimension need not be the same.
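A small numerical check of the splitting and superposition statements above can be written in a few lines. This is an illustrative sketch only, not part of the original text: the intensity values and thinning probability are arbitrary, and only the point counts (not locations) are simulated, since that is all these two results constrain here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

lam, p, n_trials = 50.0, 0.3, 20000
kept_counts, removed_counts = [], []
for _ in range(n_trials):
    # Homogeneous Poisson process on a window with mean number of points lam.
    n = rng.poisson(lam)
    kept = rng.binomial(n, p)          # each point kept independently with prob. p
    kept_counts.append(kept)
    removed_counts.append(n - kept)

# Splitting: kept and removed counts should each be Poisson
# (means p*lam and (1-p)*lam) and uncorrelated with each other.
kept_counts = np.array(kept_counts)
removed_counts = np.array(removed_counts)
print(kept_counts.mean(), kept_counts.var())        # both close to p * lam = 15
print(removed_counts.mean(), removed_counts.var())  # both close to (1-p) * lam = 35
print(np.corrcoef(kept_counts, removed_counts)[0, 1])  # close to 0

# Superposition: the union of two independent Poisson processes with means
# lam1 and lam2 behaves like a Poisson process with mean lam1 + lam2.
lam1, lam2 = 20.0, 30.0
union_counts = rng.poisson(lam1, n_trials) + rng.poisson(lam2, n_trials)
print(union_counts.mean(), union_counts.var())      # both close to lam1 + lam2 = 50
```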
Mapping Another property that is considered useful is the ability to map a Poisson point process from one underlying space to another space. Mapping theorem If the mapping (or transformation) adheres to some conditions, then the resulting mapped (or transformed) collection of points also forms a Poisson point process, and this result is sometimes referred to as the mapping theorem. The theorem involves some Poisson point process with mean measure on some underlying space. If the locations of the points are mapped (that is, the point process is transformed) according to some function to another underlying space, then the resulting point process is also a Poisson point process but with a different mean measure . More specifically, one can consider a (Borel measurable) function that maps a point process with intensity measure from one space to another space in such a manner that the new point process has the intensity measure: with no atoms, where is a Borel set and denotes the inverse of the function . If is a Poisson point process, then the new process is also a Poisson point process with the intensity measure . Approximations with Poisson point processes The tractability of the Poisson process means that sometimes it is convenient to approximate a non-Poisson point process with a Poisson one. The overall aim is to approximate both the number of points of some point process and the location of each point by a Poisson point process. There are a number of methods that can be used to justify, informally or rigorously, approximating the occurrence of random events or phenomena with suitable Poisson point processes. The more rigorous methods involve deriving upper bounds on the probability metrics between the Poisson and non-Poisson point processes, while other methods can be justified by less formal heuristics. Clumping heuristic One method for approximating random events or phenomena with Poisson processes is called the clumping heuristic. The general heuristic or principle involves using the Poisson point process (or Poisson distribution) to approximate rare or unlikely events of some stochastic process. In some cases these rare events are close to being independent, hence a Poisson point process can be used. When the events are not independent but tend to occur in clusters or clumps, and these clumps are suitably defined so that they are approximately independent of each other, then the number of clumps occurring will be close to a Poisson random variable and the locations of the clumps will be close to a Poisson process. Stein's method Stein's method is a mathematical technique originally developed for approximating random variables such as Gaussian and Poisson variables, which has also been applied to point processes. Stein's method can be used to derive upper bounds on probability metrics, which give a way to quantify how much two random mathematical objects differ stochastically. Upper bounds on probability metrics such as total variation and Wasserstein distance have been derived. Researchers have applied Stein's method to Poisson point processes in a number of ways, such as using Palm calculus. Techniques based on Stein's method have been developed to factor into the upper bounds the effects of certain point process operations such as thinning and superposition. Stein's method has also been used to derive upper bounds on metrics of Poisson and other processes such as the Cox point process, which is a Poisson process with a random intensity measure.
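Returning to the mapping theorem stated earlier in this section, the intensity measure of the mapped process, whose formula was lost from the text, is standardly written as the image measure below, with μ the original intensity measure, f the measurable mapping and B a Borel set in the target space. This is the usual textbook form rather than the exact notation originally used.

```latex
\mu'(B) \;=\; \mu\big(f^{-1}(B)\big)
```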
Convergence to a Poisson point process In general, when an operation is applied to a general point process the resulting process is usually not a Poisson point process. For example, if a point process, other than a Poisson process, has its points randomly and independently displaced, then the process would not necessarily be a Poisson point process. However, under certain mathematical conditions for both the original point process and the random displacement, it has been shown via limit theorems that if the points of a point process are repeatedly displaced in a random and independent manner, then the finite-dimensional distributions of the point process will converge (weakly) to those of a Poisson point process. Similar convergence results have been developed for thinning and superposition operations, which show that such repeated operations on point processes can, under certain conditions, result in the process converging to a Poisson point process, provided there is a suitable rescaling of the intensity measure (otherwise values of the intensity measure of the resulting point processes would approach zero or infinity). Such convergence work is directly related to the results known as the Palm–Khinchin equations, which have their origins in the work of Conny Palm and Aleksandr Khinchin, and it helps explain why the Poisson process can often be used as a mathematical model of various random phenomena. Generalizations of Poisson point processes The Poisson point process can be generalized by, for example, changing its intensity measure or defining it on more general mathematical spaces. These generalizations can be studied mathematically as well as used to mathematically model or represent physical phenomena. Poisson-type random measures The Poisson-type random measures (PT) are a family of three random counting measures which are closed under restriction to a subspace, i.e. closed under thinning. These random measures are examples of the mixed binomial process and share the distributional self-similarity property of the Poisson random measure. They are the only members of the canonical non-negative power series family of distributions to possess this property and include the Poisson distribution, negative binomial distribution, and binomial distribution. The Poisson random measure takes independent values on disjoint subspaces, whereas the other PT random measures (negative binomial and binomial) have positive and negative covariances. The three PT random measures are thus the Poisson random measure, the negative binomial random measure, and the binomial random measure. Poisson point processes on more general spaces For mathematical models the Poisson point process is often defined in Euclidean space, but it has been generalized to more abstract spaces and plays a fundamental role in the study of random measures, which requires an understanding of mathematical fields such as probability theory, measure theory and topology. In general, the concept of distance is of practical interest for applications, while topological structure is needed for Palm distributions, meaning that point processes are usually defined on mathematical spaces with metrics. Furthermore, a realization of a point process can be considered as a counting measure, so point processes are types of random measures known as random counting measures. In this context, the Poisson and other point processes have been studied on a locally compact second countable Hausdorff space.
Cox point process A Cox point process, Cox process or doubly stochastic Poisson process is a generalization of the Poisson point process obtained by letting its intensity measure also be random and independent of the underlying Poisson process. The process is named after David Cox, who introduced it in 1955, though other Poisson processes with random intensities had been independently introduced earlier by Lucien Le Cam and Maurice Quenouille. The intensity measure may be a realization of a random variable or a random field. For example, if the logarithm of the intensity measure is a Gaussian random field, then the resulting process is known as a log Gaussian Cox process. More generally, the intensity measure is a realization of a non-negative locally finite random measure. Cox point processes exhibit a clustering of points, which can be shown mathematically to be stronger than that of Poisson point processes. The generality and tractability of Cox processes have resulted in them being used as models in fields such as spatial statistics and wireless networks. Marked Poisson point process For a given point process, each random point can have a random mathematical object, known as a mark, randomly assigned to it. These marks can be as diverse as integers, real numbers, lines, geometrical objects or other point processes. The pair consisting of a point of the point process and its corresponding mark is called a marked point, and all the marked points form a marked point process. It is often assumed that the random marks are independent of each other and identically distributed, yet the mark of a point can still depend on the location of its corresponding point in the underlying (state) space. If the underlying point process is a Poisson point process, then the resulting point process is a marked Poisson point process. Marking theorem If a general point process is defined on some mathematical space and the random marks are defined on another mathematical space, then the marked point process is defined on the Cartesian product of these two spaces. For a marked Poisson point process with independent and identically distributed marks, the marking theorem states that this marked point process is also a (non-marked) Poisson point process defined on the aforementioned Cartesian product of the two mathematical spaces, which is not true for general point processes. Compound Poisson point process The compound Poisson point process or compound Poisson process is formed by adding random values or weights to each point of a Poisson point process defined on some underlying space, so the process is constructed from a marked Poisson point process, where the marks form a collection of independent and identically distributed non-negative random variables. In other words, for each point of the original Poisson process, there is an independent and identically distributed non-negative random variable, and then the compound Poisson process is formed from the sum of all the random variables corresponding to points of the Poisson process located in some region of the underlying mathematical space. If there is a marked Poisson point process formed from a Poisson point process and a collection of independent and identically distributed non-negative marks such that for each point of the Poisson process there is a non-negative random variable, the resulting compound Poisson process is obtained by summing, over any Borel measurable set, the marks of the points located in that set; a standard way of writing this sum is given below.
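Writing X_i for the points of the underlying Poisson process, M_i for the corresponding independent and identically distributed non-negative marks, and B for a Borel measurable set (notation introduced here for illustration, not taken from the original), the compound Poisson sum reads:

```latex
C(B) \;=\; \sum_{i:\, X_i \in B} M_i
```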
If the random variables forming the marks take values in, for example, -dimensional Euclidean space, the resulting compound Poisson process is an example of a Lévy process, provided that it is formed from a homogeneous Poisson point process defined on the non-negative real numbers. Failure process with the exponential smoothing of intensity functions The failure process with the exponential smoothing of intensity functions (FP-ESI) is an extension of the nonhomogeneous Poisson process. The intensity function of an FP-ESI is an exponential smoothing function of the intensity functions at the last time points of event occurrences. It outperforms nine other stochastic processes on eight real-world failure datasets when the models are used to fit the datasets, where the model performance is measured in terms of AIC (Akaike information criterion) and BIC (Bayesian information criterion).
Mathematics
Probability
null
5369676
https://en.wikipedia.org/wiki/Paleo-Tethys%20Ocean
Paleo-Tethys Ocean
The Paleo-Tethys or Palaeo-Tethys Ocean was an ocean located along the northern margin of the paleocontinent Gondwana that started to open during the Middle Cambrian, grew throughout the Paleozoic, and finally closed during the Late Triassic; existing for about 400 million years. Paleo-Tethys was a precursor to the Tethys Ocean (also called the Neo-Tethys), which was located between Gondwana and the Hunic terranes (continental fragments that broke off Gondwana and moved north). It opened as the Proto-Tethys Ocean subducted under these terranes and closed as the Cimmerian terranes (that also broke-off Gondwana and moved north) gave way to the Tethys Ocean. Confusingly, the Neo-Tethys is sometimes defined as the ocean south of a hypothesized mid-ocean ridge separating Greater India from Asia, in which case the ocean between Cimmeria and this hypothesized ridge is called the Meso-Tethys, i.e., the "Middle-Tethys". The so-called Hunic terranes are divided into the European Hunic (today the crust under parts of Europe – called Armorica – and Iberia) and Asiatic Hunic (today the crust of parts of southern Asia). A large transform fault separated the two terranes. The role the Paleo-Tethys played in the supercontinent cycle, and especially the break-up of Pangaea, is unresolved. Some geologists argue that the opening of the North Atlantic was triggered by the subduction of Panthalassa under the western margins of the Americas while others argue that the closure of the Paleo-Tethys and Tethys resulted in the break-up. In the first scenario, mantle plumes caused the opening of the Atlantic and the break-up of Pangaea and the closure of the Tethyan domain was one of the consequences of this process; in the other scenario, the longitudinal forces that closed the Tethyan domain were transmitted latitudinally in what is today the Mediterranean region, resulting in the initial opening of the Atlantic. History The Paleo-Tethys Ocean began to form when back-arc spreading separated the European Hunic terranes from Gondwana in the late Ordovician, to begin moving toward Euramerica (also known as the Old Red Sandstone Continent) in the north. In the process, the plate under the Rheic Ocean between Euramerica and the European Hunic terranes subducted and rifts in this plate resulted in the formation of a small Rhenhercynian Ocean which lasted until Late Carboniferous time. In the Early Devonian, the eastern part of Paleo-Tethys opened up, when the Asiatic Hunic terranes, including the North and South China microcontinents, moved northward. These events caused Proto-Tethys Ocean to shrink until the Late Carboniferous, when the Chinese blocks collided with Siberia. In the Early Carboniferous however, a subduction zone developed south of the European Hunic terranes consuming Paleo-Tethys oceanic crust. Gondwana started moving north, and in the process the western part of the Paleo-Tethys would close. In the Carboniferous continental collision took place between the Old Red Sandstone Continent and the European Hunic terrane, in North America this is called the Alleghenian orogeny, in Europe the Variscan orogeny. The Rheic Ocean had completely disappeared, and the western Paleo-Tethys was closing. By the Late Permian, the small elongated Cimmerian plate (today's crust of Turkey, Iran, Tibet and parts of South-East Asia) broke away from Gondwana (now part of Pangaea). South of the Cimmerian continent a new ocean, the Tethys Ocean, was created. 
By the Late Triassic, all that was left of the Paleo-Tethys Ocean was a narrow seaway. In the Early Jurassic epoch, as part of the Alpine Orogeny, the oceanic crust of the Paleo-Tethys subducted under the Cimmerian plate, closing the ocean from west to east. A last remnant of Paleo-Tethys Ocean might be an oceanic crust under the Black Sea. (Anatolia, to the sea's south, is a part of the original Cimmerian continent that formed the southern boundary of the Paleo-Tethys.) The Paleo-Tethys Ocean sat where the Indian Ocean and Southern Asia are now located. The Equator ran the length of the sea, giving it a tropical climate. The shores and islands probably supported dense coal forests.
Physical sciences
Paleogeography
Earth science
5377010
https://en.wikipedia.org/wiki/Anurognathidae
Anurognathidae
Anurognathidae is a family of small, short-tailed pterosaurs that lived in Europe, Asia, and possibly North America during the Jurassic and Cretaceous periods. Five genera are known: Anurognathus, from the Late Jurassic of Germany; Jeholopterus, from the Middle to Late Jurassic of China; Dendrorhynchoides, from the Middle Jurassic of China; Batrachognathus, from the Late Jurassic of Kazakhstan; and Vesperopterylus, from the Early Cretaceous of China. Bennett (2007) suggested that the holotype of Mesadactylus, BYU 2024, a synsacrum, belonged to an anurognathid, though this affinity has been questioned by other authors. Mesadactylus is from the Late Jurassic Morrison Formation of the United States. Indeterminate anurognathid remains have also been reported from the Middle Jurassic Bakhar Svita of Mongolia and the Early Cretaceous of North Korea. Classification A family Anurognathidae was named in 1928 by Franz Nopcsa von Felső-Szilvás (as the subfamily Anurognathinae) with Anurognathus as the type genus. The family name Anurognathidae was first used by Oskar Kuhn in 1967. The phylogeny of Anurognathidae is disputed. Both Alexander Kellner and David Unwin in 2003 defined the group as a node clade: the last common ancestor of Anurognathus and Batrachognathus and all its descendants. Some analyses, such as that of Kellner (2003), place them as the most basal group in the pterosaur tree. Unwin also recovered the group as very basal, falling between Dimorphodontidae and Campylognathoididae. However, anurognathids have some characteristics in common with the derived Pterodactyloidea, such as short and fused tail bones. More recent analyses, which include more fossils and taxa, support this observation and recover the group as substantially more derived than previously thought, but still basal to pterodactyloids. In 2010 an analysis by Brian Andres indicated the Anurognathidae and Pterodactyloidea were sister taxa. This conformed better to the fossil record because no early anurognathids were known at the time, and being the basalmost pterosaur clade would require a ghost lineage of over sixty million years. However, the reassignment of "Dimorphodon weintraubi" to a basal position within Anurognathidae helps fill this gap and suggests this group appeared earlier than previously thought, possibly in the Early Jurassic Period. Depending on where Anurognathidae falls within the Pterosauria, the existence of "Dimorphodon weintraubi" may have important implications for the timing of the evolution of major pterosaur clades, making further study of this specimen critical for pterosaur research. In 2022, a phylogenetic analysis accompanying the description of Cascocauda recovered Anurognathidae as a sister clade to Breviquartossa. Lifestyle Anurognathids are widely believed to have been nocturnal or crepuscular akin to bats. The fact that many anurognathids have large eye sockets supports the theory of operating in low-light environments. Anurognathid teeth suggest they were largely insectivorous, though some may have had more prey choices, such as Batrachognathus and Jeholopterus, which have been hypothesized to have been piscivorous. At least some, such as Vesperopterylus, were arboreal, with claws suited for gripping tree branches. 
Feathers A 2018 study of the remains of two small Jurassic-age pterosaurs from Inner Mongolia, China, named as the genus Cascocauda in 2022, found that pterosaurs had a wide array of pycnofiber shapes and structures, as opposed to the homogeneous structures that had generally been assumed to cover them. Some of these had frayed ends, very similar in structure to four different feather types known from birds or other dinosaurs but almost never known from pterosaurs prior to the study, suggesting homology. A response to this study was published in 2020, where it was suggested that the structures seen on the anurognathids were actually a result of the decomposition of aktinofibrils: a type of fibre used to strengthen and stiffen the wing. However, in a response to this, the authors of the 2018 paper point to the fact that the structures extend past the patagium, and to the presence of both aktinofibrils and filaments on Jeholopterus ningchengensis and Sordes pilosus. The various forms of filament structure present on the anurognathids in the 2018 study would also require a form of decomposition that would cause the different 'filament' forms seen. They therefore conclude that the most parsimonious interpretation of the structures is that they are filamentous proto-feathers. But Liliana D’Alba points out that the description of the preserved integumentary structures on the two anurognathid specimens is still based upon gross morphology. She also points out that Pterorhynchus was described as having feathers, in support of the claim that feathers had a common origin among ornithodirans, but that this interpretation was argued against by several authors. The only way to determine whether the structures are homologous to feathers is to use a scanning electron microscope.
Biology and health sciences
Pterosaurs
Animals
37405934
https://en.wikipedia.org/wiki/Wild%20Bactrian%20camel
Wild Bactrian camel
The wild Bactrian camel (Camelus ferus) is an endangered species of camel endemic to Northwest China and southwestern Mongolia. It is closely related but not ancestral to the domestic Bactrian camel (Camelus bactrianus). Genetic studies have established it as a separate species which diverged from the Bactrian camel about 0.7–1.1 million years ago. Currently, there are only around 950 wild Bactrian camels. Most live on the Lop Nur Wild Camel National Nature Reserve in China, and a smaller population lives in the Great Gobi A Strictly Protected Area in Mongolia. There are also populations in the Altun Shan Wild Camel Nature Reserve (1986) in Qakilik County, in the Aksai Annanba Nature Reserve (1992), and in Dunhuang Wanyaodun Nature Reserve (now Dunhuang Xihu Wild Camel Nature Reserve) contiguous with the reserve in Qakilik (2001) and a reserve in Mazongshan contiguous with the reserve in Mongolia, all in China. Name The species was originally considered a subspecies of the domestic Bactrian camel and named C. bactrianus ferus in reference to the region of Bactria, a wider geopolitical area of ancient South-Central Asia where wild Bactrian camels (C. ferus) were once widespread. The name ferus means "feral" or "wild" and is a common subspecies name for the wild ancestors of previously described domestic species. Later research recognised the domestic C. bactrianus as a separate, sister species to the wild Bactrian, elevating the subspecies C. bactrianus ferus to full species level as C. ferus. Description Wild Bactrian camels have long, narrow slit-like nostrils, a double row of long thick eyelashes, and ears with hairs that provide protection against desert sandstorms. They have tough undivided soles with two large toes that spread wide apart, and a horny layer which enables them to walk on rough and hot stony or sandy terrain. Their thick and shaggy body hair changes colour to light brown or beige during winter. Like its close relative, the domesticated Bactrian camel, it is one of the few mammals able to eat snow to provide itself with liquids in the winter. While the legend that camels store water in their humps is a misconception, they are adapted to conserve water. However, long periods without water will result in a deterioration of the animal's health. Differences from domestic Bactrian camels Wild Bactrian camels (Camelus ferus) appear similar to domesticated Bactrian camels (Camelus bactrianus) but the outstanding difference is genetic, with the two species having descended from two distinct ancestors. There are several differences in size and shape between the two species. The wild Bactrian camel is slightly smaller than the domestic Bactrian camel and has been described as "lithe, and slender-legged, with very narrow feet and a body that looks laterally compressed." The humps of the wild Bactrian camel are smaller, lower, and more conical in shape than those of the domestic Bactrian camel. These humps may often be about half the size of those of a domesticated Bactrian camel. The wild Bactrian camel has a different shape of foot and a flatter skull (the Mongolian name for a wild Bactrian camel, havtagai, means "flat-head"). The wool of the wild Bactrian camel is always sandy coloured and shorter and sparser than that of domestic Bactrian camels. The wild Bactrian camel can also survive on water saltier than seawater, something which probably no other mammal in the world can tolerate – including the domesticated Bactrian camel. 
Behaviour Wild Bactrian camels generally move in groups of up to 30 individuals, although 6 to 20 is more common depending on the amount of food available. They are fully migratory and widely scattered with a population density as low as 5 per 100 km2. They travel with a single adult male in the lead and assemble near water points where larger groups can also be seen. Their lifespan is about 40 years and they breed during winter with an overlap into the rainy season. Females produce offspring starting at age 5, and thereafter in a cycle of 2 years. Typically, wild Bactrian camels seen alone are postdispersal young individuals which have just reached sexual maturity. Distribution and habitat Their habitat is in arid plains and hills where water sources are scarce and very little vegetation exists with shrubs as their main food source. These habitats have widely varying temperatures: the summer temperature ranges from and winter temperature a low of . Wild Bactrian camels travel over long distances, seeking water in places close to mountains where springs are found, and hill slopes covered in snow provide some moisture in winter. The size of a herd may be as many as 100 camels but generally consists of 2–15 members in a group; this is reported to be due to arid environment and heavy poaching. The wild Bactrian camels are limited to three pockets in northwest China and some in southwest Mongolia. China spotted 39, and estimated that there were 600–650 camels in Altun Shan-Lop Nur reserves combined, in late 2018, with 48 spotted in Dunhuang reserve in 2018. At the Dunhuang and Mazongshan reserves, it had been estimated that one hundred camels exist per reserve, and for the Aksai reserve, it was estimated that there are nearly 200, according to an earlier estimation. In Mongolia, their population was about 800 in 2012. In ancient times, wild Bactrian camels were seen from the great bend of the Yellow River extending west to the Inner Mongolian deserts and further to Northwest China and central Kazakhstan. In the 1800s, due to hunting for its meat and hide, its presence was noted in remote areas of the Taklamakan, Kumtag and Gobi deserts in China and Mongolia. In the 1920s, only remnant populations were recorded in Mongolia and China. In 1964, China began testing nuclear weapons at Lop Nur, home to many of the wild Bactrian camels. The camels experienced no apparent ill effects from the radiation and continued to breed naturally. Instead, their habitat became a restricted military zone where human activity was kept to a minimum. After China signed the Comprehensive Test Ban Treaty in 1996, the camels were reclassified as an endangered species on the IUCN Red List. Since then, human incursions into the area have caused a sharp drop in the camel population. The extant habitat of the wild camels has been further disturbed by newly constructed roads and exploitation of oil fields. In addition, a border fence between China and Mongolia prevents the camels from migrating between the Chinese and Mongolian populations. A 2013 study confirmed at least 382 wild camels in China. The total population within the Chinese nature reserves is estimated to be between 640 and 740. Genetics Genetic analysis suggests that the domestic Bactrian camel did not descend from the wild Bactrian camel, and that the two species split around 700,000 – 1.1 million years ago. 
Ancient DNA analysis suggests that alongside the domestic Bactrian camel, the species is closely related to the extinct giant camel Camelus knoblochi, which went extinct around 20,000 years ago. While C. knoblochi is equidistant genetically from both living Bactrian camel species based on nuclear genomes, the mitochondrial genome diversity of C. knoblochi is nested within that of living wild Bactrian camels, suggesting interbreeding between the two species. Cladogram of relationships between living and extinct camels based on genomes after Yuan et al. 2024. Status The wild Bactrian camel has been classified as "critically endangered", according to the International Union for Conservation of Nature and Natural Resources (IUCN), since 2002; its status was deemed "critical" in the 1960s, gradually being elevated to "critically endangered". The UK-based Wild Camel Protection Foundation (WCPF) estimates that there are only about 950 individuals of the species left in the world, with its current population trend still decreasing. The Zoological Society of London considers the wild Bactrian camel to be the eighth most endangered large mammal in the world, and it is on the critically endangered list. The wild Bactrian camel was identified as one of the top ten "focal species" in 2007 by the Evolutionarily Distinct and Globally Endangered (EDGE) project, which prioritises unique and threatened species for conservation. Observations made during five field expeditions starting in 1993 by John Hare and the WCPF suggest that the surviving populations may be facing an 80% decline within the next 30 years. Threats Camels face various threats including poaching and hunting, climate change, and human encroachment into their habitat. In the Gobi Reserve Area, 25 to 30 camels are reported to be poached every year, and about 20 in the Lop Nur Reserve. Hunters kill the camels by laying land mines in the salt water springs where the camels drink. Other threats include scarcity of access to water such as oases, attacks by wolves, hybridization with domestic Bactrian camels (leading to concern over the loss of genetically distinct populations, or over infertile individuals which could potentially ward off viable bulls from a large number of females during their lifetimes), toxic effluent releases from illegal mining, re-designation of wildlife areas as industrial zones, and sharing grazing areas with domestic animals. Due to increasing human populations, wild Bactrian camels that migrate in search of grazing land may compete for food and water sources with introduced domestic stock and are sometimes shot by farmers. The only extant predators that regularly target wild Bactrian camels are wolves, which have been seen to pursue weaker and weather-battered camels as they try to reach oases. Due to increasingly dry conditions in the species' range, the number of cases of wolf predation on wild Bactrian camels at oases has reportedly increased. Conservation Several actions have been initiated by the governments of China and Mongolia to conserve this species, including ecosystem-based management. Two programmes instituted in this respect are the Great Gobi Reserve A in Mongolia, set up in 1982, and the Lop Nur Wild Camel National Nature Reserve in China, established in 2000. The Wild Camel Protection Foundation, the only such charity of its kind, has as its main goal the conservation of the wild Bactrian camel in its natural desert environment, to ensure that its status does not transition to Extinct in the Wild.
The actions taken by the various organizations, motivated and supported by IUCN and WCPF, include establishment of more nature reserves (in China and Mongolia) for their conservation, and breeding them in captivity (as captive females may calve twice every two years, which may not happen in the wild) to prevent extinction. The captive breeding initiated by WCPF in 2003 is the Zakhyn-Us Sanctuary in Mongolia, where the initial programme of breeding the last non-hybridised herds of wild Bactrian camels has proved a success, with the birth of several viable calves. The wild Bactrian camel was considered for introduction at Pleistocene Park in Northern Siberia, as a proxy for extinct Pleistocene camel species. If this had proved feasible, it would have increased their geographic range considerably, adding a safety margin to their survival. In 2021 however, domesticated Bactrian camels were introduced to the park instead.
Biology and health sciences
Camelidae
Animals
24712184
https://en.wikipedia.org/wiki/Igneous%20rock
Igneous rock
Igneous rock ( ), or magmatic rock, is one of the three main rock types, the others being sedimentary and metamorphic. Igneous rocks are formed through the cooling and solidification of magma or lava. The magma can be derived from partial melts of existing rocks in either a planet's mantle or crust. Typically, the melting is caused by one or more of three processes: an increase in temperature, a decrease in pressure, or a change in composition. Solidification into rock occurs either below the surface as intrusive rocks or on the surface as extrusive rocks. Igneous rock may form with crystallization to form granular, crystalline rocks, or without crystallization to form natural glasses. Igneous rocks occur in a wide range of geological settings: shields, platforms, orogens, basins, large igneous provinces, extended crust and oceanic crust. Geological significance Igneous and metamorphic rocks make up 90–95% of the top of the Earth's crust by volume. Igneous rocks form about 15% of the Earth's current land surface. Most of the Earth's oceanic crust is made of igneous rock. Igneous rocks are also geologically important because: their minerals and global chemistry give information about the composition of the lower crust or upper mantle from which their parent magma was extracted, and the temperature and pressure conditions that allowed this extraction; their absolute ages can be obtained from various forms of radiometric dating and can be compared to adjacent geological strata, thus permitting calibration of the geological time scale; their features are usually characteristic of a specific tectonic environment, allowing tectonic reconstructions (see plate tectonics); in some special circumstances they host important mineral deposits (ores): for example, tungsten, tin, and uranium are commonly associated with granites and diorites, whereas ores of chromium and platinum are commonly associated with gabbros. Geological setting Igneous rocks can be either intrusive (plutonic and hypabyssal) or extrusive (volcanic). Intrusive Intrusive igneous rocks make up the majority of igneous rocks and are formed from magma that cools and solidifies within the crust of a planet. Bodies of intrusive rock are known as intrusions and are surrounded by pre-existing rock (called country rock). The country rock is an excellent thermal insulator, so the magma cools slowly, and intrusive rocks are coarse-grained (phaneritic). The mineral grains in such rocks can generally be identified with the naked eye. Intrusions can be classified according to the shape and size of the intrusive body and its relation to the bedding of the country rock into which it intrudes. Typical intrusive bodies are batholiths, stocks, laccoliths, sills and dikes. Common intrusive rocks are granite, gabbro, or diorite. The central cores of major mountain ranges consist of intrusive igneous rocks. When exposed by erosion, these cores (called batholiths) may occupy huge areas of the Earth's surface. Intrusive igneous rocks that form at depth within the crust are termed plutonic (or abyssal) rocks and are usually coarse-grained. Intrusive igneous rocks that form near the surface are termed subvolcanic or hypabyssal rocks and they are usually much finer-grained, often resembling volcanic rock. Hypabyssal rocks are less common than plutonic or volcanic rocks and often form dikes, sills, laccoliths, lopoliths, or phacoliths. 
Extrusive Extrusive igneous rock, also known as volcanic rock, is formed by the cooling of molten magma on the earth's surface. The magma, which is brought to the surface through fissures or volcanic eruptions, rapidly solidifies. Hence such rocks are fine-grained (aphanitic) or even glassy. Basalt is the most common extrusive igneous rock and forms lava flows, lava sheets and lava plateaus. Some kinds of basalt solidify to form long polygonal columns. The Giant's Causeway in Antrim, Northern Ireland is an example. The molten rock, which typically contains suspended crystals and dissolved gases, is called magma. It rises because it is less dense than the rock from which it was extracted. When magma reaches the surface, it is called lava. Eruptions of volcanoes into air are termed subaerial, whereas those occurring underneath the ocean are termed submarine. Black smokers and mid-ocean ridge basalt are examples of submarine volcanic activity. The volume of extrusive rock erupted annually by volcanoes varies with plate tectonic setting. Extrusive rock is produced in the following proportions: divergent boundary: 73% convergent boundary (subduction zone): 15% hotspot: 12%. The behaviour of lava depends upon its viscosity, which is determined by temperature, composition, and crystal content. High-temperature magma, most of which is basaltic in composition, behaves in a manner similar to thick oil and, as it cools, treacle. Long, thin basalt flows with pahoehoe surfaces are common. Intermediate composition magma, such as andesite, tends to form cinder cones of intermingled ash, tuff and lava, and may have a viscosity similar to thick, cold molasses or even rubber when erupted. Felsic magma, such as rhyolite, is usually erupted at low temperature and is up to 10,000 times as viscous as basalt. Volcanoes with rhyolitic magma commonly erupt explosively, and rhyolitic lava flows are typically of limited extent and have steep margins because the magma is so viscous. Felsic and intermediate magmas that erupt often do so violently, with explosions driven by the release of dissolved gases—typically water vapour, but also carbon dioxide. Explosively erupted pyroclastic material is called tephra and includes tuff, agglomerate and ignimbrite. Fine volcanic ash is also erupted and forms ash tuff deposits, which can often cover vast areas. Because volcanic rocks are mostly fine-grained or glassy, it is much more difficult to distinguish between the different types of extrusive igneous rocks than between different types of intrusive igneous rocks. Generally, the mineral constituents of fine-grained extrusive igneous rocks can only be determined by examination of thin sections of the rock under a microscope, so only an approximate classification can usually be made in the field. Although classification by mineral makeup is preferred by the IUGS, this is often impractical, and chemical classification is done instead using the TAS classification. Classification Igneous rocks are classified according to mode of occurrence, texture, mineralogy, chemical composition, and the geometry of the igneous body. The classification of the many types of igneous rocks can provide important information about the conditions under which they formed. Two important variables used for the classification of igneous rocks are particle size, which largely depends on the cooling history, and the mineral composition of the rock. 
Feldspars, quartz or feldspathoids, olivines, pyroxenes, amphiboles, and micas are all important minerals in the formation of almost all igneous rocks, and they are basic to the classification of these rocks. All other minerals present are regarded as nonessential in almost all igneous rocks and are called accessory minerals. Types of igneous rocks with other essential minerals are very rare, but include carbonatites, which contain essential carbonates. In a simplified compositional classification, igneous rock types are categorized into felsic or mafic based on the abundance of silicate minerals in the Bowen's Series. Rocks dominated by quartz, plagioclase, alkali feldspar and muscovite are felsic. Mafic rocks are primarily composed of biotite, hornblende, pyroxene and olivine. Generally, felsic rocks are light colored and mafic rocks are darker colored. For textural classification, igneous rocks that have crystals large enough to be seen by the naked eye are called phaneritic; those with crystals too small to be seen are called aphanitic. Generally speaking, phaneritic implies an intrusive origin or plutonic, indicating slow cooling; aphanitic are extrusive or volcanic, indicating rapid cooling. An igneous rock with larger, clearly discernible crystals embedded in a finer-grained matrix is termed porphyry. Porphyritic texture develops when the larger crystals, called phenocrysts, grow to considerable size before the main mass of the magma crystallizes as finer-grained, uniform material called groundmass. Grain size in igneous rocks results from cooling time so porphyritic rocks are created when the magma has two distinct phases of cooling. Igneous rocks are classified on the basis of texture and composition. Texture refers to the size, shape, and arrangement of the mineral grains or crystals of which the rock is composed. Texture Texture is an important criterion for the naming of volcanic rocks. The texture of volcanic rocks, including the size, shape, orientation, and distribution of mineral grains and the intergrain relationships, will determine whether the rock is termed a tuff, a pyroclastic lava or a simple lava. However, the texture is only a subordinate part of classifying volcanic rocks, as most often there needs to be chemical information gleaned from rocks with extremely fine-grained groundmass or from airfall tuffs, which may be formed from volcanic ash. Textural criteria are less critical in classifying intrusive rocks where the majority of minerals will be visible to the naked eye or at least using a hand lens, magnifying glass or microscope. Plutonic rocks also tend to be less texturally varied and less prone to showing distinctive structural fabrics. Textural terms can be used to differentiate different intrusive phases of large plutons, for instance porphyritic margins to large intrusive bodies, porphyry stocks and subvolcanic dikes. Mineralogical classification is most often used to classify plutonic rocks. Chemical classifications are preferred to classify volcanic rocks, with phenocryst species used as a prefix, e.g. "olivine-bearing picrite" or "orthoclase-phyric rhyolite". Mineralogical classification The IUGS recommends classifying igneous rocks by their mineral composition whenever possible. This is straightforward for coarse-grained intrusive igneous rock, but may require examination of thin sections under a microscope for fine-grained volcanic rock, and may be impossible for glassy volcanic rock. The rock must then be classified chemically. 
Mineralogical classification of an intrusive rock begins by determining if the rock is ultramafic, a carbonatite, or a lamprophyre. An ultramafic rock contains more than 90% of iron- and magnesium-rich minerals such as hornblende, pyroxene, or olivine, and such rocks have their own classification scheme. Likewise, rocks containing more than 50% carbonate minerals are classified as carbonatites, while lamprophyres are rare ultrapotassic rocks. Both are further classified based on detailed mineralogy. In the great majority of cases, the rock has a more typical mineral composition, with significant quartz, feldspars, or feldspathoids. Classification is based on the percentages of quartz, alkali feldspar, plagioclase, and feldspathoid out of the total fraction of the rock composed of these minerals, ignoring all other minerals present. These percentages place the rock somewhere on the QAPF diagram, which often immediately determines the rock type. In a few cases, such as the diorite-gabbro-anorthosite field, additional mineralogical criteria must be applied to determine the final classification. Where the mineralogy of a volcanic rock can be determined, it is classified using the same procedure, but with a modified QAPF diagram whose fields correspond to volcanic rock types. Chemical classification and petrology When it is impractical to classify a volcanic rock by mineralogy, the rock must be classified chemically. There are relatively few minerals that are important in the formation of common igneous rocks, because the magma from which the minerals crystallize is rich in only certain elements: silicon, oxygen, aluminium, sodium, potassium, calcium, iron, and magnesium. These are the elements that combine to form the silicate minerals, which account for over ninety percent of all igneous rocks. The chemistry of igneous rocks is expressed differently for major and minor elements and for trace elements. Contents of major and minor elements are conventionally expressed as weight percent oxides (e.g., 51% SiO2 and 1.50% TiO2). Abundances of trace elements are conventionally expressed as parts per million by weight (e.g., 420 ppm Ni and 5.1 ppm Sm). The term "trace element" is typically used for elements present in most rocks at abundances less than 100 ppm or so, but some trace elements may be present in some rocks at abundances exceeding 1,000 ppm. The diversity of rock compositions has been defined by a huge mass of analytical data—over 230,000 rock analyses can be accessed on the web through the EarthChem site sponsored by the U.S. National Science Foundation. The single most important component is silica, SiO2, whether occurring as quartz or combined with other oxides as feldspars or other minerals. Both intrusive and volcanic rocks are grouped chemically by total silica content into broad categories. Felsic rocks have the highest content of silica, and are predominantly composed of the felsic minerals quartz and feldspar. These rocks (granite, rhyolite) are usually light coloured, and have a relatively low density. Intermediate rocks have a moderate content of silica, and are predominantly composed of feldspars. These rocks (diorite, andesite) are typically darker in colour than felsic rocks and somewhat more dense. Mafic rocks have a relatively low silica content and are composed mostly of pyroxenes, olivines and calcic plagioclase. These rocks (basalt, gabbro) are usually dark coloured, and have a higher density than felsic rocks. 
Ultramafic rock is very low in silica, with more than 90% of mafic minerals (komatiite, dunite). The percentage of alkali metal oxides (Na2O plus K2O) is second only to silica in its importance for chemically classifying volcanic rock. The silica and alkali metal oxide percentages are used to place volcanic rock on the TAS diagram, which is sufficient to immediately classify most volcanic rocks. Rocks in some fields, such as the trachyandesite field, are further classified by the ratio of potassium to sodium (so that potassic trachyandesites are latites and sodic trachyandesites are benmoreites). Some of the more mafic fields are further subdivided or defined by normative mineralogy, in which an idealized mineral composition is calculated for the rock based on its chemical composition. For example, basanite is distinguished from tephrite by having a high normative olivine content. Other refinements to the basic TAS classification include: Ultrapotassic – rocks containing molar K2O/Na2O >3. Peralkaline – rocks containing molar (K2O + Na2O)/Al2O3 >1. Peraluminous – rocks containing molar (K2O + Na2O + CaO)/Al2O3 <1. In older terminology, silica oversaturated rocks were called silicic or acidic where the SiO2 was greater than 66%, and the family term quartzolite was applied to the most silicic. A normative feldspathoid classifies a rock as silica-undersaturated; an example is nephelinite. Magmas are further divided into three series: the tholeiitic series (basaltic andesites and andesites), the calc-alkaline series (andesites), and the alkaline series (subgroups of alkaline basalts and the rare, very high potassium-bearing, i.e. shoshonitic, lavas). The alkaline series is distinguishable from the other two on the TAS diagram, being higher in total alkali oxides for a given silica content, but the tholeiitic and calc-alkaline series occupy approximately the same part of the TAS diagram. They are distinguished by comparing total alkali with iron and magnesium content. These three magma series occur in a range of plate tectonic settings. Tholeiitic magma series rocks are found, for example, at mid-ocean ridges, back-arc basins, oceanic islands formed by hotspots, island arcs and continental large igneous provinces. All three series are found in relatively close proximity to each other at subduction zones, where their distribution is related to depth and the age of the subduction zone. The tholeiitic magma series is well represented above young subduction zones formed by magma from relatively shallow depth. The calc-alkaline and alkaline series are seen in mature subduction zones, and are related to magma of greater depths. Andesite and basaltic andesite are the most abundant volcanic rocks in island arcs, which is indicative of calc-alkaline magmas. Some island arcs have distributed volcanic series, as can be seen in the Japanese island arc system, where the volcanic rocks change from tholeiitic to calc-alkaline to alkaline with increasing distance from the trench. History of classification Some igneous rock names date to before the modern era of geology. For example, basalt as a description of a particular composition of lava-derived rock dates to Georgius Agricola in 1546 in his work De Natura Fossilium. The word granite goes back at least to the 1640s and is derived either from French granit or Italian granito, meaning simply "granulate rock". 
The term rhyolite was introduced in 1860 by the German traveler and geologist Ferdinand von Richthofen. The naming of new rock types accelerated in the 19th century and peaked in the early 20th century. Much of the early classification of igneous rocks was based on the geological age and occurrence of the rocks. However, in 1902, the American petrologists Charles Whitman Cross, Joseph P. Iddings, Louis V. Pirsson, and Henry Stephens Washington proposed that all existing classifications of igneous rocks should be discarded and replaced by a "quantitative" classification based on chemical analysis. They showed how vague, and often unscientific, much of the existing terminology was and argued that as the chemical composition of an igneous rock was its most fundamental characteristic, it should be elevated to prime position. Geological occurrence, structure, mineralogical constitution—the hitherto accepted criteria for the discrimination of rock species—were relegated to the background. The completed rock analysis is first to be interpreted in terms of the rock-forming minerals which might be expected to be formed when the magma crystallizes, e.g., quartz, feldspars, olivine, akermanite, feldspathoids, magnetite, corundum, and so on, and the rocks are divided into groups strictly according to the relative proportion of these minerals to one another. This new classification scheme created a sensation, but was criticized for its lack of utility in fieldwork, and the classification scheme was abandoned by the 1960s. However, the concept of normative mineralogy has endured, and the work of Cross and his coinvestigators inspired a flurry of new classification schemes. Among these was the classification scheme of M.A. Peacock, which divided igneous rocks into four series: the alkalic, the alkali-calcic, the calc-alkali, and the calcic series. His definition of the alkali series, and the term calc-alkali, continue in use as part of the widely used Irvine-Baragar classification, along with W.Q. Kennedy's tholeiitic series. By 1958, there were some 12 separate classification schemes and at least 1637 rock type names in use. In that year, Albert Streckeisen wrote a review article on igneous rock classification that ultimately led to the formation of the IUGS Subcommission on the Systematics of Igneous Rocks. By 1989 a single system of classification had been agreed upon, which was further revised in 2005. The number of recommended rock names was reduced to 316. These included a number of new names promulgated by the Subcommission. Origin of magmas The Earth's crust is considerably thicker under the continents than beneath the oceans. The continental crust is composed primarily of sedimentary rocks resting on a crystalline basement formed of a great variety of metamorphic and igneous rocks, including granulite and granite. Oceanic crust is composed primarily of basalt and gabbro. Both continental and oceanic crust rest on peridotite of the mantle. Rocks may melt in response to a decrease in pressure, to a change in composition (such as an addition of water), to an increase in temperature, or to a combination of these processes. Other mechanisms, such as melting from a meteorite impact, are less important today, but impacts during the accretion of the Earth led to extensive melting, and the outer several hundred kilometres of our early Earth was probably an ocean of magma. 
Impacts of large meteorites in the last few hundred million years have been proposed as one mechanism responsible for the extensive basalt magmatism of several large igneous provinces. Decompression Decompression melting occurs because of a decrease in pressure. The solidus temperatures of most rocks (the temperatures below which they are completely solid) increase with increasing pressure in the absence of water. Peridotite at depth in the Earth's mantle may be hotter than its solidus temperature at some shallower level. If such rock rises during the convection of solid mantle, it will cool slightly as it expands in an adiabatic process, but the cooling is only about 0.3 °C per kilometre. Experimental studies of appropriate peridotite samples document that the solidus temperatures increase by 3 °C to 4 °C per kilometre. If the rock rises far enough, it will begin to melt. Melt droplets can coalesce into larger volumes and be intruded upwards. This process of melting from the upward movement of solid mantle is critical in the evolution of the Earth. Decompression melting creates the ocean crust at mid-ocean ridges. It also causes volcanism in intraplate regions, such as Europe, Africa and the Pacific sea floor. There, it is variously attributed either to the rise of mantle plumes (the "Plume hypothesis") or to intraplate extension (the "Plate hypothesis"). Effects of water and carbon dioxide The change of rock composition most responsible for the creation of magma is the addition of water. Water lowers the solidus temperature of rocks at a given pressure. For example, at a depth of about 100 kilometres, peridotite begins to melt near 800 °C in the presence of excess water, but near or above about 1,500 °C in the absence of water. Water is driven out of the oceanic lithosphere in subduction zones, and it causes melting in the overlying mantle. Hydrous magmas composed of basalt and andesite are produced directly and indirectly as results of dehydration during the subduction process. Such magmas, and those derived from them, build up island arcs such as those in the Pacific Ring of Fire. These magmas form rocks of the calc-alkaline series, an important part of the continental crust. The addition of carbon dioxide is relatively a much less important cause of magma formation than the addition of water, but genesis of some silica-undersaturated magmas has been attributed to the dominance of carbon dioxide over water in their mantle source regions. In the presence of carbon dioxide, experiments document that the peridotite solidus temperature decreases by about 200 °C in a narrow pressure interval at pressures corresponding to a depth of about 70 km. At greater depths, carbon dioxide can have more effect: at depths to about 200 km, the temperatures of initial melting of a carbonated peridotite composition were determined to be 450 °C to 600 °C lower than for the same composition with no carbon dioxide. Magmas of rock types such as nephelinite, carbonatite, and kimberlite are among those that may be generated following an influx of carbon dioxide into mantle at depths greater than about 70 km. Temperature increase Increase in temperature is the most typical mechanism for formation of magma within continental crust. Such temperature increases can occur because of the upward intrusion of magma from the mantle. Temperatures can also exceed the solidus of a crustal rock in continental crust thickened by compression at a plate boundary. 
The plate boundary between the Indian and Asian continental masses provides a well-studied example, as the Tibetan Plateau just north of the boundary has crust about 80 kilometres thick, roughly twice the thickness of normal continental crust. Studies of electrical resistivity deduced from magnetotelluric data have detected a layer that appears to contain silicate melt and that stretches for at least 1,000 kilometres within the middle crust along the southern margin of the Tibetan Plateau. Granite and rhyolite are types of igneous rock commonly interpreted as products of the melting of continental crust because of increases in temperature. Temperature increases also may contribute to the melting of lithosphere dragged down in a subduction zone. Magma evolution Most magmas are fully melted only for small parts of their histories. More typically, they are mixes of melt and crystals, and sometimes also of gas bubbles. Melt, crystals, and bubbles usually have different densities, and so they can separate as magmas evolve. As magma cools, minerals typically crystallize from the melt at different temperatures (fractional crystallization). As minerals crystallize, the composition of the residual melt typically changes. If crystals separate from the melt, then the residual melt will differ in composition from the parent magma. For instance, a magma of gabbroic composition can produce a residual melt of granitic composition if early formed crystals are separated from the magma. Gabbro may have a liquidus temperature near 1,200 °C, and the derivative granite-composition melt may have a liquidus temperature as low as about 700 °C. Incompatible elements are concentrated in the last residues of magma during fractional crystallization and in the first melts produced during partial melting: either process can form the magma that crystallizes to pegmatite, a rock type commonly enriched in incompatible elements. Bowen's reaction series is important for understanding the idealised sequence of fractional crystallisation of a magma. Clinopyroxene thermobarometry is used to determine temperature and pressure conditions at which magma differentiation occurred for specific igneous rocks. Magma composition can be determined by processes other than partial melting and fractional crystallization. For instance, magmas commonly interact with rocks they intrude, both by melting those rocks and by reacting with them. Magmas of different compositions can mix with one another. In rare cases, melts can separate into two immiscible melts of contrasting compositions. Etymology The word igneous means "composed of fire" and is derived from the Latin root words of igni-, meaning fire, and -eous, meaning composed of. The word volcanic rock is derived from the Latin root words of Vulcan, the Roman god of fire, and -ic, meaning having some characteristics of. The word plutonic rock, another name for intrusive igneous rock, is derived from the Latin root words of Pluto, the Roman god of the underworld, and -ic, meaning having some characteristics of.
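As a closing illustration of the chemical screening criteria quoted in the classification section above (ultrapotassic, peralkaline and peraluminous), the following minimal Python sketch converts a weight-percent oxide analysis into the molar ratios those definitions use. The oxide values in the example are hypothetical, the screen and molar helpers are illustrative names, the molar masses are standard rounded values, and only these three indices are computed; this is not a full TAS or normative (CIPW-style) treatment.

```python
# Minimal sketch: apply the molar screening criteria quoted above to a
# (hypothetical) weight-percent oxide analysis.
# Molar amount of each oxide = wt% / molar mass of the oxide formula unit.
MOLAR_MASS = {"K2O": 94.20, "Na2O": 61.98, "Al2O3": 101.96, "CaO": 56.08}  # g/mol

def molar(analysis):
    return {ox: analysis.get(ox, 0.0) / MOLAR_MASS[ox] for ox in MOLAR_MASS}

def screen(analysis_wt_percent):
    m = molar(analysis_wt_percent)
    flags = []
    if m["Na2O"] > 0 and m["K2O"] / m["Na2O"] > 3:          # ultrapotassic
        flags.append("ultrapotassic")
    if (m["K2O"] + m["Na2O"]) / m["Al2O3"] > 1:             # peralkaline
        flags.append("peralkaline")
    if (m["K2O"] + m["Na2O"] + m["CaO"]) / m["Al2O3"] < 1:  # peraluminous
        flags.append("peraluminous")
    return flags or ["none of the three indices"]

# Hypothetical granite-like analysis (weight percent):
print(screen({"K2O": 4.5, "Na2O": 3.5, "Al2O3": 14.0, "CaO": 1.5}))
```

For the hypothetical analysis shown, only the peraluminous criterion is met, which is a common outcome for granitic compositions.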
Physical sciences
Petrology
null
23229753
https://en.wikipedia.org/wiki/Coconut%20crab
Coconut crab
The coconut crab (Birgus latro) is a terrestrial species of giant hermit crab, and is also known as the robber crab or palm thief. It is the largest terrestrial arthropod known, with a weight of up to . The distance from the tip of one leg to the tip of another can be as wide as . It is found on islands across the Indian and Pacific Oceans, as far east as the Gambier Islands, Pitcairn Islands and Caroline Island and as far west as Zanzibar. While its range broadly shadows the distribution of the coconut palm, the coconut crab has been extirpated from most areas with a significant human population such as mainland Australia and Madagascar. The coconut crab is the only species of the genus Birgus, and is related to the other terrestrial hermit crabs of the genus Coenobita. It shows a number of adaptations to life on land. Juvenile coconut crabs use empty gastropod shells for protection like other hermit crabs, but the adults develop a tough exoskeleton on their abdomens and stop carrying a shell. Coconut crabs have organs known as branchiostegal lungs, which they use for breathing instead of their vestigial gills. After the juvenile stage, they will drown if immersed in water for too long. They have an acute sense of smell which they use to find potential food sources, and which has developed convergently with that of insects. Adult coconut crabs feed primarily on fleshy fruits, nuts, seeds, and the pith of fallen trees, but they will eat carrion and other organic matter opportunistically. Anything left unattended on the ground is a potential source of food, which they will investigate and may carry away – thereby getting the alternative name of "robber crab". The species is popularly associated with the coconut palm, yet coconuts are not a significant part of its diet. Although it lives in a burrow, the crab has been filmed climbing coconut and pandanus trees. No film shows a crab selectively picking coconut fruit, though they might dislodge ripe fruit that otherwise would fall naturally. When a crab is not near its burrow, climbing is an immediate escape route from predators. Sea birds eat young crabs, and both humans and larger, older crabs eat crabs of all ages. Mating occurs on dry land, but the females return to the edge of the sea to release their fertilized eggs, and then retreat up the beach. The larvae that hatch are planktonic for 3–4 weeks, before settling to the sea floor, entering a gastropod shell and returning to dry land. Sexual maturity is reached after about 5 years, and the total lifespan may be over 60 years. In the 3–4 weeks that the larvae remain at sea, their chances of reaching another suitable location is enhanced if a floating life support system avails itself to them. Examples of the systems that provide such opportunities include floating logs and rafts of marine or terrestrial vegetation. Similarly, floating coconuts can be a very significant part of the crab's dispersal options. Fossils of this crab date back to the Miocene. Taxonomy The coconut crab has been known to western scientists since the voyages of Francis Drake around 1580 and William Dampier around 1688. Based on an account by Georg Eberhard Rumphius (1705), who had called the animal "", Carl Linnaeus (1767) named the species Cancer latro, from the Latin , meaning "robber". The genus Birgus was erected in 1816 by William Elford Leach, containing only Linnaeus' Cancer latro, which was thus renamed Birgus latro. 
Birgus is classified in the family Coenobitidae, alongside one other genus, Coenobita, which contains terrestrial hermit crabs. Common names for the species include coconut crab, robber crab, and palm thief, which mirrors the animal's name in other European languages. In Japan (where the species lives on some of the country's southerly island chains), the species is typically referred to by a name meaning 'palm crab'. Description B. latro is both the largest living terrestrial arthropod and the largest living terrestrial invertebrate. Reports of its size vary, but most sources give a body length up to , a weight up to , and a leg span more than , with males generally being larger than females. The carapace may reach a length of , and a width up to . The body of the coconut crab is, like those of all decapods, divided into a front section (cephalothorax) with 10 legs, and an abdomen. The front-most pair of legs has large chelae (claws), with the left being larger than the right. The next two pairs of legs, as with other hermit crabs, are large, powerful walking legs with pointed tips that allow coconut crabs to climb vertical or even overhanging surfaces. The fourth pair of legs is smaller, with tweezer-like chelae at the end allowing young coconut crabs to grip the inside of the shell or coconut husks that juveniles habitually carry for protection. Adults use this pair for walking and climbing. The last pair of legs is very small and is used by females to tend their eggs and by the males in mating. This last pair of legs is usually held in the cavity containing the breathing organs, inside the carapace. Some difference in color occurs between individuals found on different islands, ranging from orange-red to purplish blue. In most regions, blue is the predominant color, but in some places such as the Seychelles most individuals are red. Although B. latro is a derived type of hermit crab, only juveniles use salvaged snail shells to protect their soft abdomens, while adolescents sometimes use broken coconut shells for the same purpose. Unlike other hermit crabs, the adult coconut crabs do not carry shells but instead harden their abdominal terga by depositing chitin and calcium carbonate. Absent the physical constraint of living within another creature's shell, B. latro grows much larger than its relatives in the family Coenobitidae. As a product of carcinization, B. latro bends its tail beneath its body for protection, like most true crabs. The hardened abdomen protects the coconut crab and reduces water loss on land, but must be periodically moulted. Adults moult annually, digging a burrow up to long in which to hide while their soft shell hardens. Depending on the size of the individual, 1–3 weeks are needed for the exoskeleton to harden. The animals remain in this burrow for 3–16 weeks, again depending on size. Respiration Except as larvae, coconut crabs cannot swim, and they drown if left in water for more than an hour. They use a special organ called a branchiostegal lung to breathe. This organ can be interpreted as a developmental stage between gills and lungs, and is one of the most significant adaptations of the coconut crab to its habitat. The branchiostegal lung contains a tissue similar to that found in gills, but suited to the absorption of oxygen from air, rather than water. 
This organ is expanded laterally and is evaginated to increase the surface area; located in the cephalothorax, it is optimally placed to reduce both the blood/gas diffusion distance and the return distance of oxygenated blood to the pericardium. Coconut crabs use their hindmost, smallest pair of legs to clean these breathing organs and to moisten them with water. The organs require water to properly function, and the coconut crab provides this by stroking its wet legs over the spongy tissues nearby. Coconut crabs may drink water from small puddles by transferring it from their chelipeds to their maxillipeds. In addition to the branchiostegal lung, the coconut crab has an additional rudimentary set of gills. Although these gills are comparable in number to aquatic species from the families Paguridae and Diogenidae, they are reduced in size and have comparatively less surface area. Sense of smell The coconut crab has a well-developed sense of smell, which it uses to locate its food. The process of smelling works very differently depending on whether the smelled molecules are hydrophilic molecules in water or hydrophobic molecules in air. Crabs that live in water have specialized organs called aesthetascs on their antennae to determine both the intensity and the direction of a scent. Coconut crabs live on the land, so the aesthetascs on their antennae are shorter and blunter than those of other crabs and are more similar to those of insects. While insects and the coconut crab originate from different clades, the same need to track smells in the air led to convergent evolution of similar organs. Coconut crabs flick their antennae as insects do to enhance their reception. Their sense of smell can detect interesting odors over large distances. The smells of rotting meat, bananas, and coconuts, all potential food sources, catch their attention especially. The olfactory system in the coconut crab's brain is well-developed compared to other areas of the brain. Life cycle Coconut crabs mate frequently and quickly on dry land in the period from May to September, especially between early June and late August. Males have spermatophores and deposit a mass of spermatophores on the abdomens of females; the oviducts opens at the base of the third pereiopods, and fertilisation is thought to occur on the external surface of the abdomen, as the eggs pass through the spermatophore mass. The extrusion of eggs occurs on land in crevices or burrows near the shore. The female lays her eggs shortly after mating and glues them to the underside of her abdomen, carrying the fertilised eggs underneath her body for a few months. At the time of hatching, the female coconut crab migrates to the seashore and releases the larvae into the ocean. The coconut crab takes a large risk while laying the eggs, because coconut crabs cannot swim: If a coconut crab falls into the water or is swept away, its weight makes it difficult, or impossible, for it to swim back to dry land. The egg laying usually takes place on rocky shores at dusk, especially when this coincides with high tide. The empty egg cases remain on the female's body after the larvae have been released, and the female eats the egg cases within a few days. The larvae float in the pelagic zone of the ocean with other plankton for 3–4 weeks, during which a large number of them are eaten by predators. The larvae pass through three to five zoea stages before moulting into the postlarval glaucothoe stage; this process takes from 25 to 33 days. 
Upon reaching the glaucothoe stage of development, they settle to the bottom, find and wear a suitably sized gastropod shell, and migrate to the shoreline with other terrestrial hermit crabs. At that time, they sometimes visit dry land. Afterwards, they leave the ocean permanently and lose the ability to breathe in water. As with all hermit crabs, they change their shells as they grow. Young coconut crabs that cannot find a seashell of the right size often use broken coconut pieces. When they outgrow their shells, they develop a hardened abdomen. The coconut crab reaches sexual maturity around 5 years after hatching. They reach their maximum size only after 40–60 years. They grow remarkably slowly, and may take up to 120 years to reach full size, as posited by ecologist Michelle Drew of the Max Planck Institute. Distribution Coconut crabs live in the Indian Ocean and the central Pacific Ocean, with a distribution that closely matches that of the coconut palm. The western limit of the range of B. latro is Zanzibar, off the coast of Tanzania, while the tropics of Cancer and Capricorn mark the northern and southern limits, respectively, with very few populations in the subtropics, such as the Ryukyu Islands. Some evidence indicates the coconut crab once lived on the mainland of Australia, Madagascar, Rodrigues, Easter Island, Tokelau, the Marquesas islands, and possibly India, but is now extirpated in those areas. As they cannot swim as adults, coconut crabs must have colonised the islands as planktonic larvae. Christmas Island in the Indian Ocean has the largest and densest population of coconut crabs in the world, although it is outnumbered there by more than 50 times by the Christmas Island red crab (Gecarcoidea natalis). Other Indian Ocean populations exist on the Seychelles, including Aldabra and Cosmoledo, but the coconut crab is extinct on the central islands. Coconut crabs occur on several of the Andaman and Nicobar Islands in the Bay of Bengal. They occur on most of the islands, and the northern atolls, of the Chagos Archipelago. In the Pacific, the coconut crab's range became known gradually. Charles Darwin believed it was only found on "a single coral island north of the Society group". The coconut crab is far more widespread, though it is not abundant on every Pacific island it inhabits. Large populations exist on the Cook Islands, especially Pukapuka, Suwarrow, Mangaia, Takutea, Mauke, Atiu, and Palmerston Island. These are close to the eastern limit of its range, as are the Line Islands of Kiribati, where the coconut crab is especially frequent on Teraina (Washington Island), with its abundant coconut palm forest. The Gambier Islands mark the species' eastern limit. Ecology Diet The diet of coconut crabs consists primarily of fleshy fruits (particularly Ochrosia ackeringae, Arenga listeri, Pandanus elatus, P. christmatensis); nuts (Aleurites moluccanus), drupes (Cocos nucifera) and seeds (Annona reticulata); and the pith of fallen trees. However, as they are omnivores, they will consume other organic materials such as tortoise hatchlings and dead animals, including other crustaceans, as well as the molted exoskeletons of other crustaceans. They have been observed to prey upon crabs such as Gecarcoidea natalis and Discoplax hirtipes, as well as scavenge on the carcasses of other coconut crabs. During a tagging experiment, one coconut crab was observed killing and eating a Polynesian rat (Rattus exulans). 
In 2016, a large coconut crab was observed climbing a tree to disable and consume a red-footed booby on the Chagos Archipelago. The coconut crab can take a coconut from the ground and cut it to a husk nut, take it with its claw, climb up a tree high and drop the husk nut, to access the coconut flesh inside. They often descend from the trees by falling, and can survive a fall of at least unhurt. Coconut crabs cut holes into coconuts with their strong claws and eat the contents, although it can take several days before the coconut is opened. Thomas Hale Streets discussed the behaviour in 1877, doubting that the animal would climb trees to get at the coconuts. As late as the 1970s there were doubts about the crab's ability to open coconuts. In the 1980s, Holger Rumpf was able to confirm Streets' report, observing and studying how they open coconuts in the wild. The animal has developed a special technique to do so; if the coconut is still covered with husk, it will use its claws to rip off strips, always starting from the side with the three germination pores, the group of three small circles found on the outside of the coconut. Once the pores are visible, the coconut crab bangs its pincers on one of them until it breaks. Afterwards, it turns around and uses the smaller pincers on its other legs to pull out the white flesh of the coconut. Using their strong claws, larger individuals can even break the hard coconut into smaller pieces for easier consumption. Habitat Coconut crabs are considered one of the most terrestrial-adapted of the decapods, with most aspects of its life oriented to, and centered around such an existence; they will actually drown in sea water in less than a day. Coconut crabs live alone in burrows and rock crevices, depending on the local terrain. They dig their own burrows in sand or loose soil. During the day, the animal stays hidden to reduce water loss from heat. The coconut crabs' burrows contain very fine yet strong fibres of the coconut husk which the animal uses as bedding. While resting in its burrow, the coconut crab closes the entrances with one of its claws to create the moist microclimate within the burrow, which is necessary for the functioning of its breathing organs. In areas with a large coconut crab population, some may come out during the day, perhaps to gain an advantage in the search for food. Other times, they emerge if it is moist or raining, since these conditions allow them to breathe more easily. They live almost exclusively on land, returning to the sea only to release their eggs; on Christmas Island, for instance, B. latro is abundant from the sea. Relationship with humans Adult coconut crabs have no known predators apart from other coconut crabs and humans. Its large size and the quality of its meat means that the coconut crab is extensively hunted and is very rare on islands with a human population. The coconut crab is eaten as a delicacy – and regarded as an aphrodisiac – on various islands, and intensive hunting has threatened the species' survival in some areas. In other regions, there are taboos associated with the crab that prohibit or limit hunting and consumption of Birgus latro. Such taboos have been recorded in the Nicobar Islands in India, on Flores Island in Indonesia, and among the Tao people of Taiwan. On the Nicobarian Kamorta Island, it is believed that eating the crab leads to bad luck and can cause severe, sometimes fatal, illnesses. 
In cases where a local falls ill after consuming the crab, their family creates a wooden replica of the creature. This effigy is then taken to the crab's capture site, where specific rituals are performed. While the coconut crab itself is not innately poisonous, it may become so depending on its diet, and cases of coconut crab poisoning have occurred. For instance, consumption of the sea mango (Cerbera manghas) by the coconut crab may make the coconut crab toxic due to the presence of cardiac cardenolides. The pincers of the coconut crab are powerful enough to cause noticeable pain to a human; furthermore, the coconut crab often keeps its hold for extended periods of time. Thomas Hale Streets reports a trick used by Micronesians of the Line Islands to get a coconut crab to loosen its grip: "It may be interesting to know that in such a dilemma a gentle titillation of the under soft parts of the body with any light material will cause the crab to loosen its hold." The coconut crab has its own local names in the Cook Islands and in the Mariana Islands, where it is sometimes associated with ancestral spirits because of the traditional belief that such spirits can return in the form of animals such as the coconut crab. A popular internet meme suggests that Amelia Earhart crash-landed on Nikumaroro and her remains were rapidly consumed by coconut crabs on the island. However, as no evidence of Earhart's plane has been found on or near Nikumaroro, this theory is generally discredited by historians. Conservation Coconut crab populations in several areas have declined or become locally extinct due to both habitat loss and human predation. In 1981, it was listed on the IUCN Red List as a vulnerable species, but a lack of biological data caused its assessment to be amended to "data deficient" in 1996. In 2018, IUCN updated its assessment to "vulnerable". Conservation management strategies have been put in place in some regions, such as minimum legal size limit restrictions in Guam and Vanuatu, and a ban on the capture of egg-bearing females in Guam and the Federated States of Micronesia. In the Northern Mariana Islands, hunting of non-egg-bearing adults above a carapace length of may take place in September, October, and November, and only under license. The bag limit is five coconut crabs on any given day, and 15 across the whole season. In Tuvalu, coconut crabs live on the motu (islets) in the Funafuti Conservation Area, a marine conservation area covering 33 km2 (12.74 mi2) of reef, lagoon and motu on the western side of Funafuti atoll.
Biology and health sciences
Crabs and hermit crabs
Animals
40167806
https://en.wikipedia.org/wiki/Kilonova
Kilonova
A kilonova (also called a macronova) is a transient astronomical event that occurs in a compact binary system when two neutron stars or a neutron star and a black hole merge. These mergers are thought to produce gamma-ray bursts and emit bright electromagnetic radiation, called "kilonovae", due to the radioactive decay of heavy r-process nuclei that are produced and ejected fairly isotropically during the merger process. The measured high sphericity of the kilonova AT2017gfo at early epochs was deduced from the blackbody nature of its spectrum. History The existence of thermal transient events from neutron star mergers was first proposed by Li & Paczyński in 1998. The radioactive glow arising from the merger ejecta was originally called a mini-supernova, as it is much dimmer than a typical supernova, the self-detonation of a massive star. The term kilonova was later introduced by Metzger et al. in 2010 to characterize the peak brightness, which they showed reaches 1000 times that of a classical nova. The first candidate kilonova to be found was detected on June 3, 2013, as the short gamma-ray burst GRB 130603B by instruments on board the Swift Gamma-Ray Burst Explorer and KONUS/WIND spacecraft, and then imaged by the Hubble Space Telescope 9 and 30 days later. On October 16, 2017, the LIGO and Virgo collaborations announced the first detection of a gravitational wave (GW170817) that corresponded with electromagnetic observations, demonstrating that the source was a binary neutron star merger. This merger was followed by a short GRB (GRB 170817A) and a longer lasting transient visible for weeks in the optical and near-infrared electromagnetic spectrum (AT 2017gfo), located only 140 million light-years away in the nearby galaxy NGC 4993. Observations of AT 2017gfo confirmed that it was the first conclusive observation of a kilonova. Spectral modelling of AT2017gfo identified the r-process elements strontium and yttrium, which conclusively ties the formation of heavy elements to neutron-star mergers. Further modelling showed the ejected fireball of heavy elements was highly spherical in early epochs. Some researchers have suggested that "thanks to this work, astronomers could use kilonovae as a standard candle to measure cosmic expansion. Since kilonovae explosions are spherical, astronomers could compare the apparent size of a supernova explosion with its actual size as seen by the gas motion, and thus measure the rate of cosmic expansion at different distances." Theory The inspiral and merging of two compact objects are a strong source of gravitational waves (GW). The basic model for thermal transients from neutron star mergers was introduced by Li-Xin Li and Bohdan Paczyński in 1998. In their work, they suggested that the radioactive ejecta from a neutron star merger is a source for powering thermal transient emission, later dubbed kilonova. Observations A first observational suggestion of a kilonova came in 2008 following the gamma-ray burst GRB 080503, where a faint object appeared in optical light after one day and rapidly faded. However, other factors such as the lack of an associated host galaxy and the detection of X-rays were not in agreement with the hypothesis of a kilonova. Another kilonova was suggested in 2013, in association with the short-duration gamma-ray burst GRB 130603B, where the faint infrared emission from the distant kilonova was detected using the Hubble Space Telescope. 
In October 2017, astronomers reported that observations of AT 2017gfo showed that it was the first secure case of a kilonova following a merger of two neutron stars. In October 2018, astronomers reported that GRB 150101B, a gamma-ray burst event detected in 2015, may be analogous to the historic GW170817. The similarities between the two events, in terms of gamma ray, optical and x-ray emissions, as well as in the nature of the associated host galaxies, are considered "striking", and this remarkable resemblance suggests the two separate and independent events may both be the result of the merger of neutron stars, and both may be a hitherto-unknown class of kilonova transients. Kilonova events, therefore, may be more diverse and common in the universe than previously understood, according to the researchers. In retrospect, GRB 160821B, a gamma-ray burst detected in August 2016, is now believed to also have been due to a kilonova, because its data closely resemble those of AT2017gfo. A kilonova was also thought to have caused the long gamma-ray burst GRB 211211A, discovered in December 2021 by Swift's Burst Alert Telescope (BAT) and the Fermi Gamma-ray Burst Monitor (GBM). These discoveries challenge the formerly prevailing theory that long GRBs exclusively come from supernovae, the end-of-life explosions of massive stars. GRB 211211A lasted 51 s; GRB 191019A (2019) and GRB 230307A (2023), with durations of around 64 s and 35 s respectively, have also been argued to belong to this class of long GRBs from neutron star mergers. In 2023, GRB 230307A was observed and associated with tellurium and lanthanides.
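The standard-candle idea quoted in the history section above, comparing the apparent angular size of a spherical kilonova with the physical size implied by its expanding gas, can be made concrete with a short numerical sketch. All of the inputs below are round, hypothetical values chosen only to illustrate the arithmetic; in practice the angular size is inferred from the blackbody spectrum and brightness rather than imaged directly, and this is not the published analysis of AT 2017gfo.

```python
# Schematic sketch of the "spherical kilonova as distance indicator" idea:
# physical radius from ejecta velocity x elapsed time, distance from
# physical size / angular size, expansion rate from recession velocity / distance.
# All numbers are round, hypothetical values for illustration only.
C_KM_S = 299_792.458            # speed of light, km/s
MPC_KM = 3.0857e19              # one megaparsec in km

ejecta_velocity = 0.2 * C_KM_S  # km/s, assumed expansion speed from the spectrum
elapsed_time_s = 2 * 86_400     # seconds since merger (about 2 days)
angular_radius_rad = 8e-12      # apparent angular radius (hypothetical; in practice
                                # inferred from the blackbody flux, not imaged)

physical_radius_km = ejecta_velocity * elapsed_time_s
distance_km = physical_radius_km / angular_radius_rad   # small-angle approximation
distance_mpc = distance_km / MPC_KM

recession_velocity = 3_000      # km/s, host-galaxy recession velocity (hypothetical)
hubble_estimate = recession_velocity / distance_mpc     # km/s per Mpc

print(f"distance ~ {distance_mpc:.0f} Mpc, expansion rate ~ {hubble_estimate:.0f} km/s/Mpc")
```

With these illustrative inputs the sketch returns a distance of roughly 40 Mpc and an expansion rate of roughly 70 km/s per Mpc, which is the right general order of magnitude for a nearby kilonova host.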
Physical sciences
Stellar astronomy
Astronomy
3999979
https://en.wikipedia.org/wiki/Fulgoridae
Fulgoridae
The family Fulgoridae is a large group of hemipteran insects, especially abundant and diverse in the tropics, containing over 125 genera worldwide. They are mostly of moderate to large size, many with a superficial resemblance to Lepidoptera due to their brilliant and varied coloration. Various genera and species (especially the genera Fulgora and Pyrops) are sometimes referred to as lanternflies or lanthorn flies. The head of some species is produced into a hollow process, resembling a snout, which is sometimes inflated and nearly as large as the body of the insect, sometimes elongated, narrow and apically upturned. It was believed, mainly on the authority of Maria Sibylla Merian, that this process, the so-called lantern, was luminous at night in the living insect. Carl Linnaeus adopted the statement without question and coined a number of specific names, such as laternaria, phosphorea and candelaria, to illustrate the supposed fact, and thus propagated the myth. Taxonomy Metcalf in 1938, as amended in 1947, recognized five subfamilies (Amyclinae, Aphaeninae, Fulgorinae, Phenacinae, and Poiocerinae) and twelve tribes in the Fulgoridae. By 1963 Lallemand had divided the Fulgoridae into eight subfamilies (Amyclinae, Aphaeninae, Enchophorinae, Fulgorinae, Phenacinae, Poiocerinae, Xosopharinae and Zanninae) and eleven tribes. This classification was generally accepted. However, 21st century molecular analysis has called into question the organization of Fulgoridae, and suggests that the subfamily Zanninae may not belong in Fulgoridae. Subfamilies and selected genera The NCBI taxonomy database and the Hemiptera databases list the subfamilies and genera currently recognized within the Fulgoridae.
Biology and health sciences
Hemiptera (true bugs)
Animals
4004214
https://en.wikipedia.org/wiki/Palaeotherium
Palaeotherium
Palaeotherium is an extinct genus of equoid that lived in Europe and possibly the Middle East from the Middle Eocene to the Early Oligocene. It is the type genus of the Palaeotheriidae, a group exclusive to the Palaeogene that was closest in relation to the Equidae, which contains horses plus their closest relatives and ancestors. Fossils of Palaeotherium were first described in 1782 by the French naturalist Robert de Lamanon and then closely studied by another French naturalist, Georges Cuvier, after 1798. Cuvier erected the genus in 1804 and recognized multiple species based on overall fossil sizes and forms. As one of the first fossil genera to be recognized with official taxonomic authority, it is recognized as an important milestone within the field of palaeontology. The research by early naturalists on Palaeotherium contributed to the developing ideas of evolution, extinction, and succession and demonstrating the morphological diversity of different species within one genus. Since Cuvier's descriptions, many other naturalists from Europe and the Americas recognized many species of Palaeotherium, some valid, some reclassified to different genera afterward, and others being eventually rendered invalid. The German palaeontologist Jens Lorenz Franzen modernized its taxonomy due to his recognition of many subspecies as part of his dissertation in 1968, which were subsequently accepted by other palaeontologists. Today, there are fourteen known species recognized, many of which have multiple subspecies. In 1992, the French palaeontologist Jean-Albert Remy recognized two subgenera that most species are classified to based on cranial anatomies: the specialized Palaeotherium and the more generalized Franzenitherium. Palaeotherium is an evolutionarily derived member of its family with tridactyl (or three-toed) forelimbs and hindlimbs, small post-canine diastemata (gaps between teeth), and premolars that are usually developed into molar-like forms. It shares many similar anatomical traits with other perissodactyls and has a large diversity in anatomical traits by species, with some species like P. magnum, P. curtum, and P. crassum being stockier in build and P. medium being more cursorial (or adapted for running). The genus ranges in size from the small species P. lautricense, with an estimated weight of , to the massive P. giganteum, thought to have been capable of weighing over . P. magnum, known by two mostly complete skeletons from France, could have reached approximately in shoulder height and in length. The large-sized species were therefore amongst the largest mammals in the Eocene of Europe. Palaeotherium may have lived in herds and, as demonstrated by its dentition, was able to actively niche partition with another palaeothere Plagiolophus by specializing on softer leaves and fruit, although both were mostly leaf-eating. Palaeotherium and other genera of the subfamily Palaeotheriinae likely descended from the earlier subfamily Pachynolophinae, which lived in both Europe and Asia as opposed to North America unlike undisputed members of the Equidae. By the time that the first species P. eocaenum appeared in the middle Eocene, western Europe was an archipelago that was isolated from the rest of Eurasia, meaning that it and subsequent species lived in an environment with various other faunas that also evolved with strong levels of endemism. 
The Iberian Peninsula had its own level of endemism with several species that are only known within the region, although they were replaced by more widespread species from central Europe by the late Eocene. Within both the middle and late Eocene, Palaeotherium consistently maintained a high species diversity and endured major environmental changes leading to a faunal turnover that occurred by the beginning of the late Eocene. By the early Oligocene, most of its species went extinct along with many genera of western European mammals as part of the Grande Coupure extinction and faunal turnover event, the causes of the extinctions being attributed mainly to environmental changes from increased glaciation and seasonality, negative interactions with immigrant faunas from Asia (competition and/or predation), or some combination of the two. P. medium survived past the Grande Coupure probably due to its cursorial nature that allowed it to travel across open lands more efficiently and escape immigrant carnivores; it was the last species of its genus and went extinct not long after the faunal turnover event. Taxonomy Research history First descriptions In 1782, the French naturalist Robert de Lamanon described a fossil skull including the upper and lower jaws that was collected from the quarries of Montmartre, a hill near Paris that belonged to the nobleman Philippe-Laurent de Joubert. He recognized that the molars and incisors were roughly similar to those of ruminants but noted that the dentition lacked modern analogues. Consequently, he hypothesized that the animal was extinct, had an amphibious lifestyle, and fed on both plants and fish. Since 1796, the French naturalist Georges Cuvier innovated the idea of vanished worlds of extinct animals, but as his observations of fossils were mostly limited to drawings and fragmentary fossils stored at the National Museum of Natural History, France, his palaeontological insight was limited early on. In 1798, he documented fossils from Montmartre, suggesting initially that they could have belonged to the canid genus Canis based on dental morphology. Later in the same year, he instead suggested that the fossils belonged to a pachyderm that was most closely related to tapirs and had trunks like them. He also figured out that the animals of Montmartre were of multiple species with different sizes and numbers of toes. The fossils of Montmartre were credited with great importance to the field of palaeontology, as they were embedded in deeper and harder sediments than other fossil mammals such as Megatherium. The science historian Bruno Belhoste argued that Cuvier's study of Palaeotherium in 1798 "marks the true birth of paleontology". Early taxonomy and depictions In 1804, Cuvier confirmed that the skull previously reported by de Lamanon belonged to a mammal. The skull preserves a complete set of 44 teeth that are similar to those of rhinoceroses and hyraxes. Cuvier recognized that the skull differs from other mammals and therefore established a new genus and species, Palaeotherium medium. The genus name Palaeotherium means "ancient beast", which is a compound of the Greek prefix () meaning 'old' or 'ancient' and the suffix () meaning 'beast' or 'wild animal'. He debunked Lamanon's hypothesis that Palaeotherium was an omnivorous amphibian and suspected that it had trunks akin to those of tapirs. From 1804 up to 1824, Cuvier erected a total of 13 species of Palaeotherium based on skull, dental, and postcranial material. He erected the second of these species, P. 
magnum, in 1804, explaining that it had similar but larger-sized dentition than P. medium. In describing the third and small-sized species, P. minus, he began to focus on the study of postcranial material rather than just cranial and dental material. In 1805, Cuvier erected P. crassum based on the three-toed forefeet, which were similar to tapirs and rhinoceroses in the shape of the metacarpal bones. In 1812, he named another species, P. curtum, based on metacarpal bones that were slightly smaller than those of P. crassum. As of 1968, four of the Palaeotherium species named by Cuvier were considered valid and remained classified in Palaeotherium (P. medium, P. magnum, P. crassum, P. curtum), six were valid but were eventually reclassified to different genera by different palaeontologists (P. minus, P. tapiroïdes, P. buxovillanum, P. aurelianense, P. occitanicum, and P. isselanum), and three were considered invalid (P. giganteum, P. latum, and P. indeterminatum). In 1812, Cuvier defined Palaeotherium as containing only tridactyl (or three-toed) species. He also speculated on life appearance and behaviour of several Palaeotherium species, but cautioned that such interpretations are limited by the fragmentary fossil material. He suggested that P. magnum would have resembled a horse-sized tapir with sparse hair. P. crassum and P. medium would also have had a tapir-like appearance, with proportionally longer legs and feet in the latter. Cuvier also published a speculative skeletal reconstruction of P. minus and hypothesized that it was smaller than a sheep and potentially cursorial given its slender legs and face. Finally, he theorized that P. curtum would have been the bulkiest species. In 1822, Cuvier published a reconstruction of the skeleton of P. magnum, outlining that it was the size of a Javan rhinoceros, was stocky in build, and had a massive head. The same year, Palaeotherium was also depicted in drawings by the French palaeontologist Charles Léopold Laurillard under the direction of Cuvier. Three sculptures representing Palaeotherium magnum, Palaeotherium medium and "Plagiolophus minus" (= Plagiolophus) are part of the Crystal Palace Dinosaurs exhibition in the Crystal Palace Park in London, which has been open to the public since 1854 and was created by the English sculptor Benjamin Waterhouse Hawkins. Both the P. magnum sculpture, the largest of the three, and the medium-sized P. medium sculpture were posed in a standing position, whereas the smaller "P. minus" sculpture depicts a sitting animal. The resemblance of the models to tapirs reflects early perceptions of the life appearance of Palaeotherium. However, the sculptures differ from living tapirs in several ways, such as shorter and taller faces, higher eye positions, slimmer legs, longer tails, and the presence of three toes on the forelimbs unlike the four toes of tapirs. Of the three sculptures, P. medium most closely resembles a tapir, and it has remained mostly intact. P. medium was depicted as having thick skin and a slender face and trunk, representing outdated perceptions that it was a slow animal. The original P. magnum sculpture was last known from a 1958 photograph before it was lost at some point afterward (it was replaced by a new republicated model in 2023); the photograph reveals that it was the largest of the three sculptures and had a robust and muscular build with large and deep eyes, a proportionally large head, and bulky legs. The model's trunk was wide and descended below the lower lip. 
The overall anatomy appears to be based on elephants. Palaeotherium proved to be a significant find to the field of palaeontology in multiple other aspects. For one, both the skeletal reconstruction drawing and the life restoration in Cuvier's works were incorporated into textbooks and handbooks around the world up to the 20th century. The genus was also incorporated into old orthogenesis models of the evolution of the horse theory as early as 1851 by British biologist Richard Owen and followed by other 19th century European naturalists such as Jean Albert Gaudry and Vladimir Kovalevsky. Later 19th century taxonomy In the 19th century, several of Cuvier's Palaeotherium species have been reclassified under different genera. "P." aurelianense was reclassified as its own genus Anchitherium by the German palaeontologist Hermann von Meyer in 1844. In an 1839–1864 osteography, the French naturalist Henri Marie Ducrotay de Blainville relisted "P." tapiroides, "P." buxovillanum and "P." occitanicum as species belonging to Lophiodon, but the latter two were eventually moved to Paralophiodon and Lophiaspis, respectively, in the 20th century. In 1862, Swiss zoologist Ludwig Ruetimeyer considered the previously recognised genera Plagiolophus and Propalaeotherium as distinct from Palaeotherium; these contain the species P. minor and P. isselanum, respectively. The 19th century also saw the erection of several new Palaeotherium species. In 1853, French palaeontologist Auguste Pomel erected the species P. duvali based on limb bones that he thought were less stocky than those of P. curtum. In his 1839–1864 osteography, Blainville erected P. girondicum, pointing out that its fossils were from the Gironde Basin and that Cuvier only briefly referenced it in an 1825 publication. In 1863, the French naturalist Jean-Baptiste Noulet created the species P. castrense based on an incomplete mandible that was uncovered from the commune of Viviers-lès-Montagnes that was placed in a fossil collection from Castres. In 1869, Swiss palaeontologists Pictet and Humbert erected the species Plagiolophus siderolithicus based on molars that are similar to those of P. minor but were smaller in size. The same year, German palaeontologist Oscar Fraas erected P. suevicum based on teeth that he thought had distinct enamel. The French naturalist Paul Gervais, in 1875, described fossil bones and teeth from the French commune of Dampleux, noting that they belonged to a species smaller than other Palaeotherium species and with dental dimensions similar to those of Plagiolophus minor. He assigned the fossils to the newly erected species P. eocaenum. Palaeotherium skeletons In 1873, the French geologist Gaston Casimir Vasseur uncovered the first complete skeleton of Palaotherium, attributed to P. magnum, from a gypsum quarry in the commune of Vitry-sur-Seine. The quarry was owned by the civil engineer Fuchs, who donated the skeleton to the National Museum of Natural History, France. The skeleton was described by Gervais in the same year, who noted that the neck was longer than expected and that the build was less stocky than that of tapirs and rhinoceroses. The skull of the specimen measures long. The naturalist said that the excavation of the specimen was difficult but completed by multiple skillful workers. Since its description, it has been displayed at the Gallery of Paleontology and Comparative Anatomy of the museum as an important and famous component. During the 20th century, a second complete skeleton of P. 
magnum was excavated from the gypsum (plaster) quarries of the French commune of Mormoiron. It was sent to the geological department of the University of Lyon and described after preparation by the French geologist Frédéric Roman in 1922. Roman published a reconstruction of the skeleton in his 1922 monograph. According to the Austrian palaeontologist Othenio Abel in 1924, it was the most complete skeleton of Palaeotherium and amongst the most complete of any early Cenozoic mammal known at the time, missing only a few ribs and the left femur. 20th century revisions In 1904, Swiss palaeontologist Hans Georg Stehlin created the species P. lautricense based on an upper jaw stored in the Muséum de Toulouse that originated from sandstone deposits at Castres. He also assigned two somewhat crushed skulls to this species. In his monograph on palaeotheres, published the same year, Stehlin considered most species of Palaeotherium as potentially valid, but noted that most taxonomists were reluctant to invalidate species erected by Cuvier. Stehlin considered P. girondicum to be a form of P. magnum, and described two forms of P. curtum from jaw fragments from La Débruge. He also named three new species – P. Mühlbergi, based on dental material from the Swiss municipality of Obergösgen; P. Renevieri, based on new finds from Mormont and a mandible identified by Pictet in 1869; and P. Rütimeyeri, from the municipality of Egerkingen, which he described as having primitive premolars. In 1917, French palaeontologist Charles Depéret recognized two additional species of Palaeotherium – P. Euzetense and P. Stehlini. In 1968, the German palaeontologist Jens Lorenz Franzen, then a graduate student, made major revisions to Palaeotherium in his dissertation. He invalidated several species as dubious names (P. giganteum (considered to have been a rhinocerotid instead), P. gracile, P. parvulum, P. commune, P. primaevum, and P. gervaisii) and synonymized many others with P. magnum (P. aniciense, P. subgracile), P. medium (P. brivatense, P. moeschi), P. crassum (P. indeterminatum), P. curtum (P. latum and P. buseri), P. duvali (P. kleini), and P. muehlbergi (P. velaunum). He additionally invalidated many species that had been erected throughout the 19th and early 20th centuries. He also erected P. pomeli based on fossils from a locality in Castres and reclassified "Plagiolophus" siderolithicum as a species of Palaeotherium. Furthermore, Franzen converted some species into subspecies (P. magnum girondicum, P. magnum stehlini, P. medium suevicum, and P. medium euzetense) and named six additional subspecies. In 1975, Spanish palaeontologist María Lourdes Casanovas-Cladellas erected the species P. crusafonti from a left maxilla with dentition from the Spanish site of Roc de Santa. In 1980, she and José-Vicente Santafé Llopis established a second Iberian species, P. franzeni, from the Spanish municipality of Sossís based on differences in dentition. In 1985, the French palaeontologist Jean-Albert Remy named a new subspecies, P. muehlbergi thaleri, in honor of fellow palaeontologist Louis Thaler; these fossils, consisting of two skulls with mandibles, were from the commune of Saint-Étienne-de-l'Olm. In 1991, Casanovas-Cladellas and Santafé Llopis erected P. llamaquiquense from partial jaw material from the Spanish locality of Llamaquique in the city of Oviedo, from which the species name derives. 
The next year, in 1992, Remy proposed the creation of two subgenera of Palaeotherium based on cranial characteristics: Palaeotherium and Franzenitherium. In 1993, the Spanish palaeontologist Miguel Ángel Cuesta Ruiz-Colmenares established the species P. giganteum based on teeth from the Mazaterón site in the Duero Basin, considering it to be the largest species of Palaeotherium known. In 1998, Casanovas-Cladellas et al. erected the subspecies P. crassum sossissense from a fragmented right maxilla with dentition from Sossís in Spain. They also invalidated the previously named P. franzeni and reassigned the material to P. magnum stehlini. Classification Palaeotherium is the type genus of the Palaeotheriidae, largely considered to be one of two major hippomorph families in the superfamily Equoidea, the other being the Equidae. Alternatively, some authors have proposed that equids are more closely related to the Tapiromorpha than to the Palaeotheriidae. The Palaeotheriidae is usually thought to consist of two subfamilies, the Palaeotheriinae and the Pachynolophinae; a few authors have alternatively argued that pachynolophines are more closely related to other perissodactyl groups than to palaeotheriines. Some authors have also considered the Plagiolophinae to be a separate subfamily, while others group its genera into the Palaeotheriinae. Palaeotherium has also been suggested to belong to the tribe Palaeotheriini, one of three proposed tribes within the Palaeotheriinae along with the Leptolophini and Plagiolophini. The Eurasian distribution of the palaeotheriids (or palaeotheres) was in contrast to that of equids, which are generally thought to have been an endemic radiation in North America. Some of the most basal equoids of the European landmass are of uncertain affinities, with some genera thought to potentially belong to the Equidae. Palaeotheres are well known for having lived in western Europe during much of the Palaeogene but were also present in eastern Europe, possibly the Middle East, and, in the case of pachynolophines (or pachynolophs), Asia. The Perissodactyla makes its earliest known appearance in the European landmass in the MP7 faunal unit of the Mammal Palaeogene zones. During this unit, many genera of basal equoids such as Hyracotherium, Pliolophus, Cymbalophus, and Hallensia made their first appearances there. A majority of the genera persisted to the MP8-MP10 units, and pachynolophines such as Propalaeotherium and Orolophus arose by MP10. The MP13 unit saw the appearances of later pachynolophines such as Pachynolophus and Anchilophus along with definite records of the first palaeotheriines such as Palaeotherium and Paraplagiolophus. The palaeotheriine Plagiolophus has been suggested to have potentially made an appearance by MP12. By MP14, the subfamily had begun to diversify, and the pachynolophines, though largely replaced, persisted into the late Eocene. In addition to more widespread palaeothere genera such as Plagiolophus, Palaeotherium, and Leptolophus, some species of which reached medium to large sizes, various other palaeothere genera that were endemic to the Iberian Peninsula, such as Cantabrotherium, Franzenium, and Iberolophus, appeared by the middle Eocene. The phylogenetic tree for several members of the family Palaeotheriidae, as well as three outgroups, as created by Remy in 2017 and followed by Remy et al. 
in 2019 is shown below. In this phylogeny, the Palaeotheriidae is recovered as a monophyletic clade, meaning that it forms a natural group that did not give rise to any separate descendant lineages outside of it. Hyracotherium sensu stricto (in a strict sense) is recovered as amongst the first offshoots of the family and a member of the Pachynolophinae. "H." remyi, formerly part of the now-invalid genus Propachynolophus, is recovered as a sister taxon to more derived palaeotheres. Both Pachynolophus and Lophiotherium, recovered as pachynolophines, are found to be monophyletic genera. The other pachynolophines Eurohippus and Propalaeotherium constitute a paraphyletic grade in relation to members of the derived and monophyletic subfamily Palaeotheriinae (Leptolophus, Plagiolophus, and Palaeotherium), thus rendering the Pachynolophinae a paraphyletic subfamily. Inner systematics Since 1968, many species of Palaeotherium have been divided into subspecies on the basis of various intraspecific variations. Since 1992, two subgenera have also been officially recognized for Palaeotherium. The first of these subgenera is Palaeotherium, which includes the type species P. magnum along with P. medium, P. crassum, P. curtum, P. castrense, P. siderolithicum, and P. muehlbergi. The second subgenus is Franzenitherium, which includes the type species P. lautricense as well as P. duvali and was named in honor of Franzen's review of Palaeotherium. The subgenus Palaeotherium is distinguished from the other subgenus, Franzenitherium, by specialized traits. For example, the orbit of Palaeotherium being positioned in front of the skull's midlength is a specialized trait compared to that of Franzenitherium, which is aligned more closely with the skull's midlength. Several Palaeotherium species are too fragmentary to be placed in either of the subgenera. The following table lists all valid species and subspecies of Palaeotherium, the subgenus to which each is classified, the Mammal Palaeogene faunal units from which they are recorded based on fossil deposit appearances, the authors who named the taxa, and the year that they were formally named: Description Skull The Palaeotheriidae are distinguished from other perissodactyls mostly based on features of the skull. For example, the orbits are generally wide open at the back and are located in the middle of the skull or slightly more frontwards. The nasal bones of palaeotheres are thick to very thick. Palaeotherium itself is characterized by several cranial traits that distinguish it from other palaeothere genera, such as an elongated zygomatic process of the squamosal bone extending to the maxilla, the presence of an anastomosis (anatomical connection between two passageways) roughly at the sphenoid bone, and prominent temporalis muscle developments. The calvaria ranges in base length from to depending on the species. The height and width proportions of the skull of Palaeotherium are roughly equivalent to those of other taxa within the Equoidea; members of the superfamily have relatively shortened front facial areas. The skull's top peaks at the far back area, although this is not observed in P. lautricense. The sagittal crest can be prominent and depends on the age and sex of the individual for development. In comparison to other equoids, where the skull's maximum width extends above the front root of the parallel zygomatic arches, those of Palaeotherium and most other palaeotheres (except Leptolophus) extend back to the joint of the squamosal bone and mandible. 
The orbit on the skull of Palaeotherium, unlike that of other equoids, is proportionally smaller and located somewhat in front of the skull's midlength; the latter trait may be even more pronounced in P. medium. Similar to other Palaeogene equoids, the front edge of the orbit is aligned with M1 or M2 while the back area is wide. Unlike in most other palaeotheres, its nasal opening stretches up to the P3 tooth at minimum, or up to the front edge of the orbit above M3 in the case of P. magnum. While the shapes and proportions of the nasal bones vary by species, they extend beyond P1 in adults and sometimes even beyond the canine, as in equines. The temporal fossae are large but vary in proportion. The cranial vault is broad, domed, and wide relative to the rest of the skull. The horizontal ramus of the mandible is generally thick and tall and has an elongated mandibular symphysis, but its width and the morphology of its lower area vary by species. It is wide in both the front and back areas and low compared to equines. The joint for the squamosal and mandible of Palaeotherium is low compared to those of Plagiolophus and Leptolophus. The angular process, located above the angle of the mandible, is blocked from further expansion by the mandibular notch and is well developed at its rear, as in Palaeogene equids. Dentition Derived palaeotheres are generally diagnosed as having selenolophodont (selenodont-lophodont ridge form) upper molars (M/m) and selenodont (crescent-shaped ridge form) lower molars that are mesodont, or medium-crowned, in height. The canines (C/c) strongly protrude and are separated from the premolars (P/p) by medium to long diastemata (gaps between two close teeth) and from the incisors (I/i) by short ones in both the upper and lower dentition. The other teeth are paired closely with each other in both the upper and lower rows. The dental formula of Palaeotherium is 3.1.4.3 over 3.1.4.3, for a total of 44 teeth, consistent with the primitive dental formula for early-middle Palaeogene placental mammals. The incisors are shovel-shaped and, as in modern horses, are used for chewing at right angles in relation to their longitudinal axes. They have no cutting functions but instead are used for grasping food, much as tweezers grasp items. The canines are proportionally large and dagger-shaped. They were probably not used for cutting or chewing given how they are oriented, but may have been used in self-defence and conspecific fights. The decreased length of the postcanine diastema in Palaeotherium and the equid subfamily Anchitheriinae may be correlated with increases in body size. This trend may be due to the need to improve chewing performance through molarization and proportional size increases of the premolars. Postcanine diastemata are strongly reduced in early species such as P. castrense; in later species, they vary from small (P. crassum, P. curtum) to large (P. medium, P. magnum). The separation of the cheek teeth from the incisors and canines attests to their independent and specific chewing functions. The premolars and preceding deciduous teeth both tend to have molarized forms (meaning molar-like shapes) and have newly developing hypocone cusps on them. The forms of the deciduous premolars (dP) of juvenile Palaeotherium and other palaeotheriines distinguish them from the earlier pachynolophines, where the dP2-dP4 of juvenile P. renevieri and P. magnum are both molarized and four-cusped (although dP1 is triangular). Late Eocene species of Palaeotherium tend to have more molariform premolars. 
The non-molarized premolars are composed of four to five cusps (one to two external, two intermediate, and one internal) while the molarized premolars and molars have six cusps (two external, two intermediate, and two internal). The upper molars are medium-crowned (shorter than those of modern equids) and have ectolophs (crests or ridges of upper molar teeth) that are about twice the height of the inner cusps and curve into a W shape. The W-shaped ectolophs themselves are made up of two articulated crescents. The lower molarized premolars and molars are about half as wide as their upper counterparts. The mesostyle cusp (a small cusp type) present in the molars thickens from M1 to M3. The lingual lobes (or divisions) in the upper molars are closely aligned with the ectolophs. Postcranial skeleton The overall postcranial anatomy of Palaeotherium is best known from a skeleton of P. magnum uncovered from Mormoiron. The vertebral column is made up of seven large cervical vertebrae, seventeen thoracic vertebrae, six lumbar vertebrae, six sacral vertebrae, and fifteen caudal vertebrae. The cervical vertebrae, comprising the neck, measure long while the caudal vertebrae, comprising the tail, measure long. The sacrum is triangular and similar to that of the Equidae, but is slightly wider in its front area. P. magnum would have had a total of thirty-four ribs based on the total number of thoracic vertebrae. As in equids, the front ribs are strong and flattened. The back portion of the thorax would have been wider than in horses and roughly comparable to those of tapirs and rhinoceroses, but was not as long as that of the latter. The ribs are separated from the sternum, which is approximately the same size as the thorax. P. magnum has generally strong and stocky limb bones. The femora (upper thigh bones) of P. crassum and P. medium, in comparison, are less robust. Palaeotherium has a straighter and less concave trochlea of the astragalus than Plagiolophus. The calcaneum is semirectangular in shape but slightly wider on its rear end. The cuboid bone is high and narrow, similar to that of Anchitherium. Most species of Palaeotherium have tridactyl (three-toed) hindlimbs and forelimbs, in contrast to earlier palaeotheres that have tetradactyl (four-toed) forelimbs and tridactyl hindlimbs. P. eocaenum might have had a tetradactyl forelimb, as indicated by a manus that has been tentatively assigned to it. Palaeotherium differs from Plagiolophus in its long and narrow carpals and in its metacarpal bones, which are close in length to each other and develop into wide ungual phalanges. The tridactyl foot morphology, with all three digits being functional, suggests digitigrade locomotion. Palaeotherium shows an exceptional amount of variation in the shape of its third metacarpal and its manus dimensions. P. curtum has very robust forelimb bones including a short and stocky manus, which suggests that it was stocky in build. P. magnum and P. crassum resemble tapirs, especially the mountain tapir (Tapirus pinchaque), in the build of their forelimbs. P. magnum has less slender radii and metacarpals than P. crassum, making them comparable to those of modern tapirs. P. medium and Plagiolophus appear to be the most cursorial palaeotheres due to their elongated and gracile metacarpals. P. medium has a more distinctive foot morphology than other Palaeotherium species due to narrower and higher feet and longer metapodial bones. The cursorial adaptations of P. 
medium are further supported by the morphology of the humerus. The middle metatarsal bone is larger and more robust than the others. The fourth toe of P. magnum appears slightly arched and is slightly longer than the second toe. Footprints Several types of tracks have been suggested to belong to Palaeotherium, among them the ichnogenus Palaeotheriipus, which was named by the palaeontologist Paul Ellenberger in 1980 based on tracks from lacustrine limestones in the department of Gard in France. Ellenberger suggested that the ichnogenus most closely corresponds to P. medium or P. cf. crassum. The ichnogenus is diagnosed as a very short and tridactyl footprint in which the outer digits (II and IV) are shallowly impressed while the middle digit (digit III) is more deeply impressed. It differs from another palaeothere ichnogenus, Plagiolophustipus, which was suggested to have been made by Plagiolophus, by the presence of smaller and broader digit impressions. Lophiopus, possibly produced by Lophiodon, differs by smaller digit impressions that are more widely spaced, while Rhinoceripeda, attributed to the Rhinocerotidae, is an oval-shaped footprint with three or five digits. Palaeotheriipus is known from both France and Iran, whereas Plagiolophustipus is currently known from Spain. Two ichnospecies of Palaeotheriipus have been named. The type ichnospecies is Palaeotheriipus similimedius, based on the French material. These footprints are wider () than long (), with digits that diverge widely from each other at angles of at least 50°. The hoof of digit III appears to be wider than those of the outer toes. Ellenberger suggested that the ichnospecies most closely corresponds with either P. medium euzetense or P. medium perrealense. A second ichnospecies, P. sarjeanti, was described from eastern Iran and opens the possibility that palaeotheres could have extended in geographical range to the region by the middle to late Eocene. It was named in honor of the ichnologist William A. S. Sarjeant and is diagnosed as showing a relatively round middle digit that is broader and longer than the outer digits. The manus is less elongated than the pes. Additional footprints from the d'Apt-Forcalquier basin in France, dated to the middle Eocene and described by G. Bessonat et al. in 1969, are recorded to be larger than the footprints of P. similimedius. They have been suggested to have been produced by the species P. magnum. Size Palaeotherium includes species of various sizes that range in skull base length from . The length of the tooth row from P2 to M3 ranges from in the smallest species, P. lautricense, to in the largest species, P. giganteum. P. magnum, which was previously considered the largest species, is close to P. giganteum in size with one tooth row measuring . P. medium is estimated to be the size of a subadult South American tapir (Tapirus terrestris), larger than the roe deer-sized Plagiolophus minor. The P. magnum Mormoiron skeleton demonstrates that individuals could have reached approximately in shoulder height and in length. Additionally, its head and neck together measure , and its forelimb (humerus to hoof) also measures in length. In 2015, Remy calculated the body mass of several Eocene European perissodactyl species based on a formula originally proposed by Christine M. Janis in 1990. He estimated that the small species P. lautricense could have weighed just . P. siderolithicum could have had an average weight of around . P. aff. ruetimeyeri could have had a larger body mass of while P. 
pomeli was estimated at . P. castrense robiacense was estimated to be much heavier, at . According to Piere Perales-Gogenola et al. in 2022, the largest species P. giganteum could have had a body weight over . MacLaren and Nauwelaerts proposed a somewhat lower weight estimate of for the large species P. magnum. Palaeobiology Palaeotherium species vary substantially in size, morphology, and build. The skeletons of P. magnum, P. curtum, and P. crassum were relatively robust, while that of P. medium was more gracile, suggesting increased cursoriality. The evolutionary history of palaeotheres might have emphasized the sense of smell rather than sight or hearing, as evidenced by the smaller orbits and the apparent lack of a derived auditory system. A well-developed sense of smell could have allowed palaeotheres to keep track of their herds, implying gregarious behaviours. The wide diversity of palaeothere forelimb morphologies attests to different degrees of cursoriality in separate species. They generally had smaller hindlimbs compared to forelimbs, suggesting weaker tendencies towards cursoriality, consistent with adaptation to closed and stable environments. In 2000, Giuseppe Santi proposed that Palaeotherium could have been able to stand on its hind legs to reach high plants. P. magnum may have been able to browse on plants at over when quadrupedal; when on its hind legs, it could have reached up to or even in height. However, Jerry J. Hooker argued that there is no evidence for such facultative bipedalism in P. magnum, unlike in the contemporary artiodactyl Anoplotherium. The long neck of P. magnum suggests that it might have browsed on higher plants and/or drunk water from below. Palaeotherium was amongst the largest mammals to inhabit Europe during the middle to late Eocene, with only a few contemporary mammalian groups such as lophiodonts, anoplotheriids, and other palaeotheres reaching similar or larger body sizes. According to Sandra Engels in a conference paper, both Palaeotherium and Plagiolophus have dentitions capable of processing harder items such as hard fruits, while their predecessors, such as Hyracotherium and Propalaeotherium, were adapted to softer food. Unlike in equids and basal equoids, the molars of later palaeotheres serve the dual purposes of shearing food on the buccal side followed by crushing it on the lingual side, an adaptation for broader herbivorous diets. The two derived genera have brachyodont (low-crowned) dentition, suggesting that both genera were mostly folivorous (leaf-eating) and did not have strong frugivorous (fruit-eating) tendencies, as evidenced by the lower amounts of rounded cusps on their molars. While both genera may have incorporated some fruit into their diets, the higher lingual tooth wear in Plagiolophus indicates it ate more fruit than Palaeotherium. Because of their likely tendencies to browse on higher plants, as evidenced by their long necks and the woodland environments they inhabited, it is unlikely that minerals, usually consumed from grazing on ground plants, significantly affected the tooth wear of either of these genera. The tooth wear in both genera could have been the result of chewing on fruit seeds. It is likely that Palaeotherium ate softer food such as younger leaves and fleshy fruit that may have had hard seeds while Plagiolophus leaned towards consuming tough food such as older leaves and harder fruit. 
The interpretation that Palaeotherium consumed more leaf and woody material and less fruit compared to Plagiolophus is supported by the two having somewhat different chewing functions and Palaeotherium being more efficient in shearing food. Palaeoecology Middle Eocene For much of the Eocene, a hothouse climate with humid, tropical environments with consistently high precipitations prevailed. Modern mammalian orders including the Perissodactyla, Artiodactyla, and Primates (or the suborder Euprimates) appeared already by the early Eocene, diversifying rapidly and developing dentitions specialized for folivory. The omnivorous forms mostly either switched to folivorous diets or went extinct by the middle Eocene (47–37 million years ago) along with the archaic "condylarths". By the late Eocene (approx. 37–33 mya), most of the ungulate form dentitions shifted from bunodont (or rounded) cusps to cutting ridges (i.e. lophs) for folivorous diets. Land connections between western Europe and North America were interrupted around 53 Ma. From the early Eocene up until the Grande Coupure extinction event (56–33.9 mya), western Eurasia was separated into three landmasses: western Europe (an archipelago), Balkanatolia (in-between the Paratethys Sea of the north and the Neotethys Ocean of the south), and eastern Eurasia. The Holarctic mammalian faunas of western Europe were therefore mostly isolated from other landmasses including Greenland, Africa, and eastern Eurasia, allowing for endemism to develop. Therefore, the European mammals of the late Eocene (MP17–MP20 of the Mammal Palaeogene zones) were mostly descendants of endemic middle Eocene groups. Palaeotherium made its first appearance with the species P. eocaenum in the MP13 unit. By then, it would have coexisted with perissodactyls (Palaeotheriidae, Lophiodontidae, and Hyrachyidae), non-endemic artiodactyls (Dichobunidae and Tapirulidae), endemic European artiodactyls (Choeropotamidae (possibly polyphyletic, however), Cebochoeridae, and Anoplotheriidae), and primates (Adapidae). Both the Amphimerycidae and Xiphodontidae made their first appearances by the level MP14. The stratigraphic ranges of the early species of Palaeotherium also overlapped with metatherians (Herpetotheriidae), cimolestans (Pantolestidae, Paroxyclaenidae), rodents (Ischyromyidae, Theridomyoidea, Gliridae), eulipotyphlans, bats, apatotherians, carnivoraformes (Miacidae), and hyaenodonts (Hyainailourinae, Proviverrinae). Other MP13-MP14 sites have also yielded fossils of turtles and crocodylomorphs, and MP13 sites are stratigraphically the latest to have yielded remains of the bird clades Gastornithidae and Palaeognathae. The Egerkingen α + β locality, dating to MP14, records fossils of P. eocaenum, P. ruetimeyeri, and P. castrense castrense. Other mammal genera recorded within the locality include the herpetotheriid Amphiperatherium, ischyromyids Ailuravus and Plesiarctomys, pseudosciurid Treposciurus, omomyid Necrolemur, adapid Leptadapis, proviverrine Proviverra, palaeotheres (Propalaeotherium, Anchilophus, Lophiotherium, Plagiolophus), hyrachyid Chasmotherium, lophiodont Lophiodon, dichobunids Hyperdichobune and Mouillacitherium, choeropotamid Rhagatherium, anoplotheriid Catodontherium, amphimerycid Pseudamphimeryx, cebochoerid Cebochoerus, tapirulid Tapirulus, mixtotheriid Mixtotherium, and the xiphodonts Dichodon and Haplomeryx. MP16 marks the first appearances of several species of Palaeotherium in the Central European region, namely P. castrense robiacense, P. pomeli, P. 
siderolithicum, and P. lautricense, some of which are exclusive to the unit (P. pomeli and P. lautricense) and one of which makes its latest appearance (P. castrense). The locality of Robiac in France records the likes of Palaeotherium aff. ruetimeyeri and all the aforementioned species from the region in MP16 along with the herpetotheriids Amphiperatherium and Peratherium, apatemyid Heterohyus, nyctithere Saturninia, omomyids (Necrolemur, Pseudoloris, and Microchoerus), adapid Adapis, ischyromyid Ailuravus, glirid Glamys, pseudosciurid Sciuroides, theridomyids Elfomys and Pseudoltinomys, hyaenodonts (Paracynohyaenodon, Paroxyaena, and Cynohyaenodon), carnivoraformes (Simamphicyon, Quercygale, and Paramiacis), cebochoerids Cebochoerus and Acotherulum, choeropotamids Choeropotamus and Haplobunodon, tapirulid Tapirulus, anoplotheriids (Dacrytherium, Catodontherium, and Robiatherium), dichobunid Mouillacitherium, robiacinid Robiacina, xiphodonts (Xiphodon, Dichodon, Haplomeryx), amphimerycid Pseudamphimeryx, lophiodont Lophiodon, hyrachyid Chasmotherium, and other palaeotheres (Plagiolophus, Leptolophus, Anchilophus, Metanchilophus, Lophiotherium, Pachynolophus, Eurohippus). MP16 also records two species that are restricted to the unit, P. llamaquiquense and P. giganteum, both of which were endemic to the Iberian region. MP17 marks the restricted appearance of another Iberian endemic species, P. crusafonti. The endemic species of Palaeotherium were amongst the many taxa of palaeotheres known only from the Iberian region. P. giganteum is recorded from the Spanish locality of Mazaterón along with the testudines Hadrianus and Neochelys, alligatoroid Diplocynodon, baurusuchid Iberosuchus, adapoid Mazateronodon, omomyid Pseudoloris, pseudosciurid Sciuroides, theridomyids Pseudoltinomys and Remys, hyaenodont Proviverra, anoplotheriids (Duerotherium and cf. Dacrytherium), xiphodonts (cf. Dichodon), and other palaeotheres (Paranchilophus, Plagiolophus, Leptolophus, Cantabrotherium, Franzenium, and Iberolophus). After MP16, a faunal turnover occurred, marking the disappearances of the lophiodonts and European hyrachyids as well as the extinctions of all European crocodylomorphs except for the alligatoroid Diplocynodon. The causes of the faunal turnover have been attributed to a shift from humid and highly tropical environments to drier and more temperate forests with open areas and more abrasive vegetation. The surviving herbivorous faunas shifted their dentitions and dietary strategies accordingly to adapt to abrasive and seasonal vegetation. However, the environments were still subhumid and covered by subtropical evergreen forests. The Palaeotheriidae was the sole remaining European perissodactyl group, and frugivorous-folivorous or purely folivorous artiodactyls became the dominant group in western Europe. Late Eocene The late Eocene MP17 unit marks the first appearances of several species of Palaeotherium, namely P. magnum, P. medium, P. curtum, P. crassum, P. duvali, and P. muehlbergi. The temporal range of P. siderolithicum, first known in MP16, continued up to MP19, and P. renevieri made its first and only appearance in MP19. Some other species extended in temporal range up to MP19 (P. duvali, P. crassum) while some others lasted up to MP20 (P. magnum, P. curtum, P. muehlbergi). By the late Eocene, the latest species of Palaeotherium were widespread throughout western Europe, including what is now Portugal, Spain, France, Switzerland, Germany, and the United Kingdom. 
Additionally, the genus is known from as far east as the Thrace Basin of Greece in the eastern European region in the middle to late Eocene. The faunas of eastern Europe vastly differed from those of western Europe despite the presence of Palaeotherium in both regions. It is possible that Palaeotherium was distributed as far east as eastern Iran, depending on whether the footprints are attributable to it. The presence of Palaeotherium in eastern Europe suggests periodic connectivity between Balkanatolia and other Eurasian regions. Within the late Eocene, the Cainotheriidae and derived members of the Anoplotheriinae both made their first appearances by MP18. Also, several migrant mammal groups had reached western Europe by MP17a-MP18, namely the Anthracotheriidae, Hyaenodontinae, and Amphicyonidae. In addition to snakes, frogs, and salamandrids, a rich assemblage of lizards is known in western Europe from MP16–MP20, representing the Iguanidae, Lacertidae, Gekkonidae, Agamidae, Scincidae, Helodermatidae, and Varanoidea, most of which were able to thrive in the warm temperatures of the region. The MP18 locality of La Débruge in France holds fossil records of multiple species of Palaeotherium, namely P. curtum villerealense, P. duvali duvali, P. muehlbergi thaleri, P. medium perrealense, P. crassum robustum, and P. magnum girondicum. The locality indicates that the multiple subspecies of Palaeotherium coexisted with the herpetotheriid Peratherium, theridomyids Blainvillimys and Theridomys, ischyromyid Plesiarctomys, glirid Glamys, hyaenodonts Hyaenodon and Pterodon, amphicyonid Cynodictis, palaeotheres Plagiolophus and Anchilophus, dichobunid Dichobune, choeropotamid Choeropotamus, cebochoerids Cebochoerus and Acotherulum, anoplotheriids (Anoplotherium, Diplobune, and Dacrytherium), tapirulid Tapirulus, xiphodonts Xiphodon and Dichodon, cainothere Oxacron, amphimerycid Amphimeryx, and the anthracothere Elomeryx. Extinction The Grande Coupure event during the latest Eocene to earliest Oligocene (MP20-MP21) is one of the largest and most abrupt faunal turnovers in the Cenozoic of Western Europe and is coincident with climate forcing events that brought cooler and more seasonal climates. The event led to the extinction of 60% of western European mammalian lineages, which were subsequently replaced by Asian immigrants. The Grande Coupure is often dated directly to the Eocene-Oligocene boundary at 33.9 Ma, although some estimate that the event began slightly later, at 33.6–33.4 mya. The event occurred during or after the Eocene-Oligocene transition, an abrupt shift from a hot greenhouse world that characterised much of the Palaeogene to a coolhouse/icehouse world from the early Oligocene onwards. The massive drop in temperatures resulted from the first major expansion of the Antarctic ice sheets, which caused drastic pCO2 decreases and an estimated drop of ~ in sea level. Many palaeontologists agree that glaciation and the resulting drops in sea level allowed for increased migrations between Balkanatolia and western Europe. The Turgai Strait, which once separated much of Europe from Asia, is often proposed as the main European seaway barrier prior to the Grande Coupure, but some researchers have recently challenged this perception, arguing that it had completely receded by 37 Ma, long before the Eocene-Oligocene transition. In 2022, Alexis Licht et al. 
suggested that the Grande Coupure could have been synchronous with the Oi-1 glaciation (33.5 Ma), which records a decline in atmospheric CO2, boosting the Antarctic glaciation that had already begun by the Eocene-Oligocene transition. The Grande Coupure marked a large faunal turnover that saw the arrivals of anthracotheres, entelodonts, ruminants (Gelocidae, Lophiomerycidae), rhinocerotoids (Rhinocerotidae, Amynodontidae, Eggysodontidae), carnivorans (later Amphicyonidae, Amphicynodontidae, Nimravidae, and Ursidae), eastern Eurasian rodents (Eomyidae, Cricetidae, and Castoridae), and eulipotyphlans (Erinaceidae). The MP20 unit, the last before the Grande Coupure, marks the last appearances of most species of Palaeotherium, namely P. magnum, P. curtum, and P. muehlbergi. P. medium survived the Grande Coupure event, based on its appearance in MP21, making it the last representative of its genus before its extinction. The extinction and faunal turnover devastated many of the endemic faunas of western Europe by driving many mammalian genera to extinction, the causes being attributed to interactions with immigrant faunas (competition, predation), environmental changes from cooling climates, or some combination of the two. Researchers have proposed theories as to why both P. medium and Plagiolophus minor survived the Grande Coupure event up to the early Oligocene whereas other species went extinct. Santi proposed that the dentition and cranial musculature of Palaeotherium were generally unsuited to the turnover of closed habitats caused by aridification and the expansion of more open habitats, leaving the genus unable to adapt to the environmental changes. He also suggested that its poorer senses of sight and hearing, along with its slow locomotion, could have made it more vulnerable to immigrant carnivores. He further explained that P. medium could have survived longer than the other species of Palaeotherium because of its cursorial nature, with MacLaren and Nauwelaerts similarly stating that Plagiolophus minor was better suited than its relatives to adapt to open and drier habitats and to immigrant predators because of its smaller size and cursorial nature. Sarah C. Joomun et al. determined that certain faunas may have arrived later and therefore may not have played roles in the extinctions. They concluded that climate change, which led to increased seasonality and changes in plant food availability, left certain palaeotheres and artiodactyls unable to adapt to the major changes, driving them to extinction.
Biology and health sciences
Perissodactyla
Animals
26137572
https://en.wikipedia.org/wiki/Plant%20litter
Plant litter
Plant litter (also leaf litter, tree litter, soil litter, litterfall or duff) is dead plant material (such as leaves, bark, needles, twigs, and cladodes) that has fallen to the ground. This detritus or dead organic material and its constituent nutrients are added to the top layer of soil, commonly known as the litter layer or O horizon ("O" for "organic"). Litter is an important factor in ecosystem dynamics, as it is indicative of ecological productivity and may be useful in predicting regional nutrient cycling and soil fertility. Characteristics and variability Litterfall is characterized as fresh, undecomposed, and easily recognizable (by species and type) plant debris. This can be anything from leaves, cones, needles, twigs, bark, seeds/nuts, logs, or reproductive organs (e.g. the stamen of flowering plants). Items larger than 2 cm in diameter are referred to as coarse litter, while anything smaller is referred to as fine litter or litter. The type of litterfall is most directly affected by ecosystem type. For example, leaf tissues account for about 70 percent of litterfall in forests, but woody litter tends to increase with forest age. In grasslands, there is very little aboveground perennial tissue, so the annual litterfall is very low and quite nearly equal to the net primary production. In soil science, soil litter is classified in three layers, which form on the surface of the O horizon. These are the L layer (fresh, largely undecomposed litter), the F layer (fragmented, partially decomposed material), and the H layer (well-decomposed humus). The litter layer is quite variable in its thickness, decomposition rate and nutrient content and is affected in part by seasonality, plant species, climate, soil fertility, elevation, and latitude. The most extreme variability of litterfall is seen as a function of seasonality; each individual species of plant has seasonal losses of certain parts of its body, which can be determined by the collection and classification of plant litterfall throughout the year, and in turn affects the thickness of the litter layer. In tropical environments, the largest amount of debris falls in the latter part of the dry season and early in the wet season. As a result of this seasonal variability, the decomposition rate for any given area will also be variable. Latitude also has a strong effect on litterfall rates and thickness. Specifically, litterfall declines with increasing latitude. In tropical rainforests, there is a thin litter layer due to the rapid decomposition, while in boreal forests, the rate of decomposition is slower and leads to the accumulation of a thick litter layer, also known as a mor. Net primary production works inversely to this trend, suggesting that the accumulation of organic matter is mainly a result of decomposition rate. Surface detritus facilitates the capture and infiltration of rainwater into lower soil layers. The surface detritus also protects soil from excess drying and warming. Soil litter protects soil aggregates from raindrop impact, preventing the release of clay and silt particles that would otherwise plug soil pores. The release of clay and silt particles reduces the capacity of the soil to absorb water and increases cross-surface flow, accelerating soil erosion. In addition, soil litter reduces wind erosion by preventing the soil from losing moisture and by providing cover that prevents soil transport. Organic matter accumulation also helps protect soils from wildfire damage. Soil litter can be completely removed, depending on the intensity and severity of wildfires and the season. 
Regions with high-frequency wildfires have reduced vegetation density and reduced soil litter accumulation. Climate also influences the depth of plant litter. Typically, humid tropical and subtropical climates have reduced organic matter layers and horizons due to year-round decomposition and high vegetation density and growth. In temperate and cold climates, litter tends to accumulate and decompose more slowly due to a shorter growing season. Net primary productivity Net primary production and litterfall are intimately connected. In every terrestrial ecosystem, the largest fraction of all net primary production is lost to herbivores and litterfall. Due to their interconnectedness, global patterns of litterfall are similar to global patterns of net primary productivity. Plant litter, which can be made up of fallen leaves, twigs, seeds, flowers, and other woody debris, makes up a large portion of the aboveground net primary production of all terrestrial ecosystems. Fungi play a large role in cycling the nutrients from the plant litter back into the ecosystem. Habitat and food Litter provides habitat for a variety of organisms. Plants Certain plants are specially adapted for germinating and thriving in the litter layers. For example, bluebell (Hyacinthoides non-scripta) shoots puncture the layer to emerge in spring. Some plants with rhizomes, such as common wood sorrel (Oxalis acetosella), do well in this habitat. Detritivores and other decomposers Many organisms that live on the forest floor are decomposers, such as fungi. Organisms whose diet consists of plant detritus, such as earthworms, are termed detritivores. The community of decomposers in the litter layer also includes bacteria, amoebae, nematodes, rotifers, tardigrades, springtails, cryptostigmata, potworms, insect larvae, mollusks, oribatid mites, woodlice, and millipedes. Even some species of microcrustaceans, especially copepods (for instance Bryocyclops spp., Graeteriella spp., Olmeccyclops hondo, Moraria spp., Bryocamptus spp., Atheyella spp.), live in moist leaf litter habitats and play an important role as predators and decomposers. The consumption of the litterfall by decomposers results in the breakdown of simple carbon compounds into carbon dioxide (CO2) and water (H2O), and releases inorganic ions (like nitrogen and phosphorus) into the soil, where the surrounding plants can then reabsorb the nutrients that were shed as litterfall. In this way, litterfall becomes an important part of the nutrient cycle that sustains forest environments. As litter decomposes, nutrients are released into the environment. The portion of the litter that is not readily decomposable is known as humus. Litter aids in soil moisture retention by cooling the ground surface and holding moisture in decaying organic matter. The flora and fauna working to decompose soil litter also aid in soil respiration. A litter layer of decomposing biomass provides a continuous energy source for macro- and micro-organisms. Larger animals Numerous reptiles, amphibians, birds, and even some mammals rely on litter for shelter and forage. Amphibians such as salamanders and caecilians inhabit the damp microclimate underneath fallen leaves for part or all of their life cycle. This makes them difficult to observe. A BBC film crew captured footage of a female caecilian with young for the first time in a documentary that aired in 2008. Some species of birds, such as the ovenbird of eastern North America, require leaf litter both for foraging and as material for nests. 
Sometimes litterfall even provides energy to much larger mammals, such as in boreal forests where lichen litterfall is one of the main constituents of wintering deer and elk diets. Nutrient cycle During leaf senescence, a portion of the plant's nutrients are reabsorbed from the leaves. The nutrient concentrations in litterfall differ from those in the mature foliage because of the reabsorption of constituents during leaf senescence. Plants that grow in areas with low nutrient availability tend to produce litter with low nutrient concentrations, as a larger proportion of the available nutrients is reabsorbed. After senescence, the nutrient-enriched leaves become litterfall and settle on the soil below. Litterfall is the dominant pathway for nutrient return to the soil, especially for nitrogen (N) and phosphorus (P). The accumulation of these nutrients in the top layer of soil is known as soil immobilization. Once the litterfall has settled, decomposition of the litter layer, accomplished through the leaching of nutrients by rainfall and throughfall and by the efforts of detritivores, releases the breakdown products into the soil below and therefore contributes to the cation exchange capacity of the soil. This holds especially true for highly weathered tropical soils. Decomposition rate is tied to the type of litterfall present. Leaching is the process by which cations such as iron (Fe) and aluminum (Al), as well as organic matter, are removed from the litterfall and transported downward into the soil below. This process is known as podzolization and is particularly intense in boreal and cool temperate forests that are mainly composed of conifers whose litterfall is rich in phenolic compounds and fulvic acid. Through biological decomposition by microfauna, bacteria, and fungi, CO2, H2O, nutrient elements, and a decomposition-resistant organic substance called humus are released. Humus composes the bulk of the organic matter in the lower soil profile. The decline of nutrient ratios is also a function of decomposition of litterfall (i.e. as litterfall decomposes, more nutrients enter the soil below and the litter will have a lower nutrient ratio). Litterfall containing high nutrient concentrations will decompose more rapidly, with the rate levelling off as those nutrients are depleted. Knowing this, ecologists have been able to use nutrient concentrations as measured by remote sensing as an index of the potential rate of decomposition for any given area. Globally, data from various forest ecosystems show an inverse relationship between the decline in nutrient ratios and the apparent nutrient availability of the forest. Once nutrients have re-entered the soil, the plants can then reabsorb them through their roots. Therefore, nutrient reabsorption during senescence presents an opportunity for a plant's future net primary production. A relationship between nutrient stores can also be defined as: annual storage of nutrients in plant tissues + replacement of losses from litterfall and leaching = the amount of uptake in an ecosystem. Non-terrestrial litterfall Non-terrestrial litterfall follows a very different path. Litter is produced inland by terrestrial plants, from which it is moved to the coast by fluvial processes, and locally by mangrove ecosystems. Robertson and Daniel (1989) found that litter at the coast is then removed by the tide, crabs, and microbes. They also noticed that which of those three is most significant depends on the tidal regime. Nordhaus et al. 
(2011) found that crabs forage for leaves at low tide and that, where their detritivory is the predominant disposal route, they can take 80% of leaf material. Bakkar et al. (2017) studied the chemical contribution of the resulting crab defecation. They found that crabs pass a noticeable amount of undegraded lignin to both the sediments and the water. They also found that the carbonaceous contribution of each plant species can be traced in this way from the plant, through the crab, to its deposition in sediment or water. Crabs are usually the only significant macrofauna in this process; however, Raw et al. (2017) found that Terebralia palustris competes with crabs unusually vigorously in southeast Asia. Collection and analysis The main objectives of litterfall sampling and analysis are to quantify litterfall production and chemical composition over time in order to assess the variation in litterfall quantities, and hence its role in nutrient cycling across an environmental gradient of climate (moisture and temperature) and soil conditions. Ecologists employ a simple approach to the collection of litterfall, most of which centers around one piece of equipment, known as a litterbag. A litterbag is simply any type of container that can be set out in any given area for a specified amount of time to collect the plant litter that falls from the canopy above. Litterbags are generally set in random locations within a given area, marked with GPS or local coordinates, and then monitored at a specific time interval. Once the samples have been collected, they are usually classified by type, size and species (if possible) and recorded on a spreadsheet. When measuring bulk litterfall for an area, ecologists weigh the dry contents of the litterbag. By this method litterfall flux can be defined as: litterfall (kg m−2 yr−1) = total litter mass (kg) / litterbag area (m2). The litterbag may also be used to study decomposition of the litter layer. By confining fresh litter in the mesh bags and placing them on the ground, an ecologist can monitor and collect the decay measurements of that litter. An exponential decay pattern has been produced by this type of experiment: X = X0e^(−kt), where X0 is the initial mass of leaf litter, X is the mass remaining after time t, and k is a constant describing the fraction of detrital mass lost per unit time. The mass-balance approach is also utilized in these experiments and suggests that the decomposition for a given amount of time should equal the input of litterfall for that same amount of time: litterfall = k(detrital mass). Different litterbag mesh sizes are needed to study different groups of edaphic fauna. Issues Change due to invasive earthworms In some regions of glaciated North America, earthworms have been introduced where they are not native. Non-native earthworms have led to environmental changes by accelerating the rate of decomposition of litter. These changes are being studied, but may have negative impacts on some inhabitants such as salamanders. Forest litter raking Leaf litter accumulation depends on factors like wind, decomposition rate and species composition of the forest. The quantity, depth and humidity of leaf litter vary in different habitats. The leaf litter found in primary forests is more abundant, deeper and holds more humidity than in secondary forests. This condition also allows for a more stable leaf litter quantity throughout the year. This thin, delicate layer of organic material can be easily affected by humans. 
For instance, forest litter raking as a replacement for straw in husbandry is an old non-timber practice in forest management that has been widespread in Europe since the seventeenth century. In 1853, an estimated 50 Tg of dry litter per year was raked in European forests, when the practice reached its peak. This human disturbance, if not combined with other degradation factors, could promote podzolisation; if managed properly (for example, by burying litter removed after its use in animal husbandry), even the repeated removal of forest biomass may not have negative effects on pedogenesis.
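The litterbag calculations described in the Collection and analysis section above can be illustrated numerically. The following sketch is a minimal, hypothetical example rather than a reproduction of any published dataset: the litterbag area, collected masses, and time interval are invented, and the decay constant k is simply solved from the exponential model X = X0e^(−kt) given earlier.

```python
import math

# Hypothetical litterbag measurements (invented for illustration only).
litterbag_area_m2 = 0.25        # collecting area of one litterbag (m^2)
collection_period_yr = 1.0      # litter collected and summed over one full year
total_dry_mass_kg = 0.115       # total dry mass of litter caught in the bag (kg)

# Litterfall flux: total litter mass / (litterbag area x collection period).
# With a one-year collection this reduces to mass per unit area, matching the
# formula quoted in the article (kg m^-2 yr^-1).
litterfall_flux = total_dry_mass_kg / (litterbag_area_m2 * collection_period_yr)
print(f"litterfall flux: {litterfall_flux:.2f} kg m^-2 yr^-1")

# Decomposition: fresh litter confined in a mesh bag on the ground is re-weighed
# after an elapsed time, following the exponential model X = X0 * exp(-k * t).
initial_mass_g = 10.0           # X0, dry mass placed in the decomposition bag (g)
remaining_mass_g = 6.1          # X, dry mass remaining after the elapsed time (g)
elapsed_yr = 1.0                # t, elapsed time (yr)

# Solving X = X0 * exp(-k * t) for the decay constant k.
k = -math.log(remaining_mass_g / initial_mass_g) / elapsed_yr
print(f"decay constant k: {k:.2f} yr^-1")

# Steady-state mass balance: litterfall = k * (detrital mass), so the standing
# detrital mass implied by the flux and decay constant is flux / k.
detrital_mass = litterfall_flux / k
print(f"implied steady-state detrital mass: {detrital_mass:.2f} kg m^-2")
```

The last step simply inverts the mass-balance relation litterfall = k(detrital mass) quoted above, giving the standing litter mass that would be consistent with the measured flux and decay constant if the system were at steady state.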
Physical sciences
Soil science
Earth science
5381102
https://en.wikipedia.org/wiki/Ammonium%20ferric%20citrate
Ammonium ferric citrate
Ammonium ferric citrate (also known as ferric ammonium citrate or ammoniacal ferrous citrate) has the formula . The iron in this compound is trivalent. All three carboxyl groups and the central hydroxyl group of citric acid are deprotonated. A distinguishing feature of this compound is that it is very soluble in water, in contrast to ferric citrate which is not very soluble. In its crystal structure each moiety of citric acid has lost four protons. The deprotonated hydroxyl group and two of the carboxylate groups ligate to the ferric center, while the third carboxylate group coordinates with the ammonium. Uses Ammonium ferric citrate has a range of uses, including: As a food ingredient, it has an INS number 381, and is used as an acidity regulator. Most notably used in the Scottish beverage Irn-Bru. Water purification As a reducing agent of metal salts of low activity like gold and silver With potassium ferricyanide as part of the cyanotype photographic process Used in Kligler's Iron Agar (KIA) test to identify enterobacteriaceae bacteria by observing their metabolism of different sugars, producing hydrogen sulfide In medical imaging, ammonium ferric citrate is used as a contrast medium. As a hematinic
Physical sciences
Citrates
Chemistry