| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
9,494,086 | https://en.wikipedia.org/wiki/Marine%20larval%20ecology | Marine larval ecology is the study of the factors influencing dispersing larvae, which many marine invertebrates and fishes have. Marine animals with a larva typically release many larvae into the water column, where the larvae develop before metamorphosing into adults.
Marine larvae can disperse over long distances, although determining the actual distance is challenging, because of their size and the lack of a good tracking method. Knowing dispersal distances is important for managing fisheries, effectively designing marine reserves, and controlling invasive species.
Theories on the evolution of a biphasic life history
Larval dispersal is one of the most important topics in marine ecology today. Many marine invertebrates and many fishes have a biphasic life cycle with a pelagic larva or pelagic eggs that can be transported over long distances, and a demersal or benthic adult. There are several theories behind why these organisms have evolved this biphasic life history:
Larvae use different food sources than adults, which decreases competition between life stages.
Pelagic larvae can disperse large distances, colonize new territory, and move away from habitats that have become overcrowded or otherwise unsuitable.
A long pelagic larval phase can help a species to break its parasite cycles.
Pelagic larvae avoid benthic predators.
Dispersing as pelagic larvae can be risky. For example, while larvae do avoid benthic predators, they are still exposed to pelagic predators in the water column.
Larval development strategies
Marine larvae develop via one of three strategies: direct, lecithotrophic, or planktotrophic. Each strategy carries the risk of predation and the difficulty of finding a good settlement site.
Direct developing larvae look like the adult. They typically have very low dispersal potential and are known as "crawl-away larvae" because they crawl away from their egg after hatching. Some species of frogs and snails hatch this way.
Lecithotrophic larvae have greater dispersal potential than direct developers. Many fish species and some benthic invertebrates have lecithotrophic larvae, which have yolk droplets or a yolk sac for nutrition during dispersal. Some lecithotrophic species can also feed in the water column, but many, such as tunicates, cannot and must settle before depleting their yolk. Consequently, these species have short pelagic larval durations and do not disperse long distances.
Planktotrophic larvae feed while they are in the water column and can remain pelagic for a long time, and so disperse over long distances. This dispersal ability is a key adaptation of benthic marine invertebrates. Planktotrophic larvae feed on phytoplankton and small zooplankton, including other larvae. Planktotrophic development is the most common type of larval development, especially among benthic invertebrates.
Because planktotrophic larvae spend a long time in the water column and have a low probability of successful recruitment, early researchers developed the "lottery hypothesis", which states that animals release huge numbers of larvae to increase the chances that at least one will survive, and that larvae cannot influence their probability of success. This hypothesis views larval survival and successful recruitment as chance events, which numerous studies on larval behavior and ecology have since shown to be false. Though it has been generally disproved, the larval lottery hypothesis represents an important understanding of the difficulties faced by larvae during their time in the water column.
Predator defense
Predation is a major threat to marine larvae, which are an important food source for many organisms. Invertebrate larvae in estuaries are particularly at risk because estuaries are nursery grounds for planktivorous fishes. Larvae have evolved strategies to cope with this threat, including direct defense and avoidance.
Direct defense
Direct defense can include protective structures and chemical defenses. Most planktivorous fishes are gape-limited predators, meaning the size of their prey is determined by the width of their open mouths, making larger larvae difficult to ingest. One study demonstrated that spines serve a protective function by removing spines from estuarine crab larvae and comparing predation rates between de-spined and intact larvae. The study also showed that this defense is partly behavioral, as larvae can keep their spines relaxed but erect them in the presence of predators.
Avoidance
Larvae can avoid predators on small and large spatial scales. Some larvae do this by sinking when approached by a predator. A more common avoidance strategy is to become active at night and remain hidden during the day to avoid visual predators. Most larvae and plankton undertake diel vertical migrations between deeper waters with less light and fewer predators during the day and shallow waters in the photic zone at night, where microalgae are abundant. Estuarine invertebrate larvae avoid predators by developing in the open ocean, where there are fewer predators. This is done using reverse tidal vertical migrations. Larvae use tidal cycles and estuarine flow regimes to aid their departure to the ocean, a process that is well studied in many estuarine crab species.
An example of reverse tidal migration performed by crab species would begin with larvae being released on a nocturnal spring high tide to limit predation by planktivorous fishes. As the tide begins to ebb, larvae swim to the surface to be carried away from the spawning site. When the tide begins to flood, larvae swim to the bottom, where water moves more slowly due to the boundary layer. When the tide again changes back to ebb, the larvae swim to the surface waters and resume their journey to the ocean. Depending on the length of the estuary and the speed of the currents, this process can take anywhere from one tidal cycle to several days.
Dispersal and settlement
The most widely accepted theory explaining the evolution of a pelagic larval stage is the need for long-distance dispersal ability. Sessile and sedentary organisms such as barnacles, tunicates, and mussels require a mechanism to move their young into new territory, since they cannot move long distances as adults. Many species have relatively long pelagic larval durations on the order of weeks or months. During this time, larvae feed and grow, and many species metamorphose through several stages of development. For example, barnacles molt through six naupliar stages before becoming a cyprid and seeking appropriate settlement substrate.
This strategy can be risky. Some larvae have been shown to be able to delay their final metamorphosis for a few days or weeks, but most species cannot delay it at all. If these larvae metamorphose far from a suitable settlement site, they perish. Many invertebrate larvae have evolved complex behaviors and endogenous rhythms to ensure successful and timely settlement.
Many estuarine species exhibit swimming rhythms of reverse tidal vertical migration to aid in their transport away from their hatching site. Individuals can also exhibit tidal vertical migrations to reenter the estuary when they are competent to settle.
As larvae reach their final pelagic stage, they become much more tactile, clinging to anything larger than themselves. One study observed crab postlarvae and found that they would swim vigorously until they encountered a floating object, which they would cling to for the remainder of the experiment. It was hypothesized that by clinging to floating debris, crabs can be transported towards shore by the oceanographic forces of internal waves, which carry floating debris shoreward regardless of the prevailing currents.
Upon returning to shore, settlers encounter difficulties concerning their actual settlement and recruitment into the population. Space is a limiting factor for sessile invertebrates on rocky shores. Settlers must be wary of adult filter feeders, which cover substrate at settlement sites and eat particles the size of larvae. Settlers must also avoid becoming stranded out of water by waves, and must select a settlement site at the proper tidal height to prevent desiccation and avoid competition and predation. To overcome many of these difficulties, some species rely on chemical cues to assist them in selecting an appropriate settlement site. These cues are usually emitted by adult conspecifics, but some species cue on specific bacterial mats or other qualities of the substrate.
Larval sensory systems
Although a pelagic larva allows many species to increase their dispersal range and decrease the risk of inbreeding, it also brings challenges: marine larvae risk being washed away without ever finding a suitable habitat for settlement. Therefore, they have evolved many sensory systems:
Sensory systems
Magnetic fields
Far from shore, larvae are able to use magnetic fields to orient themselves towards the coast over large spatial scales. There is additional evidence that species can recognize anomalies in the magnetic field to return to the same location multiple times throughout their life. Though the mechanisms that these species use are poorly understood, it appears that magnetic fields play an important role in larval orientation offshore, where other cues such as sound and chemicals may be difficult to detect.
Vision and non-visual light perception
Phototaxis (the ability to differentiate between light and dark areas) is important for finding a suitable habitat. Phototaxis evolved relatively quickly, and taxa that lack developed eyes, such as scyphozoans, use phototaxis to find shaded areas to settle in away from predators.
Phototaxis is not the only mechanism that guides larvae by light. The larvae of the annelid Platynereis dumerilii not only show positive and negative phototaxis over a broad range of the light spectrum, but also swim down, towards the centre of gravity, when they are exposed to non-directional UV light. This behavior is a UV-induced positive gravitaxis. This gravitaxis and the negative phototaxis induced by light coming from the water surface form a ratio-metric depth gauge. Such a depth gauge is based on the different attenuation of light across the different wavelengths in water. In clear water, blue light (470 nm) penetrates the deepest, so the larvae need only compare the two wavelength ranges, UV/violet (< 420 nm) and the other wavelengths, to find their preferred depth.
Species that produce more complex larvae, such as fish, can use full vision to find a suitable habitat on small spatial scales. Larvae of damselfish use vision to find and settle near adults of their species.
Sound
Marine larvae use sound and vibrations to find a good habitat where they can settle and metamorphose into juveniles. This behavior has been seen in fish as well as in the larvae of scleractinian corals. Many families of coral reef fish are particularly attracted to high-frequency sounds produced by invertebrates, which larvae use as an indicator of food availability and complex habitat where they may be protected from predators. Larvae are thought to avoid low-frequency sounds because these may be associated with transient fish or predators and are therefore not a reliable indicator of safe habitat.
The spatial range at which larvae detect and use sound waves is still uncertain, though some evidence suggests that it may only be reliable at very small scales. There is concern that changes in community structure in nursery habitats, such as seagrass beds, kelp forests, and mangroves, could lead to a collapse in larval recruitment due to a decrease in sound-producing invertebrates. Other researchers argue that larvae may still successfully find a place to settle even if one cue is unreliable.
Olfaction
Many marine organisms use olfaction (chemical cues in the form of scent) to locate a safe area to metamorphose at the end of their larval stage. This has been shown in both vertebrates and invertebrates. Research has shown that larvae are able to distinguish between water from the open ocean and water from more suitable nursery habitats such as lagoons and seagrass beds. Chemical cues can be extremely useful for larvae, but may not have a constant presence, as water input can depend on currents and tidal flow.
Human impacts on sensory systems
Recent research in the field of larval sensory biology has begun focusing more on how human impacts and environmental disturbance affect settlement rates and larval interpretation of different habitat cues. Ocean acidification due to anthropogenic climate change and sedimentation have become areas of particular interest.
Ocean acidification
Although several behaviours of coral reef fish, including larvae, had been found in previous experiments to be detrimentally affected by projected end-of-21st-century ocean acidification, a 2020 replication study found that "end-of-century ocean acidification levels have negligible effects on [three] important behaviours of coral reef fishes" and, with "data simulations, [showed] that the large effect sizes and small within-group variances that have been reported in several previous studies are highly improbable". In 2021, some of the previous studies on coral reef fish behaviour changes were accused of being fraudulent. Furthermore, effect sizes of studies assessing ocean acidification effects on fish behaviour have declined dramatically over a decade of research on this topic, with effects appearing negligible since 2015.
Ocean acidification has been shown to alter the way that pelagic larvae process information, as well as the production of the cues themselves. Acidification can alter larval interpretation of sounds, particularly in fish, leading to settlement in suboptimal habitat. Though the mechanism for this process is still not fully understood, some studies indicate that this breakdown may be due to a decrease in the size or density of their otoliths. Furthermore, sounds produced by invertebrates that larvae rely on as an indicator of habitat quality can also change due to acidification. For example, snapping shrimp produce different sounds that larvae may not recognize under acidified conditions, due to differences in shell calcification.
Hearing is not the only sense that may be altered under future ocean chemistry conditions. Evidence also suggests that larval ability to process olfactory cues was also affected when tested under future pH conditions. Red color cues that coral larvae use to find crustose coralline algae, with which they have a commensal relationship, may also be in danger due to algal bleaching.
Sedimentation
Sediment runoff, from natural storm events or human development, can also impact larval sensory systems and survival. One study focusing on red soil found that increased turbidity due to runoff negatively influenced the ability of fish larvae to interpret visual cues. More unexpectedly, the study also found that red soil can impair olfactory capabilities.
Self-recruitment
Marine ecologists are often interested in the degree of self-recruitment in populations. Historically, larvae were considered passive particles that were carried by ocean currents to faraway locations. This led to the belief that all marine populations were demographically open, connected by long distance larval transport. Recent work has shown that many populations are self-recruiting, and that larvae and juveniles are capable of purposefully returning to their natal sites.
Researchers take a variety of approaches to estimating population connectivity and self-recruitment, and several studies have demonstrated their feasibility. Jones et al. and Swearer et al., for example, investigated the proportion of fish larvae returning to their natal reef. Both studies found higher than expected self-recruitment in these populations using mark, release, and recapture sampling. These studies were the first to provide conclusive evidence of self-recruitment in a species with the potential to disperse far from its natal site, and laid the groundwork for numerous future studies.
Conservation
Ichthyoplankton have a high mortality rate as they transition their food source from yolk sac to zooplankton. It is proposed that this mortality rate is related to food supply as well as an inability to move through the water effectively at this stage of development, leading to starvation. Turbidity of water can also impact the organisms' ability to feed even when there is a high density of prey. Reducing hydrodynamic constraints on cultivated populations could lead to higher yields for repopulation efforts and has been proposed as a means of conserving fish populations by acting at the larval level.
A network of marine reserves has been initiated for the conservation of the world's marine larval populations. These areas restrict fishing and therefore increase the abundance of species that would otherwise be fished. This leads to a healthier ecosystem and affects the overall number of species within the reserve as compared to nearby fished areas; however, the full effect of an increase in larger predator fish on larval populations is not currently known. Also, the potential for utilizing the motility of fish larvae to repopulate the water surrounding the reserve is not fully understood. Marine reserves are a part of a growing conservation effort to combat overfishing; however, reserves still only comprise about 1% of the world's oceans. These reserves are also not protected from other human-derived threats, such as chemical pollutants, so they cannot be the only method of conservation without certain levels of protection for the surrounding water as well.
For effective conservation, it is important to understand the larval dispersal patterns of the species in danger, as well as the dispersal of invasive species and predators which could impact their populations. Understanding these patterns is an important factor when creating protocol for governing fishing and creating reserves. A single species may have multiple dispersal patterns. The spacing and size of marine reserves must reflect this variability to maximize their beneficial effect. Species with shorter dispersal patterns are more likely to be affected by local changes and require higher priority for conservation because of the separation of subpopulations.
Implications
The principles of marine larval ecology can be applied in other fields, too, whether marine or not. Successful fisheries management relies heavily on understanding population connectivity and dispersal distances, which are driven by larvae. Dispersal and connectivity must also be considered when designing natural reserves. If populations are not self-recruiting, reserves may lose their species assemblages. Many invasive species can disperse over long distances, including the seeds of land plants and larvae of marine invasive species. Understanding the factors influencing their dispersal is key to controlling their spread and managing established populations.
See also
Crustacean larvae
Ichthyoplankton
References
Marine biology | Marine larval ecology | [
"Biology"
] | 3,666 | [
"Marine biology"
] |
9,494,348 | https://en.wikipedia.org/wiki/Deferiprone | Deferiprone, sold under the brand name Ferriprox among others, is a medication that chelates iron and is used to treat iron overload in thalassaemia major. It was first approved and indicated for use in treating thalassaemia major in 1994 and had been licensed for use in the European Union for many years while awaiting approval in Canada and in the United States. On 14 October 2011, it was approved for use in the US under the FDA's accelerated approval program.
The most common side effects include red-brown urine (showing that iron is being removed through the urine), nausea (feeling sick), abdominal pain (stomach ache) and vomiting. Less common but more serious side effects are agranulocytosis (very low levels of granulocytes, a type of white blood cell) and neutropenia (low levels of neutrophils, a type of white blood cell that fights infections).
Medical uses
Deferiprone monotherapy is indicated in the European Union for the treatment of iron overload in those with thalassaemia major when current chelation therapy is contraindicated or inadequate.
Deferiprone in combination with another chelator is indicated in the European Union in those with thalassaemia major when monotherapy with any iron chelator is ineffective, or when prevention or treatment of life-threatening consequences of iron overload (mainly cardiac overload) justifies rapid or intensive correction.
Researchers have found that the oral drug deferiprone reactivates the "altruistic suicide response" of an HIV-infected cell, killing the HIV RNA it carries. Effective suppression of HIV-1 generation and induction of apoptosis both require deferiprone at a concentration around 150 μM in infected T-cell lines. Since a 0.5 log10 decrement in HIV-1 RNA corresponds to an additional 2 years of AIDS-free survival and a 0.3 log10 decrement reduces the annual risk of progression to AIDS-related death by 25%, the measurements suggested biological significance.
Controversy
Deferiprone was at the center of a protracted struggle between Nancy Olivieri, a Canadian haematologist and researcher, and the Hospital for Sick Children and the pharmaceutical company Apotex, that started in 1996, and delayed approval of the drug in North America. Olivieri's data suggested that deferiprone can lead to progressive liver failure.
History
Deferiprone was approved for medical use in the European Union in August 1999.
It was approved for medical use in the United States in October 2011. Generic versions were approved in August 2019.
The safety and effectiveness of deferiprone is based on an analysis of data from twelve clinical studies in 236 participants. Participants in the study did not respond to prior iron chelation therapy. Deferiprone was considered a successful treatment for participants who experienced at least a 20 percent decrease in serum ferritin, a protein that stores iron in the body for later use. Half of the participants in the study experienced at least a 20 percent decrease in ferritin levels.
References
Chelating agents
Chelating agents used as drugs
Antidotes
4-Pyridones | Deferiprone | [
"Chemistry"
] | 658 | [
"Chelating agents",
"Process chemicals"
] |
2,227,366 | https://en.wikipedia.org/wiki/Photoheterotroph | Photoheterotrophs (Gk: photo = light, hetero = (an)other, troph = nourishment) are heterotrophic phototrophs—that is, they are organisms that use light for energy, but cannot use carbon dioxide as their sole carbon source. Consequently, they use organic compounds from the environment to satisfy their carbon requirements; these compounds include carbohydrates, fatty acids, and alcohols. Examples of photoheterotrophic organisms include purple non-sulfur bacteria, green non-sulfur bacteria, and heliobacteria. These microorganisms are ubiquitous in aquatic habitats, occupy unique niche-spaces, and contribute to global biogeochemical cycling. Recent research has also indicated that the oriental hornet and some aphids may be able to use light to supplement their energy supply.
Research
Studies have shown that mammalian mitochondria can also capture light and synthesize ATP when mixed with pheophorbide, a light-capturing metabolite of chlorophyll. Research demonstrated that the same metabolite when fed to the worm Caenorhabditis elegans leads to increase in ATP synthesis upon light exposure, along with an increase in life span.
Furthermore, inoculation experiments suggest that mixotrophic Ochromonas danica (i.e., Golden algae)—and comparable eukaryotes—favor photoheterotrophy in oligotrophic (i.e., nutrient-limited) aquatic habitats. This preference may increase energy-use efficiency and growth by reducing investment in inorganic carbon fixation (e.g., production of autotrophic machineries such as RuBisCo and PSII).
Metabolism
Photoheterotrophs generate ATP using light, in one of two ways: they use a bacteriochlorophyll-based reaction center, or they use a bacteriorhodopsin. The chlorophyll-based mechanism is similar to that used in photosynthesis, where light excites the molecules in a reaction center and causes a flow of electrons through an electron transport chain (ETS). This flow of electrons through the proteins causes hydrogen ions to be pumped across a membrane. The energy stored in this proton gradient is used to drive ATP synthesis. Unlike in photoautotrophs, the electrons flow only in a cyclic pathway: electrons released from the reaction center flow through the ETS and return to the reaction center. They are not utilized to reduce any organic compounds. Purple non-sulfur bacteria, green non-sulfur bacteria, and heliobacteria are examples of bacteria that carry out this scheme of photoheterotrophy.
Other organisms, including halobacteria, flavobacteria and vibrios, have purple-rhodopsin-based proton pumps that supplement their energy supply. The archaeal version is called bacteriorhodopsin, while the eubacterial version is called proteorhodopsin. The pump consists of a single protein bound to a Vitamin A derivative, retinal. The pump may have accessory pigments (e.g., carotenoids) associated with the protein. When light is absorbed by the retinal molecule, the molecule isomerises. This drives the protein to change shape and pump a proton across the membrane. The hydrogen ion gradient can then be used to generate ATP, transport solutes across the membrane, or drive a flagellar motor. One particular flavobacterium cannot reduce carbon dioxide using light, but uses the energy from its rhodopsin system to fix carbon dioxide through anaplerotic fixation. The flavobacterium is still a heterotroph as it needs reduced carbon compounds to live and cannot subsist on only light and CO2. It cannot carry out reactions in the form of
n CO2 + 2n H2D + photons → (CH2O)n + 2n D + n H2O,
where H2D may be water, H2S or another compound/compounds providing the reducing electrons and protons; the 2D + H2O pair represents an oxidized form.
However, it can fix carbon in reactions like:
CO2 + pyruvate + ATP (from photons) → malate + ADP +Pi
where malate or other useful molecules are otherwise obtained by breaking down other compounds by
carbohydrate + O2 → malate + CO2 + energy.
This method of carbon fixation is useful when reduced carbon compounds are scarce and cannot be wasted as CO2 during interconversions, but energy is plentiful in the form of sunlight.
Ecology
Distribution and niche partitioning
Photoheterotrophs—either 1) cyanobacteria (i.e. facultative heterotrophs in nutrient-limited environments like Synechococcus and Prochlorococcus), 2) aerobic anoxygenic photoheterotrophic bacteria (AAP; employing bacteriochlorophyll-based reaction centers), 3) proteorhodopsin (PR)-containing bacteria and archaea, and 4) heliobacteria (i.e., the only phototroph with bacteriochlorophyll g pigments, or Gram-positive membrane) are found in various aquatic habitats including oceans, stratified lakes, rice fields, and environmental extremes.
In oceans' photic zones, up to 10% of bacterial cells are capable of AAP, whereas more than 50% of marine microorganisms house PR—reaching up to 90% in coastal biomes. As demonstrated in inoculation experiments, photoheterotrophy may provide these planktonic microbes competitive advantages 1) relative to chemoheterotrophs in oligotrophic (i.e., nutrient-poor) environments via increased nutrient use-efficiency (i.e., organic carbon preferentially fuels biosynthesis rather than energy production) and 2) by eliminating investment in physiologically costly autotrophic enzymes/complexes (RuBisCo and PSII). Furthermore, in Arctic oceans, AAP and PR photoheterotrophs are prominent in ice-covered regions during wintertime owing to light scarcity. Lastly, seasonal turnover has been observed in marine AAPs as ecotypes (i.e., genetically similar taxa with differing functional trait and/or environmental preferences) segregate into temporal niches.
In stratified (i.e., euxinic) lakes, photoheterotrophs—alongside other anoxygenic phototrophs (e.g., purple/green sulfur bacteria fixing carbon dioxide via electron donors such as ferrous iron, sulfide, and hydrogen gas)—often occupy the chemocline in the water column and/or sediments. In this zone, dissolved oxygen is reduced, light is limited to long wavelengths (e.g., red and infrared) left-over by oxygenic phototrophs (e.g., cyanobacteria), and anaerobic metabolisms (i.e., those occurring in the absence of oxygen) begin introducing sulfide and bioavailable nutrients (e.g., organic carbon, phosphate, and ammonia) through upward diffusion.
Heliobacteria are obligate anaerobes primarily located in rice fields, where low sulfide concentrations prevent competitive exclusion of purple/green sulfur bacteria. These waterlogged environments may facilitate symbiotic relationships between heliobacteria and rice plants as fixed nitrogen—from the former—is exchanged for carbon-rich root exudates.
Observation studies have characterized photoheterotrophs (e.g., Green non-sulfur bacteria such as Chloroflexi and AAPs) within photosynthetic mats at environmental extremes (e.g., hot springs and hypersaline lagoons). Notably, temperature and pH drive anoxygenic phototroph community composition in Yellowstone National Park's geothermal features. In addition, various, light-dependent niches in the Great Salt Lake's hypersaline mats support phototrophic diversity as microbes optimize energy production and combat osmotic stress.
Biogeochemical cycling
Photoheterotrophs influence global carbon cycling by assimilating dissolved organic carbon (DOC). When they harvest light energy, carbon is retained in the microbial loop without the corresponding respiration (i.e., carbon dioxide release to the atmosphere as DOC is oxidized to fuel energy production). This disconnect, the discovery of facultative photoheterotrophs (e.g., AAPs with flexible energy sources), and previous measurements taken in the dark (i.e., to avoid skewed oxygen consumption values due to photooxidation, UV light, and oxygenic photosynthesis) have led to overestimates of aquatic emissions. For example, a 15.2% decrease in community respiration observed in Cep Lake, Czechia—alongside preferential glucose and pyruvate uptake—is attributed to facultative photoheterotrophs preferring light energy during the daytime, given the fitness benefits mentioned previously.
See also
Primary nutritional groups
References
Sources
Biology terminology
Trophic ecology
Microbial growth and nutrition | Photoheterotroph | [
"Biology"
] | 1,960 | [
"nan"
] |
2,227,469 | https://en.wikipedia.org/wiki/Lake%20retention%20time | Lake retention time (also called the residence time of lake water, or the water age or flushing time) is a calculated quantity expressing the mean time that water (or some dissolved substance) spends in a particular lake. At its simplest, this figure is the result of dividing the lake volume by the flow in or out of the lake. It roughly expresses the amount of time taken for a substance introduced into a lake to flow out of it again. The retention time is particularly important where downstream flooding or pollutants are concerned.
Global retention time
The global retention time for a lake (the overall mean time that water spends in the lake) is calculated by dividing the lake volume by either the mean rate of inflow of all tributaries, or by the mean rate of outflow (ideally including evaporation and seepage). This metric assumes that water in the lake is well-mixed (rather than stratified), so that any portion of the lake water is much like any other. In reality, larger and deeper lakes are generally not well-mixed. Many large lakes can be divided into distinct portions with only limited flow between them. Deep lakes are generally stratified, with deeper water mixing infrequently with surface water. These are often better modeled as several distinct sub-volumes of water.
More specific residence times
It is possible to calculate more specific residence time figures for a particular lake, such as individual residence times for sub-volumes (e.g. particular arms), or a residence time distribution for the various layers of a stratified lake. These figures can often better express the hydrodynamics of the lake. However, any such approach remains a simplification and must be guided by an understanding of the processes operating in the lake.
Two approaches can be used (often in combination) to elucidate how a particular lake works: field measurements and mathematical modeling. One common technique for field measurement is to introduce a tracer into the lake and monitor its movement. This can be a solid tracer, such as a float constructed to be neutrally buoyant within a particular water layer, or sometimes a liquid. This approach is sometimes referred to as using a Lagrangian reference frame. Another field measurement approach, using an Eulerian reference frame, is to capture various properties of the lake water (including mass movement, water temperature, electrical conductivity and levels of dissolved substances, typically oxygen) at various fixed positions in the lake. From these can be constructed an understanding of the dominant processes operating in the various parts of the lake and their range and duration.
Field measurements alone are usually not a reliable basis for generating residence times, mainly because they necessarily represent a small subset of locations and conditions. Therefore, the measurements are generally used as the input for numerical models. In theory it would be possible to integrate a system of hydrodynamic equations with variable boundary conditions over a very long period sufficient for inflowing water particles to exit the lake. One could then calculate the traveling times of the particles using a Lagrangian method. However, this approach exceeds the detail available in current hydrodynamic models and the capacity of current computer resources. Instead, residence time models developed for gas and fluid dynamics, chemical engineering, and bio-hydrodynamics can be adapted to generate residence times for sub-volumes of lakes.
Renewal time
One useful mathematical model is the measurement of how quickly inflows are able to refill a lake. Renewal time is a specific measure of retention time, where the focus is on how long it takes to completely replace all the water in a lake. This modeling can only be done with an accurate budget of all water gained and lost by the system. Renewal time is simply the question of how quickly the lake's inflows could fill the entire volume of the basin (this still assumes the outflows are unchanged). For example, if Lake Michigan were emptied, it would take 99 years for its tributaries to completely refill the lake.
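As a rough illustration of these calculations, the Python sketch below (not from the source; the function name and the lake figures are hypothetical placeholders) computes a global retention time, which, for unchanged outflows, is also the renewal time:

def retention_time_years(volume_km3, mean_flow_km3_per_year):
    # Global retention time: lake volume divided by mean inflow (or outflow).
    return volume_km3 / mean_flow_km3_per_year

# Hypothetical example: a 5,000 km3 lake fed by 50 km3 of inflow per year
# would need about 100 years for its tributaries to refill it from empty.
print(retention_time_years(5000, 50))  # -> 100.0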
List of residence times of lake water
The residence time listed is taken from the infobox in the associated article unless otherwise specified.
See also
Water cycle: Residence times
References
Further reading
External links
EPA's Great Lakes Factsheet #1
EPA's Great Lakes Atlas
- relationship between residence time of lakes of New Zealand and koaro, smelt and common bully populations.
Aquatic ecology
Retention time | Lake retention time | [
"Biology",
"Environmental_science"
] | 895 | [
"Lakes",
"Aquatic ecology",
"Ecosystems",
"Hydrology"
] |
2,227,485 | https://en.wikipedia.org/wiki/Recursive%20data%20type | In computer programming languages, a recursive data type (also known as a recursively-defined, inductively-defined or inductive data type) is a data type for values that may contain other values of the same type. Data of recursive types are usually viewed as directed graphs.
An important application of recursion in computer science is in defining dynamic data structures such as Lists and Trees. Recursive data structures can dynamically grow to an arbitrarily large size in response to runtime requirements; in contrast, a static array's size requirements must be set at compile time.
Sometimes the term "inductive data type" is used for algebraic data types which are not necessarily recursive.
Example
An example is the list type, in Haskell:
data List a = Nil | Cons a (List a)
This indicates that a list of a's is either an empty list or a cons cell containing an 'a' (the "head" of the list) and another list (the "tail").
Another example is a similar singly linked type in Java:
class List<E> {
E value;
List<E> next;
}
This indicates that a non-empty list of type E contains a data member of type E, and a reference to another List object for the rest of the list (or a null reference to indicate that this is the end of the list).
Mutually recursive data types
Data types can also be defined by mutual recursion. The most important basic example of this is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically:
f: [t[1], ..., t[k]]
t: v f
A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types.
This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest:
t: v [t[1], ..., t[k]]
A tree t consists of a pair of a value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which requires disentangling to prove results about.
In Standard ML, the tree and forest data types can be mutually recursively defined as follows, allowing empty trees:
datatype 'a tree = Empty | Node of 'a * 'a forest
and 'a forest = Nil | Cons of 'a tree * 'a forest
In Haskell, the tree and forest data types can be defined similarly:
data Tree a = Empty
            | Node (a, Forest a)
data Forest a = Nil
              | Cons (Tree a) (Forest a)
Theory
In type theory, a recursive type has the general form μα.T where the type variable α may appear in the type T and stands for the entire type itself.
For example, the natural numbers (see Peano arithmetic) may be defined by the Haskell datatype:
data Nat = Zero | Succ Nat
In type theory, we would say: Nat = μα. 1 + α, where the two arms of the sum type represent the Zero and Succ data constructors. Zero takes no arguments (thus represented by the unit type 1) and Succ takes another Nat (thus another element of μα. 1 + α).
There are two forms of recursive types: the so-called isorecursive types, and equirecursive types. The two forms differ in how terms of a recursive type are introduced and eliminated.
Isorecursive types
With isorecursive types, the recursive type μα.T and its expansion (or unrolling) T[μα.T/α] (where the notation X[Y/Z] indicates that all instances of Z are replaced with Y in X) are distinct (and disjoint) types with special term constructs, usually called roll and unroll, that form an isomorphism between them. To be precise: roll : T[μα.T/α] → μα.T and unroll : μα.T → T[μα.T/α], and these two are inverse functions.
Equirecursive types
Under equirecursive rules, a recursive type and its unrolling are equal – that is, those two type expressions are understood to denote the same type. In fact, most theories of equirecursive types go further and essentially specify that any two type expressions with the same "infinite expansion" are equivalent. As a result of these rules, equirecursive types contribute significantly more complexity to a type system than isorecursive types do. Algorithmic problems such as type checking and type inference are more difficult for equirecursive types as well. Since direct comparison does not make sense on an equirecursive type, they can be converted into a canonical form in O(n log n) time, which can easily be compared.
Isorecursive types capture the form of self-referential (or mutually referential) type definitions seen in nominal object-oriented programming languages, and also arise in type-theoretic semantics of objects and classes. In functional programming languages, isorecursive types (in the guise of datatypes) are common too.
Recursive type synonyms
In TypeScript, recursion is allowed in type aliases. Thus, the following example is allowed.
type Tree = number | Tree[];
let tree: Tree = [1, [2, 3]];
However, recursion is not allowed in type synonyms in Miranda, OCaml (unless -rectypes flag is used or it's a record or variant), or Haskell; so, for example the following Haskell types are illegal:
type Bad = (Int, Bad)
type Evil = Bool -> Evil
Instead, they must be wrapped inside an algebraic data type (even if it has only one constructor):
data Good = Pair Int Good
data Fine = Fun (Bool -> Fine)
This is because type synonyms, like typedefs in C, are replaced with their definition at compile time. (Type synonyms are not "real" types; they are just "aliases" for convenience of the programmer.) But if this is attempted with a recursive type, it will loop infinitely because no matter how many times the alias is substituted, it still refers to itself, e.g. "Bad" will grow indefinitely: Bad → (Int, Bad) → (Int, (Int, Bad)) → ... .
Another way to see it is that a level of indirection (the algebraic data type) is required to allow the isorecursive type system to figure out when to roll and unroll.
See also
Recursive definition
Algebraic data type
Inductive type
Node (computer science)
References
Sources
Data types
Type theory | Recursive data type | [
"Mathematics"
] | 1,488 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
2,227,503 | https://en.wikipedia.org/wiki/Langmuir%E2%80%93Blodgett%20film | A Langmuir–Blodgett (LB) film is an emerging kind of 2D materials to fabricate heterostructures for nanotechnology, formed when Langmuir films—or Langmuir monolayers (LM)—are transferred from the liquid-gas interface to solid supports during the vertical passage of the support through the monolayers. LB films can contain one or more monolayers of an organic material, deposited from the surface of a liquid onto a solid by immersing (or emersing) the solid substrate into (or from) the liquid. A monolayer is adsorbed homogeneously with each immersion or emersion step, thus films with very accurate thickness can be formed. This thickness is accurate because the thickness of each monolayer is known and can therefore be added to find the total thickness of a Langmuir–Blodgett film.
The monolayers are assembled vertically and are usually composed either of amphiphilic molecules (see chemical polarity) with a hydrophilic head and a hydrophobic tail (example: fatty acids) or nowadays commonly of nanoparticles.
Langmuir–Blodgett films are named after Irving Langmuir and Katharine B. Blodgett, who invented this technique while working in Research and Development for General Electric Co.
Historical background
Advances to the discovery of LB and LM films began with Benjamin Franklin in 1773 when he dropped about a teaspoon of oil onto a pond. Franklin noticed that the waves were calmed almost instantly and that the calming of the waves spread for about half an acre. What Franklin did not realize was that the oil had formed a monolayer on top of the pond surface. Over a century later, Lord Rayleigh quantified what Benjamin Franklin had seen. Knowing that the oil, oleic acid, had spread evenly over the water, Rayleigh calculated that the thickness of the film was 1.6 nm by knowing the volume of oil dropped and the area of coverage.
With the help of her kitchen sink, Agnes Pockels showed that area of films can be controlled with barriers. She added that surface tension varies with contamination of water. She used different oils to deduce that surface pressure would not change until area was confined to about 0.2 nm2. This work was originally written as a letter to Lord Rayleigh who then helped Agnes Pockels become published in the journal, Nature, in 1891.
Agnes Pockels’ work set the stage for Irving Langmuir who continued to work and confirmed Pockels’ results. Using Pockels’ idea, he developed the Langmuir (or Langmuir–Blodgett) trough. His observations indicated that chain length did not impact the affected area since the organic molecules were arranged vertically.
Langmuir's breakthrough did not occur until he hired Katharine Blodgett as his assistant. Blodgett initially sought a job at General Electric (GE) with Langmuir during the Christmas break of her senior year at Bryn Mawr College, where she received a BA in physics. Langmuir advised Blodgett to continue her education before working for him. She thereafter attended the University of Chicago for her MA in chemistry. Upon completion of her master's, Langmuir hired her as his assistant. However, breakthroughs in surface chemistry happened after she received her PhD degree in 1926 from Cambridge University.
While working for GE, Langmuir and Blodgett discovered that when a solid surface is inserted into an aqueous solution containing organic moieties, the organic molecules will deposit a monolayer homogeneously over the surface. This is the Langmuir–Blodgett film deposition process. Through this work in surface chemistry and with the help of Blodgett, Langmuir was awarded the Nobel Prize in 1932. In addition, Blodgett used Langmuir–Blodgett film to create 99% transparent anti-reflective glass by coating glass with fluorinated organic compounds, forming a simple anti-reflective coating.
Physical insight
Langmuir films are formed when amphiphilic (surfactants) molecules or nanoparticles are spread on the water at an air–water interface. Surfactants (or surface-acting agents) are molecules with hydrophobic 'tails' and hydrophilic 'heads'. When surfactant concentration is less than the minimum surface concentration of collapse and it is completely insoluble in water, the surfactant molecules arrange themselves as shown in Figure 1 below. This tendency can be explained by surface-energy considerations. Since the tails are hydrophobic, their exposure to air is favoured over that to water. Similarly, since the heads are hydrophilic, the head–water interaction is more favourable than head-air interaction. The overall effect is reduction in the surface energy (or equivalently, surface tension of water).
For very small concentrations, far from the surface density compatible with the collapse of the monolayer (which leads to polylayer structures), the surfactant molecules execute a random motion on the water–air interface. This motion can be thought of as similar to the motion of ideal-gas molecules enclosed in a container. The corresponding thermodynamic variables for the surfactant system are surface pressure (Π), surface area (A) and number of surfactant molecules (N). This system behaves similarly to a gas in a container. The density of surfactant molecules, as well as the surface pressure, increases upon reducing the surface area A ('compression' of the 'gas'). Further compression of the surfactant molecules on the surface shows behavior similar to phase transitions. The 'gas' gets compressed into 'liquid' and ultimately into a perfectly close-packed array of the surfactant molecules on the surface corresponding to a 'solid' state. The liquid state is usually separated into the liquid-expanded and liquid-condensed states. All the Langmuir film states are classified according to the compressibility of the films, defined as C = −(1/A)(∂A/∂Π), usually related to the in-plane elasticity of the monolayer.
The condensed Langmuir films (at surface pressures usually higher than 15 mN/m – typically 30 mN/m) can be subsequently transferred onto a solid substrate to create highly organized thin film coatings.
Langmuir–Blodgett troughs
Besides LB films from surfactants, as depicted in Figure 1, similar monolayers can also be made from inorganic nanoparticles.
Pressure–area characteristics
Adding a monolayer to the surface reduces the surface tension, and the surface pressure, Π, is given by the following equation:
Π = γ₀ − γ
where γ₀ is equal to the surface tension of the water and γ is the surface tension due to the monolayer. But the concentration-dependence of surface tension (similar to the Langmuir isotherm) is as follows:
γ = γ₀ − (n/A)RT
where n is the amount (in moles) of surfactant on the surface of area A, R is the gas constant and T the temperature. Thus,
Π = (n/A)RT
or
ΠA = nRT
The last equation indicates a relationship similar to ideal gas law. However, the concentration-dependence of surface tension is valid only when the solutions are dilute and concentrations are low. Hence, at very low concentrations of the surfactant, the molecules behave like ideal gas molecules.
Experimentally, the surface pressure is usually measured using the Wilhelmy plate. A pressure sensor/electrobalance arrangement detects the pressure exerted by the monolayer. Also monitored is the area to the side of the barrier on which the monolayer resides.
Figure 2. A Wilhelmy plate
A simple force balance on the plate leads to the following equation for the surface pressure:
Π = ΔF / (2w)
only when w ≫ t. Here, w and t are the dimensions (width and thickness) of the plate, and ΔF is the difference in forces. The Wilhelmy plate measurements give pressure–area isotherms that show phase transition-like behaviour of the LM films, as mentioned before (see figure below). In the gaseous phase, there is minimal pressure increase for a decrease in area. This continues until the first transition occurs and there is a proportional increase in pressure with decreasing area. Moving into the solid region is accompanied by another sharp transition to a more severely area-dependent pressure. This trend continues up to a point where the molecules are relatively close-packed and have very little room to move. Applying an increasing pressure at this point makes the monolayer unstable and destroys it, forming polylayer structures towards the air phase. The surface pressure during the monolayer collapse may remain approximately constant (in a process near equilibrium) or may decay abruptly (out of equilibrium – when the surface pressure was over-increased because lateral compression was too fast for monomolecular rearrangements).
Figure 3. (i) Surface pressure – Area isotherms. (ii) Molecular configuration in the three regions marked in the -A curve; (a) gaseous phase, (b) liquid-expanded phase, and (c) condensed phase. (Adapted from Osvaldo N. Oliveira Jr., Brazilian Journal of Physics, vol. 22, no. 2, June 1992)
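As a rough illustration of the Wilhelmy plate relation above, the Python sketch below converts a measured force difference into a surface pressure. It uses the wetted-perimeter form Π = ΔF / (2(w + t)), which reduces to ΔF / (2w) when w ≫ t; the plate dimensions, force value and function name are assumptions chosen for the example, not values from the text.

def surface_pressure(delta_force_N, width_m, thickness_m=0.0):
    # Surface pressure (N/m) from the change in force on a Wilhelmy plate:
    # Pi = dF / (2 * (w + t)), approximately dF / (2 * w) when w >> t.
    return delta_force_N / (2.0 * (width_m + thickness_m))

# Assumed example: a 19.6 mm wide, very thin plate and a 1.2 mN change in force
# give a surface pressure of about 30 mN/m, in the range quoted above for LB transfer.
pi = surface_pressure(delta_force_N=1.2e-3, width_m=19.6e-3)
print(round(pi * 1000, 1))  # -> 30.6 (mN/m)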
Applications
Many possible applications have been suggested over the years for LM and LB films. Their characteristic features are extreme thinness and a high degree of structural order. Films composed of specific organic compounds exhibit particular optical, electrical and biological properties. Organic compounds usually respond more strongly than inorganic materials to outside factors (pressure, temperature or gas changes). LM films can also be used as models for one half of a cellular membrane.
LB films consisting of nanoparticles can be used for example to create functional coatings, sophisticated sensor surfaces and to coat silicon wafers.
LB films can be used as passive layers in MIS (metal–insulator–semiconductor) structures; they have a more open structure than silicon oxide and allow gases to penetrate to the interface more effectively.
LB films also can be used as biological membranes. Lipid molecules with the fatty acid moiety of long carbon chains attached to a polar group have received extended attention because of being naturally suited to the Langmuir method of film production. This type of biological membrane can be used to investigate: the modes of drug action, the permeability of biologically active molecules, and the chain reactions of biological systems.
Also, it is possible to propose field effect devices for observing the immunological response and enzyme-substrate reactions by collecting biological molecules such as antibodies and enzymes in insulating LB films.
Anti-reflective glass can be produced with successive layers of fluorinated organic film.
A glucose biosensor can be made from a poly(3-hexylthiophene) Langmuir–Blodgett film, which entraps glucose oxidase and is transferred to a coated indium tin oxide glass plate.
UV resists can be made of poly(N-alkylmethacrylamides) Langmuir–Blodgett film.
UV light and conductivity of a Langmuir–Blodgett film.
Langmuir–Blodgett films are inherently 2D-structures and can be built up layer by layer, by dipping hydrophobic or hydrophilic substrates into a liquid sub-phase.
Langmuir–Blodgett patterning is a new paradigm for large-area patterning with mesostructured features
Recently, it has been demonstrated that Langmuir–Blodgett is an effective technique even to produce ultra-thin films of emerging two-dimensional layered materials on a large scale.
See also
References
Bibliography
R. W. Corkery, Langmuir, 1997, 13 (14), 3591–3594
Osvaldo N. Oliveira Jr., Brazilian Journal of Physics, vol. 22, no. 2, June 1992
Roberts G G, Pande K P and Barlow, Phys. Technol., Vol. 12, 1981
Singhal, Rahul. Poly-3-Hexyl Thiopene Langmuir-Blodgett Films for Application to Glucose Biosensor. National Physics Laboratory: Biotechnology and Bioengineering, p 277-282, February 5, 2004. John and Wiley Sons Inc.
Guo, Yinzhong. Preparation of poly(N-alkylmethacrylamide) Langmuir–Blodgett films for the application to a novel dry-developed positive deep UV resist. Macromolecules, p1115-1118, February 23, 1999. ACS
Franklin, Benjamin, Of the stilling of Waves by means of Oil. Letter to William Brownrigg and the Reverend Mr. Farish. London, November 7, 1773.
Pockels, A., Surface Tension, Nature, 1891, 43, 437.
Blodgett, Katherine B., Use of Interface to Extinguish Reflection of Light from Glass. Physical Review, 1939, 55,
A. Ulman, An Introduction to Ultrathin Organic Films From Langmuir-Blodgett to Self-Assembly, Academic Press, Inc.: San Diego (1991).
I.R. Peterson, "Langmuir Blodgett Films ", J. Phys. D 23, 4, (1990) 379–95.
I.R. Peterson, "Langmuir Monolayers", in T.H. Richardson, Ed., Functional Organic and Polymeric Materials Wiley: NY (2000).
L.S. Miller, D.E. Hookes, P.J. Travers and A.P. Murphy, "A New Type of Langmuir-Blodgett Trough", J. Phys. E 21 (1988) 163–167.
I.R.Peterson, J.D.Earls. I.R.Girling and G.J.Russell, "Disclinations and Annealing in Fatty-Acid Monolayers", Mol. Cryst. Liq. Cryst. 147 (1987) 141–147.
Syed Arshad Hussain, D. Bhattacharjee, "Langmuir-Blodgett Films and Molecular Electronics", Modern Physics Letters B vol. 23 No. 27 (2009) 3437–3451.
Nanotechnology
Phases of matter
Thin films | Langmuir–Blodgett film | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,913 | [
"Phases of matter",
"Materials science",
"Nanotechnology",
"Planes (geometry)",
"Thin films",
"Matter"
] |
2,227,555 | https://en.wikipedia.org/wiki/Network%20redirector | In DOS and Windows, a network redirector, or redirector, is an operating system driver that sends data to and receives data from a remote device. A network redirector provides mechanisms to locate, open, read, write, and delete files and submit print jobs.
It provides application services such as named pipes and MailSlots. When an application needs to send or receive data from a remote device, it sends a call to the redirector. The redirector provides the functionality of the presentation layer of the OSI model.
Network hosts communicate through the use of this client software: shells, redirectors and requesters.
In Microsoft Networking, the network redirectors are implemented as Installable File System (IFS) drivers.
See also
Universal Naming Convention (UNC)
References
External links
Network Redirector Drivers at Microsoft Docs
Device drivers
Operating system technology | Network redirector | [
"Technology"
] | 181 | [
"Computing stubs",
"Computer network stubs"
] |
2,227,640 | https://en.wikipedia.org/wiki/Image-forming%20optical%20system | In optics, an image-forming optical system is a system capable of being used for imaging. The diameter of the aperture of the main objective is a common criterion for comparison among optical systems, such as large telescopes.
The two traditional optical systems are mirror-systems (catoptrics) and lens-systems (dioptrics). However, in the late twentieth century, optical fiber was introduced as a technology for transmitting images over long distances. Catoptric and dioptric systems have a focal point that concentrates light onto a specific point, while optical fiber allows the transfer of an image from one plane to another without the need for an optical focus.
Isaac Newton is reported to have designed what he called a catadioptrical phantasmagoria, which can be interpreted to mean an elaborate structure of both mirrors and lenses.
Catoptrics and optical fiber have no chromatic aberration, while dioptrics need to have this error corrected. Newton believed that such correction was impossible, because he thought the path of the light depended only on its color. In 1757 John Dollond made an achromatised dioptric, which was the forerunner of the lenses used in all popular photographic equipment today.
Lower-energy X-rays are the highest-energy electromagnetic radiation that can be focused into an image, using a Wolter telescope. There are three types of Wolter telescopes. Near-infrared is typically the longest wavelength that is handled optically, such as in some large telescopes.
References
Optics
Telescopes | Image-forming optical system | [
"Physics",
"Chemistry",
"Astronomy"
] | 307 | [
"Applied and interdisciplinary physics",
"Optics",
"Telescopes",
" molecular",
"Astronomical instruments",
"Atomic",
" and optical physics"
] |
2,227,778 | https://en.wikipedia.org/wiki/Catoptrics | Catoptrics (from katoptrikós, "specular", from katoptron "mirror") deals with the phenomena of reflected light and image-forming optical systems using mirrors. A catoptric system is also called a catopter (catoptre).
Ancient texts
Catoptrics is the title of two texts from ancient Greece:
The Pseudo-Euclidean Catoptrics. This book is attributed to Euclid, although the contents are a mixture of work dating from Euclid's time together with work which dates to the Roman period. It has been argued that the book may have been compiled by the 4th century mathematician Theon of Alexandria. The book covers the mathematical theory of mirrors, particularly the images formed by plane and spherical concave mirrors.
Hero's Catoptrics. Written by Hero of Alexandria, this work concerns the practical application of mirrors for visual effects. In the Middle Ages, this work was falsely ascribed to Ptolemy. It only survives in a Latin translation.
The Latin translation of Alhazen's (Ibn al-Haytham) main work, Book of Optics (Kitab al-Manazir), exerted a great influence on Western science: for example, on the work of Roger Bacon, who cites him by name. His research in catoptrics (the study of optical systems using mirrors) centred on spherical and parabolic mirrors and spherical aberration. He made the observation that the ratio between the angle of incidence and refraction does not remain constant, and investigated the magnifying power of a lens. His work on catoptrics also contains the problem known as "Alhazen's problem". Alhazen's work influenced Averroes' writings on optics, and his legacy was further advanced through the 'reforming' of his Optics by Persian scientist Kamal al-Din al-Farisi (d. ca. 1320) in the latter's Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics).
Catoptric telescopes
The first practical catoptric telescope (the "Newtonian reflector") was built by Isaac Newton as a solution to the problem of chromatic aberration exhibited in telescopes using lenses as objectives (dioptric telescopes).
See also
Dioptrics
Catadioptrics
Optical telescope
List of telescope types
Image-forming optical system
Fresnel lens
Lighthouse lens
References
Bibliography
Optics
Mirrors | Catoptrics | [
"Physics",
"Chemistry"
] | 502 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
2,227,794 | https://en.wikipedia.org/wiki/Subtribe | Subtribe is a taxonomic category ranking which is below the rank of tribe and above genus. The standard suffix for a subtribe is -ina (in animals) or -inae (in plants). The earliest use of this word dates from the 19th century. An example of a subtribe is Hyptidinae, a group of flowering plants that contains approximately 400 accepted species distributed in 19 genera.
References
Botanical nomenclature
Plant taxonomy
Zoological nomenclature | Subtribe | [
"Biology"
] | 93 | [
"Zoological nomenclature",
"Botanical nomenclature",
"Plants",
"Botanical terminology",
"Biological nomenclature",
"Plant taxonomy"
] |
2,227,992 | https://en.wikipedia.org/wiki/Mains%20hum | Mains hum, electric hum, cycle hum, or power line hum is a sound associated with alternating current which is twice the frequency of the mains electricity. The fundamental frequency of this sound is usually double that of fundamental 50/60Hz, i.e., 100/120Hz, depending on the local power-line frequency. The sound often has heavy harmonic content above 50/60Hz. Due to the presence of mains current in mains-powered audio equipment as well as ubiquitous AC electromagnetic fields from nearby appliances and wiring, 50/60Hz electrical noise can get into audio systems, and is heard as mains hum from their speakers. Mains hum may also be heard coming from powerful electric power grid equipment such as utility transformers, caused by mechanical vibrations induced by magnetostriction in magnetic cores. Onboard aircraft (or spacecraft) the frequency heard is often higher pitched, due to the use of 400 Hz AC power in these settings because 400 Hz transformers are much smaller and lighter.
Causes
Electric hum around transformers is caused by stray magnetic fields causing the enclosure and accessories to vibrate. Magnetostriction is a second source of vibration, in which the core iron changes shape minutely when exposed to magnetic fields. The intensity of the fields, and thus the "hum" intensity, is a function of the applied voltage. Because the magnetic flux density is strongest twice every electrical cycle, the fundamental "hum" frequency will be twice the electrical frequency. Additional harmonics above 100/120Hz will be caused by the non-linear behavior of most common magnetic materials.
Around high-voltage power lines, hum may be produced by corona discharge.
In the realm of sound reinforcement (as in public address systems and loudspeakers), electric hum is often caused by induction. This hum is generated by oscillating electric currents induced in sensitive (high gain or high impedance) audio circuitry by the alternating electromagnetic fields emanating from nearby mains-powered devices like power transformers. The audible aspect of this sort of electric hum is produced by amplifiers and loudspeakers (note that this is not to be confused with acoustic feedback).
The other major source of hum in audio equipment is shared impedances; when a heavy current is flowing through a conductor (a ground trace) that a small-signal device is also connected to. All practical conductors will have a finite, if small, resistance, and the small resistance present means that devices using different points on the conductor as a ground reference will be at slightly different potentials. This hum is usually at the second harmonic of the power line frequency (100 Hz or 120 Hz), since the heavy ground currents are from AC to DC power supplies that rectify the mains waveform. (See also ground loop.)
In vacuum tube equipment, one potential source of hum is current leakage between the heaters and cathodes of the tubes. Another source is direct emission of electrons from the heater, or magnetic fields produced by the heater. Tubes for critical applications may have the heater circuit powered by direct current to prevent this source of hum.
Leakage of analogue video signals can give rise to hum sounding very similar to mains hum.
Prevention
It is often the case that electric hum at a venue is picked up via a ground loop. In this situation, an amplifier and a mixing desk are typically at some distance from one another. The chassis of each item is grounded via the mains earth pin, and is also connected along a different pathway via the conductor of a shielded cable. As these two pathways do not run alongside each other, an electrical circuit in the shape of a loop is formed. The same situation occurs between musical instrument amplifiers on stage and the mixing desk. To fix this, stage equipment often has a "ground lift" switch which breaks the loop. Another solution is to connect the source and destination through a 1:1 isolation transformer, variously called an audio humbucker or iso coil. A dangerous and potentially lethal option is to break contact with the ground wire by using an AC ground lift adapter or by breaking the earth pin off the power plug used at the mixing desk. Depending on the design and layout of the audio equipment, lethal voltages between the (now isolated) ground at the mixing desk and earth ground can then develop. Any contact between the AC line live terminals and the equipment chassis will energize all the cable shields and interconnected equipment.
Humbucking
Humbucking is a technique of introducing a small amount of line-frequency signal so as to cancel any hum introduced, or otherwise arrange to electrically cancel the effect of induced line frequency hum.
Humbucking is a process in which "hum" that is causing objectionable artifacts, generally in audio or video systems, is reduced. In a humbucker electric guitar pickup or microphone, two coils are used instead of one; they are arranged in opposing polarity so that AC hum induced in the two coils will cancel, while still giving a signal for the movement of the guitar strings or diaphragm.
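A toy numerical illustration of this cancellation, with invented signal and hum amplitudes rather than measurements: the string signal is taken to add in phase across the two coils while the induced hum appears with opposite polarity, so summing the coil outputs removes the hum.

```python
import numpy as np

fs = 48000
t = np.arange(0, 0.1, 1 / fs)
string = np.sin(2 * np.pi * 196.0 * t)        # string vibration (assumed G3 tone)
hum = 0.5 * np.sin(2 * np.pi * 50.0 * t)      # induced mains hum (assumed level)

# The two coils are wound/magnetized so the string signal adds in phase
# while the externally induced hum appears with opposite polarity.
coil_a = string + hum
coil_b = string - hum

combined = coil_a + coil_b                    # hum cancels, string signal doubles
print(np.max(np.abs(combined - 2 * string)))  # ~0: residual hum after bucking
```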
In certain vacuum-tube radio receivers, a winding on the dynamic speaker field coil was connected in series with the power supply to help cancel any residual hum.
Some other common applications of this process are:
Humbucking transformers or coils used in video systems.
Telephone (and other audio) system and computer communications wiring.
Consequences
In music
In musical instruments, hum is usually treated as a nuisance, and various electrical modifications are made to eliminate it. For instance, humbucker pickups on electric guitars are designed to "buck" or reduce the hum. Sometimes hum is used creatively, for example in dub and glitch music.
John Lennon demos
In the late 1970s, former Beatle John Lennon recorded some demo songs at his and Yoko Ono's Dakota apartment. These demos did not see any official release at the time, nor were they properly recorded for Double Fantasy or its follow-up Milk and Honey, but they did spread as bootlegs amongst Lennon fans.
In the mid-1990s, as part of the Beatles anthology series, the three surviving members, Paul McCartney, George Harrison, and Ringo Starr, regrouped to record initially incidental music for the albums, but decided to rework some John Lennon demos instead. Several demos were given to McCartney from Ono, the most notable being "Free as a Bird", "Real Love", and "Now and Then".
Of the demos received, only the aforementioned three were worked on. Of the three, "Real Love" and "Now and Then" were the most difficult to work on as, compared to "Free as a Bird", both contained a prominent 60-cycle mains hum, as a result of the cheap recording equipment Lennon used to record the demos. While the mains hum was removed from "Real Love", it was noticeably louder on "Now and Then", which made it much harder to remove. This, and to a much greater extent, Harrison's distaste for that particular demo, led to it being scrapped altogether, although reports circulated in the years since that McCartney was hoping to finish it. In 2009, a version of Lennon's demo, supposedly without the mains hum that hampered the Beatles version, appeared as a bootleg. In 2023, the mains hum was finally removed thanks to Peter Jackson's sound source separation technology, and the track was released on November 2, 2023.
In audio systems
Power line hum can be alleviated using a band-stop filter.
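A minimal sketch of such a filter using SciPy's IIR notch design; the 44.1 kHz sample rate, the 50 Hz mains frequency, and the Q value are all assumptions to be adjusted for the actual recording (use 60 Hz in 60 Hz regions).

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 44100.0        # sample rate in Hz (assumed)
f_hum = 50.0        # mains frequency to reject (use 60.0 where applicable)
quality = 30.0      # higher Q gives a narrower notch

# Design a second-order IIR notch centred on the hum frequency.
b, a = iirnotch(f_hum, quality, fs=fs)

# Synthetic example: a 440 Hz tone contaminated with 50 Hz hum.
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * f_hum * t)

# Zero-phase filtering avoids adding phase distortion to the audio.
cleaned = filtfilt(b, a, signal)
```

Since mains hum usually carries strong harmonics, a practical hum remover would cascade similar notches at 100 Hz, 150 Hz, and so on.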
In video systems
In analog video, mains hum can be seen as hum bars (bands of slightly different brightness) scrolling vertically up the screen. Broadcast television frame rates are chosen to match the line frequency, to minimize the disturbance these bars cause to the picture. A hum bar can be caused by a ground loop in cables carrying analog video signals, poor power supply smoothing, or magnetic interference with the cathode-ray tube.
In forensics
Electrical network frequency (ENF) analysis is a forensic technique for validating audio recordings by comparing frequency changes in background mains hum in the recording with long-term high-precision historical records of mains frequency changes from a database. In effect the mains hum signal is treated as a time-dependent digital watermark that can be used to find when the recording was created, and to help to detect any edits in the sound recording.
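The sketch below is only a rough illustration of the idea, not a forensic-grade implementation: it assumes a 50 Hz nominal mains frequency, one-second analysis windows, and that a reference frequency record sampled at the same window rate is available.

```python
import numpy as np

def track_enf(audio, fs, nominal=50.0, window_s=1.0, search_hz=1.0):
    """Estimate the mains-hum frequency in successive windows of a recording."""
    n = int(window_s * fs)
    estimates = []
    for start in range(0, len(audio) - n, n):
        frame = audio[start:start + n] * np.hanning(n)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        band = (freqs > nominal - search_hz) & (freqs < nominal + search_hz)
        estimates.append(freqs[band][np.argmax(spectrum[band])])
    return np.array(estimates)

def best_alignment(enf_trace, reference):
    """Slide the extracted trace along the historical record and return the
    offset (in windows) where the two frequency curves agree best."""
    scores = [np.corrcoef(enf_trace, reference[i:i + len(enf_trace)])[0, 1]
              for i in range(len(reference) - len(enf_trace))]
    return int(np.argmax(scores))
```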
See also
Electromagnetically induced acoustic noise
Ground loop
High frequency noise in CRTs
Valve amplifier
References
Electrical phenomena
Sounds by type
Noise | Mains hum | [
"Physics"
] | 1,743 | [
"Physical phenomena",
"Electrical phenomena"
] |
2,227,994 | https://en.wikipedia.org/wiki/Oxygen%20tank | An oxygen tank is an oxygen storage vessel, which is either held under pressure in gas cylinders, referred to in the industry as high pressure oxygen cylinders, or as liquid oxygen in a cryogenic storage tank.
Uses
Oxygen tanks are used to store gas for:
medical breathing (oxygen therapy) at medical facilities and at home (high pressure cylinder)
breathing at altitude in aviation, either in a decompression emergency, or constantly (as in unpressurized aircraft), usually in high pressure cylinders
oxygen first aid sets, in small portable high pressure cylinders
gas blending, for mixing breathing gases such as nitrox, trimix and heliox
open-circuit scuba sets - mainly used for accelerated decompression in technical diving, in high pressure cylinders
some types of diving rebreather: oxygen rebreathers and fully closed circuit rebreathers, usually in high pressure cylinders
use in climbing: "bottled oxygen" refers to oxygen in lightweight high pressure cylinders for mountaineering
industrial processes, including the manufacture of steel and monel
oxyacetylene welding equipment, glass lampworking torches, and some gas cutting torches, usually in high pressure cylinders
use as liquid rocket propellants for rocket engines, usually as liquid oxygen at ambient pressure
athletes, specifically on American football sidelines, to expedite recovery after exertion, in high-pressure cylinders.
Breathing oxygen is delivered from the storage tank to users by use of the following methods: oxygen mask, nasal cannula, full face diving mask, diving helmet, demand valve, oxygen rebreather, built in breathing system (BIBS), oxygen tent, and hyperbaric oxygen chamber.
Contrary to popular belief most scuba divers do not carry oxygen tanks. The vast majority of divers breathe air or nitrox stored in a diving cylinder. A small minority breathe trimix, heliox or other exotic gases. Some may carry pure oxygen for accelerated decompression or as supply gas to a rebreather. Some shallow divers, particularly naval combat divers, use oxygen rebreathers, and they use a small oxygen cylinder to provide the gas.
Oxygen is rarely held at pressures higher than , due to the risks of fire triggered by high temperatures caused by adiabatic heating as the gas changes pressure while moving from one vessel to another. Medical use liquid oxygen airgas tanks are typically .
All equipment coming into contact with high pressure oxygen must be "oxygen clean" and "oxygen compatible", to reduce the risk of fire. "Oxygen clean" means the removal of any substance that could act as a source of ignition. "Oxygen compatible" means that internal components must not burn readily or degrade easily in a high pressure oxygen environment.
In some countries there are legal and insurance requirements and restrictions on the use, storage and transport of pure oxygen. Oxygen tanks are normally stored in well-ventilated locations, far from potential sources of fire and concentrations of people.
See also
Bottled gas
Gas cylinder
Dewar flask
References
Underwater breathing apparatus
Decompression equipment
Pressure vessels
Tank
Gas technologies | Oxygen tank | [
"Physics",
"Chemistry",
"Engineering"
] | 622 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
2,228,126 | https://en.wikipedia.org/wiki/E2F | E2F is a group of genes that encodes a family of transcription factors (TF) in higher eukaryotes. Three of them are activators: E2F1, 2 and E2F3a. Six others act as repressors: E2F3b, E2F4-8. All of them are involved in the cell cycle regulation and synthesis of DNA in mammalian cells. E2Fs as TFs bind to the TTTCCCGC (or slight variations of this sequence) consensus binding site in the target promoter sequence.
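Because the consensus site is a short, fixed string, scanning a promoter sequence for exact or near-exact matches is straightforward. The sketch below is purely illustrative; the example sequence and the one-mismatch tolerance used to approximate "slight variations" are assumptions, not data from this article.

```python
def find_e2f_sites(promoter, consensus="TTTCCCGC", max_mismatches=1):
    """Return (position, matched_sequence) pairs where the promoter matches the
    E2F consensus, allowing a small number of mismatches."""
    promoter = promoter.upper()
    hits = []
    for i in range(len(promoter) - len(consensus) + 1):
        window = promoter[i:i + len(consensus)]
        mismatches = sum(1 for a, b in zip(window, consensus) if a != b)
        if mismatches <= max_mismatches:
            hits.append((i, window))
    return hits

# Hypothetical promoter fragment used only to exercise the function.
print(find_e2f_sites("ACGTTTCCCGCATTTCCGGCAA"))
```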
E2F family
Schematic diagram of the amino acid sequences of E2F family members (N-terminus to the left, C-terminus to the right) highlighting the relative locations of functional domains within each member:
Genes
Homo sapiens E2F1 mRNA or
E2F1 protein sequences from NCBI protein and nucleotide database.
Structure
X-ray crystallographic analysis has shown that the E2F family of transcription factors has a fold similar to the winged-helix DNA-binding motif.
Role in the cell cycle
E2F family members play a major role during the G1/S transition in the mammalian and plant cell cycle (see KEGG cell cycle pathway). DNA microarray analysis reveals unique sets of target promoters among E2F family members suggesting that each protein has a unique role in the cell cycle. Among E2F transcriptional targets are cyclins, CDKs, checkpoint regulators, DNA repair and replication proteins. Nonetheless, there is a great deal of redundancy among the family members. Mouse embryos lacking E2F1, E2F2, and one of the E2F3 isoforms can develop normally when either E2F3a or E2F3b is expressed.
The E2F family is generally split by function into two groups: transcription activators and repressors. Activators such as E2F1, E2F2, E2F3a promote and help carry out the cell cycle, while repressors inhibit the cell cycle. Yet, both sets of E2F have similar domains. E2F1-6 have a DP1,2 heterodimerization domain, which allows them to bind to DP1 or DP2, proteins distantly related to E2F. Binding with DP1,2 provides a second DNA binding site, increasing E2F binding stability. Most E2F have a pocket protein binding domain. Pocket proteins such as pRB and the related proteins p107 and p130 can bind to E2F when hypophosphorylated. In activators, E2F binding with pRB has been shown to mask the transactivation domain responsible for transcription activation. In repressors E2F4 and E2F5, pocket protein binding (more often p107 and p130 than pRB) mediates recruitment of repression complexes to silence target genes. E2F6, E2F7, and E2F8 do not have pocket protein binding sites and their mechanism for gene silencing is unclear. Cdk4(6)/cyclin D and cdk2/cyclin E phosphorylate pRB and related pocket proteins allowing them to dissociate from E2F. Activator E2F proteins can then transcribe S phase promoting genes. In REF52 cells, overexpression of activator E2F1 is able to push quiescent cells into S phase. While repressors E2F4 and 5 do not alter cell proliferation, they mediate G1 arrest.
E2F activator levels are cyclic, with maximal expression during G1/S. In contrast, E2F repressors stay constant, especially since they are often expressed in quiescent cells. Specifically, E2F5 is only expressed in terminally differentiated cells in mice. The balance between repressor and activator E2F regulate cell cycle progression. When activator E2F family proteins are knocked out, repressors become active to inhibit E2F target genes.
E2F/pRb complexes
The Rb tumor suppressor protein (pRb) binds to the E2F1 transcription factor preventing it from interacting with the cell's transcription machinery. In the absence of pRb, E2F1 (along with its binding partner DP1) mediates the trans-activation of E2F1 target genes that facilitate the G1/S transition and S-phase. E2F targets genes that encode proteins involved in DNA replication (for example DNA polymerase, thymidine kinase, dihydrofolate reductase and cdc6), and chromosomal replication (replication origin-binding protein HsOrc1 and MCM5). When cells are not proliferating, E2F DNA binding sites contribute to transcriptional repression. In vivo footprinting experiments obtained on Cdc2 and B-myb promoters demonstrated E2F DNA binding site occupation during G0 and early G1, when E2F is in transcriptional repressive complexes with the pocket proteins.
pRb is one of the targets of the oncogenic human papilloma virus protein E7, and human adenovirus protein E1A. By binding to pRB, they stop the regulation of E2F transcription factors and drive the cell cycle to enable virus genome replication.
Activators: E2F1, E2F2, E2F3a
Activators are maximally expressed late in G1 and can be found in association with E2F regulated promoters during the G1/S transition. The activation of E2F-3a genes follows upon the growth factor stimulation and the subsequent phosphorylation of the E2F inhibitor retinoblastoma protein, pRB. The phosphorylation of pRB is initiated by cyclin D/cdk4, cdk6 complex and continued by cyclin E/cdk2. Cyclin D/cdk4,6 itself is activated by the MAPK signaling pathway.
When bound to E2F-3a, pRb can directly repress E2F-3a target genes by recruiting chromatin remodeling complexes and histone modifying activities (e.g. histone deacetylase, HDAC) to the promoter.
Inhibitors: E2F3b, E2F4, E2F5, E2F6, E2F7, E2F8
E2F3b, E2F4, E2F5 are expressed in quiescent cells and can be found associated with E2F-binding elements on E2F-target promoters during G0-phase. E2F-4 and 5 preferentially bind to p107/p130.
E2F-6 acts as a transcriptional repressor, but through a distinct, pocket protein independent manner. E2F-6 mediates repression by direct binding to polycomb-group proteins or via the formation of a large multimeric complex containing Mga and Max proteins.
The repressor genes E2F7/E2F8, located on chromosome 7, are transcription factors responsible for protein coding cell cycle regulation. Together, they are essential for the development of an intact, organized, and functional placental structure during embryonic development. While the specific molecular pathways remain unknown, researchers have used placental and fetal lineage specific cre mice to determine the functions of the synergistic E2F7 and E2F8 genes. Knockout mice, depleted of E2F7 and E2F8, result in abnormal trophoblastic proliferation accompanied by advanced cellular apoptosis. Phenotypically, the placenta presents with disruptions in cellular architecture to include large clusters of undifferentiated trophoblastic cells, which have failed to invade the maternal decidua. E2F7 and E2F8 proteins can function as repressors independently of DP interaction. They are unique in having a duplicated conserved E2F-like DNA-binding domain and in lacking a DP1,2-dimerization domain. They also appear to play a role in angiogenesis through the activation of vascular endothelial growth factor A. Using zebrafish, severe vascular defects of the head and somatic vessels were discovered when animals were depleted of E2F7 and E2F8. Antagonized by E2F3a, a transcriptional program has been discovered that functions through the coordination of multiple genes in the E2F family in order to ensure proper development of the placenta.
Transcriptional targets
Cell cycle: CCNA1,2, CCND1,2, CDK2, MYB, E2F1,2,3, TFDP1, CDC25A
Negative regulators: E2F7, RB1, TP107, TP21
Checkpoints: TP53, BRCA1,2, BUB1
Apoptosis: TP73, APAF1, CASP3,7,8, MAP3K5,14
Nucleotide synthesis: thymidine kinase (tk), thymidylate synthase (ts), DHFR
DNA repair: BARD1, RAD51, UNG1,2, FANCA, FANCC, FANCJ
DNA replication: PCNA, histone H2A, DNA pol and , RPA1,2,3, CDC6, MCM2,3,4,5,6,7
See also
Transcription factor DP
Type 3c (Pancreatogenic) Diabetes
References
External links
Drosophila E2F transcription factor - The Interactive Fly
Drosophila E2F transcription factor 2 - The Interactive Fly
Cell cycle
Transcription factors | E2F | [
"Chemistry",
"Biology"
] | 2,070 | [
"Gene expression",
"Signal transduction",
"Cellular processes",
"Induced stem cells",
"Cell cycle",
"Transcription factors"
] |
2,228,163 | https://en.wikipedia.org/wiki/Syphon%20recorder | The syphon or siphon recorder is an obsolete electromechanical device used as a receiver for submarine telegraph cables invented by William Thomson, 1st Baron Kelvin in 1867. It automatically records an incoming telegraph message as a wiggling ink line on a roll of paper tape. Later a trained telegrapher would read the tape, translating the pulses representing the "dots" and "dashes" of the Morse code to characters of the text message.
The syphon recorder replaced Thomson’s previous invention, the mirror galvanometer, as the standard receiving instrument for submarine telegraph cables, allowing long cables to be worked using just a few volts at the sending end. The disadvantage of the mirror galvanometer was that it required two operators, one with a steady eye to read and call off the signal, the other to write down the characters received. Its use spread to ordinary telegraph lines and radiotelegraphy radio receivers. A major advantage of the syphon recorder was that no operator had to monitor the line constantly waiting for messages to come in. The paper tape preserved a record of the actual message before translation to text, so errors in translation could be checked.
Principle of operation
The siphon recorder works on the principle of a d'Arsonval galvanometer. A light coil of wire is suspended between the poles of a permanent magnet so it can turn freely. The coil is attached via two wire linkages to the metal plate siphon support, which pivots on a horizontal suspension thread. From this plate a narrow glass siphon tube hangs down vertically with its end almost touching a paper tape. The paper tape is pulled by motorized rollers at a constant speed under the siphon pen. Ink is drawn up from a reservoir into the tube by siphon action and comes out a tiny orifice in the end of the siphon tube, drawing a line down the moving paper tape. In order not to affect the motion of the coil, the siphon tube itself never touches the paper, only the ink.
The current from the telegraph line is applied to the coil. The pulses of current representing the Morse code "dots" and "dashes" flowing through the coil create a magnetic field which interacts with the magnetic field of the magnet, creating a torque which causes the coil to rotate slightly about its vertical suspension axis. The wire linkages cause the siphon support plate to rotate about its horizontal axis, swinging the siphon tube across the paper tape. This draws a displacement in the ink line on the tape as long as the current is present in the coil. Thus the ink line on the tape forms a graph of the current in the telegraph line, with displacements representing the "dots" and "dashes" of the Morse code. An operator knowing Morse code later translates the line on the tape to characters of the text message, and types them onto a telegram form.
Kelvin's electrostatic syphon
The siphon and an ink reservoir are together supported by an ebonite bracket, separate from the rest of the instrument, and insulated from it. This separation permits the ink to be electrified to a high potential while the body of the instrument, including the paper and metal writing tablet, are grounded, and at low potential. The tendency of a charged body is to move from a place of higher to a place of lower potential, and consequently the ink tends to flow downwards to the writing tablet. The only avenue of escape for it is by the fine glass siphon, and through this it rushes accordingly and discharges itself upon the paper. The natural repulsion between its like-electrified particles causes the shower to issue in spray. As the paper moves over the pulleys a delicate hair line is marked, straight when the syphon is stationary, but curved when the siphon is pulled from side to side by the oscillations of the signal coil.
Power to pull the roll of paper tape through the syphon recorder was usually supplied by one of Froment's mouse mill motors. This also drove an electrostatic machine to generate the electricity to power the syphon.
Muirhead's vibrating recorder
A simpler mechanism, operating quite differently, was developed by Alexander Muirhead. This used a vibrating pen to avoid the same problem of the ink sticking to the paper. The recording pen was suspended on a thin wire, vibrated by an electromagnet mechanism similar to that of an electric bell, to break contact with the paper.
References
Telegraphy
Non-impact printing
Recording devices | Syphon recorder | [
"Technology"
] | 905 | [
"Recording devices"
] |
2,228,245 | https://en.wikipedia.org/wiki/Quadrupole%20ion%20trap | In experimental physics, a quadrupole ion trap or Paul trap is a type of ion trap that uses dynamic electric fields to trap charged particles. They are also called radio frequency (RF) traps or Paul traps in honor of Wolfgang Paul, who invented the device and shared the Nobel Prize in Physics in 1989 for this work. It is used as a component of a mass spectrometer or a trapped ion quantum computer.
Overview
A charged particle, such as an atomic or molecular ion, feels a force from an electric field. It is not possible to create a static configuration of electric fields that traps the charged particle in all three directions (this restriction is known as Earnshaw's theorem). It is possible, however, to create an average confining force in all three directions by use of electric fields that change in time. To do so, the confining and anti-confining directions are switched at a rate faster than it takes the particle to escape the trap. The traps are also called "radio frequency" traps because the switching rate is often at a radio frequency.
The quadrupole is the simplest electric field geometry used in such traps, though more complicated geometries are possible for specialized devices. The electric fields are generated from electric potentials on metal electrodes. A pure quadrupole is created from hyperbolic electrodes, though cylindrical electrodes are often used for ease of fabrication. Microfabricated ion traps exist where the electrodes lie in a plane with the trapping region above the plane. There are two main classes of traps, depending on whether the oscillating field provides confinement in three or two dimensions. In the two-dimension case (a so-called "linear RF trap"), confinement in the third direction is provided by static electric fields.
Theory
The 3D trap itself generally consists of two hyperbolic metal electrodes with their foci facing each other and a hyperbolic ring electrode halfway between the other two electrodes. The ions are trapped in the space between these three electrodes by AC (oscillating) and DC (static) electric fields. The AC radio frequency voltage oscillates between the two hyperbolic metal end cap electrodes if ion excitation is desired; the driving AC voltage is applied to the ring electrode. The ions are first pulled up and down axially while being pushed in radially. The ions are then pulled out radially and pushed in axially (from the top and bottom). In this way the ions move in a complex motion that generally involves the cloud of ions being long and narrow and then short and wide, back and forth, oscillating between the two states. Since the mid-1980s most 3D traps (Paul traps) have used ~1 mTorr of helium. The use of damping gas and the mass-selective instability mode developed by Stafford et al. led to the first commercial 3D ion traps.
The quadrupole ion trap has two main configurations: the three-dimensional form described above and the linear form made of 4 parallel electrodes. A simplified rectilinear configuration is also used. The advantage of the linear design is its greater storage capacity (in particular of Doppler-cooled ions) and its simplicity, but this leaves a particular constraint on its modeling. The Paul trap is designed to create a saddle-shaped field to trap a charged ion, but with a quadrupole, this saddle-shaped electric field cannot be rotated about an ion in the centre. It can only 'flap' the field up and down. For this reason, the motions of a single ion in the trap are described by Mathieu equations, which can only be solved numerically by computer simulations.
The intuitive explanation and lowest order approximation is the same as strong focusing in accelerator physics. Since the field affects the acceleration, the position lags behind (to lowest order by half a period). So the particles are at defocused positions when the field is focusing and vice versa. Being farther from center, they experience a stronger field when the field is focusing than when it is defocusing.
Equations of motion
Ions in a quadrupole field experience restoring forces that drive them back toward the center of the trap. The motion of the ions in the field is described by solutions to the Mathieu equation. When written for ion motion in a trap, the equation is

\frac{d^2u}{d\xi^2} + \left[a_u - 2q_u\cos(2\xi)\right]u = 0 \qquad (1)

where u represents the x, y and z coordinates, ξ is a dimensionless variable given by ξ = Ωt/2, and a_u and q_u are dimensionless trapping parameters. The parameter Ω is the radial frequency of the potential applied to the ring electrode. By using the chain rule, it can be shown that

\frac{d^2u}{dt^2} = \frac{\Omega^2}{4}\frac{d^2u}{d\xi^2} \qquad (2)

Substituting Equation 2 into the Mathieu equation (Equation 1) yields

\frac{4}{\Omega^2}\frac{d^2u}{dt^2} + \left[a_u - 2q_u\cos(\Omega t)\right]u = 0 \qquad (3)

Multiplying by m and rearranging terms shows us that

m\frac{d^2u}{dt^2} = -\frac{m\Omega^2}{4}\left[a_u - 2q_u\cos(\Omega t)\right]u \qquad (4)

By Newton's laws of motion, the above equation represents the force on the ion. This equation can be exactly solved using the Floquet theorem or the standard techniques of multiple scale analysis. The particle dynamics and time averaged density of charged particles in a Paul trap can also be obtained by the concept of ponderomotive force.

The forces in each dimension are not coupled, thus the force acting on an ion in, for example, the x dimension is

F_x = m\frac{d^2x}{dt^2} = -e\frac{\partial\phi}{\partial x} \qquad (5)

Here, φ is the quadrupolar potential, given by

\phi = \frac{\phi_0}{r_0^2}\left(\lambda x^2 + \sigma y^2 + \gamma z^2\right) \qquad (6)

where φ_0 is the applied electric potential, λ, σ, and γ are weighting factors, and r_0 is a size parameter constant. In order to satisfy Laplace's equation, \nabla^2\phi = 0, it can be shown that

\lambda + \sigma + \gamma = 0

For an ion trap, λ = σ = 1 and γ = -2, and for a quadrupole mass filter, λ = -σ = 1 and γ = 0.

Transforming Equation 6 into a cylindrical coordinate system with x = r cos θ, y = r sin θ, and z = z and applying the Pythagorean trigonometric identity sin²θ + cos²θ = 1 gives

\phi_{r,z} = \frac{\phi_0}{r_0^2}\left(r^2 - 2z^2\right) \qquad (7)

The applied electric potential is a combination of RF and DC given by

\phi_0 = U + V\cos(\omega t) \qquad (8)

where ω = 2πν and ν is the applied frequency in hertz.

Substituting Equation 8 into Equation 6 with λ = 1 and differentiating with respect to x gives

\frac{\partial\phi}{\partial x} = \frac{2x}{r_0^2}\left(U + V\cos(\omega t)\right) \qquad (9)

Substituting Equation 9 into Equation 5 leads to

m\frac{d^2x}{dt^2} = -\frac{2e}{r_0^2}\left(U + V\cos(\omega t)\right)x \qquad (10)

Comparing terms on the right hand side of Equation 1 and Equation 10 leads to

a_x = \frac{8eU}{m r_0^2 \Omega^2} \qquad (11)

and

q_x = -\frac{4eV}{m r_0^2 \Omega^2} \qquad (12)

Further, a_z = -2a_x = -\frac{16eU}{m r_0^2 \Omega^2},

and q_z = -2q_x = \frac{8eV}{m r_0^2 \Omega^2}.
The trapping of ions can be understood in terms of stability regions in and space. The boundaries of the shaded regions in the figure are the boundaries of stability in the two directions (also known as boundaries of bands). The domain of overlap of the two regions is the trapping domain. For calculation of these boundaries and similar diagrams as above see Müller-Kirsten.
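A minimal numerical sketch of such a stability test, assuming the dimensionless parameters a and q have already been computed for the ion of interest: integrate Equation 1 over one period of the coefficient (π in the dimensionless variable ξ) for two independent initial conditions and check the trace of the resulting monodromy matrix; |trace| < 2 indicates stable (bounded, i.e. trapped) motion.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_rhs(xi, y, a, q):
    # y = [u, du/dxi]; Equation 1: u'' + (a - 2 q cos 2xi) u = 0
    return [y[1], -(a - 2.0 * q * np.cos(2.0 * xi)) * y[0]]

def is_stable(a, q):
    """Floquet test: integrate over one period (pi) of the coefficient for two
    independent initial conditions and inspect the monodromy matrix trace."""
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(mathieu_rhs, (0.0, np.pi), y0, args=(a, q),
                        rtol=1e-9, atol=1e-12)
        cols.append(sol.y[:, -1])
    monodromy = np.column_stack(cols)
    return abs(np.trace(monodromy)) < 2.0

# Example: a pure-RF trap (a = 0) is stable for q below roughly 0.908.
print(is_stable(0.0, 0.3), is_stable(0.0, 1.2))   # expected: True False
```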
Linear ion trap
The linear ion trap uses a set of quadrupole rods to confine ions radially and a static electrical potential on-end electrodes to confine the ions axially. The linear form of the trap can be used as a selective mass filter, or as an actual trap by creating a potential well for the ions along the axis of the electrodes. Advantages of the linear trap design are increased ion storage capacity, faster scan times, and simplicity of construction (although quadrupole rod alignment is critical, adding a quality control constraint to their production. This constraint is additionally present in the machining requirements of the 3D trap).
Cylindrical ion trap
The cylindrical ion trap (CIT) emerged as a derivative of the quadrupole ion trap with simpler geometric structure in which the electrodes are arranged in a cylindrical shape rather than the traditional hyperbolic or linear configuration.
The cylindrical ion trap consists of a central cylindrical electrode (ring electrode) and two end-cap electrodes. By applying a combination of static (DC) and oscillating (RF) voltages to these electrodes, a three-dimensional quadrupole field is generated. The ions are trapped in the center of this field due to the restoring forces created by the electric fields, which confine the ions along the axis and radial directions.
Ion traps with a cylindrical rather than a hyperbolic ring electrode have been developed and microfabricated in arrays to develop miniature mass spectrometers for chemical detection in medical diagnosis and other fields. However, the reduction in ion storage volumes remains a problem in small ion traps.
Planar ion trap
Quadrupole traps can also be "unfolded" to create the same effect using a set of planar electrodes. This trap geometry can be made using standard micro-fabrication techniques, including the top metal layer in a standard CMOS microelectronics process, and is a key technology for scaling trapped ion quantum computers to useful numbers of qubits.
Combined radio frequency trap
A combined radio frequency trap is a combination of a Paul ion trap and a Penning trap. One of the main bottlenecks of a quadrupole ion trap is that it can confine only single-charged species or multiple species with similar masses. But in certain applications like antihydrogen production it is important to confine two species of charged particles of widely varying masses. To achieve this objective, a uniform magnetic field is added in the axial direction of the quadrupole ion trap.
Digital ion trap
The digital ion trap (DIT) is a quadrupole ion trap (linear or 3D) that differs from conventional traps by the driving waveform. A DIT is driven by digital signals, typically rectangular waveforms that are generated by switching rapidly between discrete voltage levels. Major advantages of the DIT are its versatility and virtually unlimited mass range. The digital ion trap has been developed mainly as a mass analyzer.
See also
Quadrupole magnet
References
Bibliography
W. Paul Electromagnetic Traps for Charged and Neutral Particles Taken from Proceedings of the International School of Physics <<Enrico Fermi>> Course CXVIII “Laser Manipulation of Atoms and Ions”, (North Holland, New York, 1992) p. 497-517
R.I. Thompson, T.J. Harmon, and M.G. Ball, The rotating-saddle trap: a mechanical analogy to RF-electric-quadrupole ion trapping? (Canadian Journal of Physics, 2002: 80 12) p. 1433–1448
M. Welling, H.A. Schuessler, R.I. Thompson, H. Walther Ion/Molecule Reactions, Mass Spectrometry and Optical Spectroscopy in a Linear Ion Trap (International Journal of Mass Spectrometry and Ion Processes, 1998: 172) p. 95-114.
K. Shah and H. Ramachandran, Analytic, nonlinearly exact solutions for an rf confined plasma, Phys. Plasmas 15, 062303 (2008), Pradip K. Ghosh, Ion Traps, International Series of Monographs in Physics, Oxford University Press (1995), https://web.archive.org/web/20111102190045/http://www.oup.com/us/catalog/general/subject/Physics/AtomicMolecularOpticalphysics/?view=usa
Patents
External links
Nobel Prize in Physics 1989
Mass spectrometry
Measuring instruments
German inventions
Particle traps | Quadrupole ion trap | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,183 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Measuring instruments",
"Particle traps",
"Mass spectrometry",
"Matter"
] |
2,228,302 | https://en.wikipedia.org/wiki/Wide%20open%20throttle | Wide open throttle or wide-open throttle (WOT), also called full throttle, is the fully opened state of a throttle on an engine (internal combustion engine or steam engine). The term also, by extension, usually refers to the maximum-speed state of running the engine, as the normal result of a fully opened throttle plate/butterfly valve. In an internal combustion engine, this state entails the maximum intake of air and fuel that occurs when the throttle plates inside the carburetor or throttle body are "wide open" (fully opened up), providing the least resistance to the incoming air. In the case of an automobile, WOT is when the accelerator is depressed fully, sometimes referred to as "flooring it" (because automotive throttle controls are usually a pedal, so full throttle is selected by pressing the pedal to the floor, or as near as it will go). A throttle on a steam engine controls how much steam is sent to the cylinders from the boiler.
In the case of a diesel engine, which does not have a throttle valve, WOT is the point at which the maximum amount of fuel is being injected relative to the amount of air pumped by the engine, generally in order to bring the fuel-air mixture up to the stoichiometric point. If any more fuel were to be injected then black smoke would result. (Regardless of the non-literal nature of the term when applied to diesel contexts, it is nonetheless figuratively common and well understood.)
At wide open throttle, manifold vacuum decreases. The higher manifold pressure in turn allows more air to enter the combustion cylinders, and thus additional fuel is required to balance the combustion reaction. (Carburetors and fuel injection systems are arranged so as to provide the correct air–fuel ratio as conditions dynamically shift.) The additional air and fuel reacting together produce more power.
Throttle position is a data point in electronic engine control and in on-board diagnostics (OBD). In the many generations and designs of engine control units, a throttle position sensor (TPS) is typically one of the sensors providing input to the computer. Often an air–fuel ratio meter is also used.
In both control theory (involving humans and machines) and control logic (as a machine-based application thereof), the concept of wide open throttle can be divided logically into operator intent, throttle position itself, the resultant/net effect on the state of engine running at each moment, and the feedback loops among those factors. This is true even in a system without electronic control, as, for example, when the operator holds the throttle open (pedal floored) to overcome flooding in a carbureted engine. The intent of WOT in that case is not to rev up the engine (which is not even running yet) but simply to lean out the air–fuel ratio enough to get the engine started. In electronic control, the feedback between the factors can be finessed and exploited in countless ways, even to the extent that in drive by wire systems the operator's input (which is pedal position) is a completely separate concern from throttle position itself, and the computer constantly makes new decisions about how the two should be correlated when the state of engine running changes from second to second. In the carburetion era, carbs had jets and fuel circuits arranged with a certain logic to overcome the transient differences between throttle position changes and their resultant effects on the engine's running (for example, jets to prevent hesitation).
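As a purely illustrative sketch of that separation between operator intent and throttle position, the toy controller below maps pedal position to a commanded throttle opening subject to engine state; every number and shaping choice in it is invented for the example and does not describe any real engine control unit.

```python
def commanded_throttle(pedal_fraction, engine_rpm, coolant_warm):
    """Toy drive-by-wire map: pedal position is only one input; the controller
    decides the actual throttle-plate opening from the engine's current state."""
    # Progressive pedal shaping: gentle near idle, aggressive near the floor.
    shaped = pedal_fraction ** 1.5
    # Hold a minimum opening at idle so the engine does not stall.
    idle_floor = 0.04 if engine_rpm < 1000 else 0.0
    # Limit wide-open throttle while the engine is still cold.
    ceiling = 1.0 if coolant_warm else 0.6
    return min(max(shaped, idle_floor), ceiling)

# Pedal floored (operator intent = WOT) but engine cold: throttle is capped.
print(commanded_throttle(1.0, 2500, coolant_warm=False))   # 0.6
```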
See also
balls to the wall, a similar concept with probable origins in aircraft throttle control levers
balls-out, a similar concept with probable origins in ball-style governors on steam engines and gas engines
References
Engine technology | Wide open throttle | [
"Technology"
] | 757 | [
"Engine technology",
"Engines"
] |
2,228,439 | https://en.wikipedia.org/wiki/Genetic%20redundancy | Genetic redundancy is a term typically used to describe situations where a given biochemical function is redundantly encoded by two or more genes. In these cases, mutations (or defects) in one of these genes will have a smaller effect on the fitness of the organism than expected from the genes’ function. Characteristic examples of genetic redundancy include the callose synthases GSL1 and GSL5 in plant development (Enns, Kanaoka et al. 2005) and Vav1 and Vav3 in platelet activation (Pearce, Senis et al. 2004). Many more examples are thoroughly discussed in (Kafri, Levy & Pilpel. 2006).
The main source of genetic redundancy is the process of gene duplication which generates multiplicity in gene copy number. A second and less frequent source of genetic redundancy are convergent evolutionary processes leading to genes that are close in function but unrelated in sequence (Galperin, Walker & Koonin 1998). Genetic redundancy is typically associated with signaling networks, in which many proteins act together to accomplish teleological functions. In contrast to expectations, genetic redundancy is not associated with gene duplications [Wagner, 2007], nor do redundant genes mutate faster than essential genes [Hurst 1999]. Therefore, genetic redundancy has classically aroused much debate in the context of evolutionary biology (Nowak et al., 1997; Kafri, Springer & Pilpel . 2009).
From an evolutionary standpoint, genes with overlapping functions imply minimal, if any, selective pressures acting on these genes. One therefore expects that the genes participating in such buffering of mutations will be subject to severe mutational drift diverging their functions and/or expression patterns with considerably high rates. Indeed it has been shown that the functional divergence of paralogous pairs in both yeast and human is an extremely rapid process. Taking these notions into account, the very existence of genetic buffering, and the functional redundancies required for it, presents a paradox in light of the evolutionary concepts. On one hand, for genetic buffering to take place there is a necessity for redundancies of gene function, on the other hand such redundancies are clearly unstable in face of natural selection and are therefore unlikely to be found in evolved genomes.
Duplicated genes that diverge in function may undergo subfunctionalization or can become degenerate. When two protein coding genes are degenerate there will be conditions where the gene products appear functionally redundant and also conditions where the gene products take on unique functions.
References
Pearce, A. C., Y. A. Senis, et al. (2004). "Vav1 and vav3 have critical but redundant roles in mediating platelet activation by collagen." J Biol Chem 279(52): 53955-62.
Enns, L. C., M. M. Kanaoka, et al. (2005). "Two callose synthases, GSL1 and GSL5, play an essential and redundant role in plant and pollen development and in fertility." Plant Mol Biol 58(3): 333-49.
Kafri, R., M. Levy, et al. (2006). "The regulatory utilization of genetic redundancy through responsive backup circuits." Proc Natl Acad Sci U S A 103(31): 11653-8.
Galperin, M. Y., Walker, D. R. & Koonin, E. V. (1998) Genome Res 8, 779-90.
Kafri R, Springer M, Pilpel Y. Genetic redundancy: new tricks for old genes. Cell. 2009 Feb 6;136(3):389-92.
Wagner A, Wright J. Alternative routes and mutational robustness in complex regulatory networks. Biosystems. 2007 Mar;88(1-2):163-72. Epub 2006 Jun 15.
Hurst LD, Smith NG. Do essential genes evolve slowly? Curr Biol. 1999 Jul 15;9(14):747-50.
Genetics terms | Genetic redundancy | [
"Biology"
] | 841 | [
"Genetics terms"
] |
2,228,580 | https://en.wikipedia.org/wiki/Austrumi%20Linux | Austrumi Linux (Austrum Latvijas Linukss) is a bootable live CD Linux distribution based on Slackware. It was created and is actively maintained by a group from the Latgale region of Latvia. The entire operating system and all the applications run from RAM, making Austrumi faster than larger distributions that must read from a disk, and allowing the boot medium to be removed after the operating system has booted.
See also
Comparison of Linux Live Distros
Lightweight Linux distribution
List of Linux distributions that run from RAM
References
External links
Downloading ISO images
Slackware
Linux distributions without systemd
Linux distributions
Language-specific Linux distributions | Austrumi Linux | [
"Technology"
] | 130 | [
"Natural language and computing",
"Language-specific Linux distributions"
] |
2,228,726 | https://en.wikipedia.org/wiki/Surface%20charge | A surface charge is an electric charge present on a two-dimensional surface. These electric charges are constrained on this 2-D surface, and surface charge density, measured in coulombs per square meter (C•m−2), is used to describe the charge distribution on the surface. The electric potential is continuous across a surface charge and the electric field is discontinuous, but not infinite; this is unless the surface charge consists of a dipole layer. In comparison, the potential and electric field both diverge at any point charge or linear charge.
In physics, at equilibrium, an ideal conductor has no charge on its interior; instead, the entirety of the charge of the conductor resides on the surface. However, this only applies to the ideal case of infinite electrical conductivity; the majority of the charge of an actual conductor resides within the skin depth of the conductor's surface. For dielectric materials, upon the application of an external electric field, the positive charges and negative charges in the material will slightly move in opposite directions, resulting in polarization density in the bulk body and bound charge at the surface.
In chemistry, there are many different processes which can lead to a surface being charged, including adsorption of ions, protonation or deprotonation, and, as discussed above, the application of an external electric field. Surface charge emits an electric field, which causes particle repulsion and attraction, affecting many colloidal properties.
Surface charge practically always appears on the particle surface when it is placed into a fluid. Most fluids contain ions, positive (cations) and negative (anions). These ions interact with the object surface. This interaction might lead to the adsorption of some of them onto the surface. If the number of adsorbed cations exceeds the number of adsorbed anions, the surface would have a net positive electric charge.
Dissociation of the surface chemical group is another possible mechanism leading to surface charge.
Density
Surface charge density is defined as the amount of electric charge, q, that is present on a surface of given area, A:

\sigma = \frac{q}{A}
Conductors
According to Gauss’s law, a conductor at equilibrium carrying an applied charge has no charge on its interior. Instead, the entirety of the charge of the conductor resides on the surface, and can be expressed by the equation:

\sigma = \varepsilon_0 E

where E is the electric field caused by the charge on the conductor and \varepsilon_0 is the permittivity of free space. This equation is only strictly accurate for conductors with infinitely large area, but it provides a good approximation if E is measured at an infinitesimally small Euclidean distance from the surface of the conductor.
Colloids and immersed objects
When a surface is immersed in a solution containing electrolytes, it develops a net surface charge. This is often because of ionic adsorption. Aqueous solutions universally contain positive and negative ions (cations and anions, respectively), which interact with partial charges on the surface, adsorbing to and thus ionizing the surface and creating a net surface charge. This net charge results in a surface potential, which causes the surface to be surrounded by a cloud of counter-ions, which extends from the surface into the solution, and also generally results in repulsion between particles. The larger the partial charges in the material, the more ions are adsorbed to the surface, and the larger the cloud of counter-ions. A solution with a higher concentration of electrolytes also increases the size of the counter-ion cloud. This ion/counterion layer is known as the electric double layer.
A solution's pH can also greatly affect surface charge because functional groups present on the surface of particles can often contain oxygen or nitrogen, two atoms which can be protonated or deprotonated to become charged. Thus, as the concentration of hydrogen ions changes, so does the surface charge of the particles. At a certain pH, the average surface charge will be equal to zero; this is known as the point of zero charge (PZC). A list of common substances and their associated PZCs is shown to the right.
Interfacial potential
An interface is defined as the common boundary formed between two different phases, such as between a solid and gas. Electric potential, or charge, is the result of an object's capacity to be moved in an electric field. An interfacial potential is thus defined as a charge located at the common boundary between two phases (for example, an amino acid such as glutamate on the surface of a protein can have its side chain carboxylic acid deprotonated in environments with pH greater than 4.1 to produce a charged amino acid at the surface, which would create an interfacial potential). Interfacial potential is responsible for the formation of the electric double layer, which has a broad range of applications in what is termed electrokinetic phenomena. The development of the theory of the electric double layer is described below.
Helmholtz
The model dubbed the 'electric double layer' was first introduced by Hermann von Helmholtz. It assumes that a solution is only composed of electrolytes, no reactions occur near the electrode which could transfer electrons, and that the only Van der Waals interactions are present between the ions in solution and the electrode. These interactions arise only due to the charge density associated with the electrode which arises from either an excess or deficiency of electrons at the electrode's surface. To maintain electrical neutrality the charge of the electrode will be balanced by a redistribution of ions close to its surface. The attracted ions thus form a layer balancing the electrode's charge. The closest distance an ion can come to the electrode will be limited to the radius of the ion plus a single solvation sphere around an individual ion. Overall, two layers of charge and a potential drop from the electrode to the edge of the outer layer (outer Helmholtz Plane) are observed.
Given the above description, the Helmholtz model is equivalent in nature to an electrical capacitor with two separated plates of charge, for which a linear potential drop is observed at increasing distance from the plates.
The Helmholtz model, while a good foundation for the description of the interface does not take into account several important factors: diffusion/mixing in solution, the possibility of adsorption on to the surface and the interaction between solvent dipole moments and the electrode.
Gouy-Chapman
Gouy-Chapman theory describes the effect of a static surface charge on a surface's potential. "Gouy suggested that interfacial potential at the charged surface could be attributed to the presence of a number of ions of given charge attached to its surface, and to an equal number of ions of opposite charge in the solution." A positive surface charge will form a double layer, since negative ions in solution tend to balance the positive surface charge. Counter ions are not rigidly held, but tend to diffuse into the liquid phase until the counter potential set up by their departure restricts this tendency. The kinetic energy of the counter ions will, in part, affect the thickness of the resulting diffuse double layer. The relation between C, the counter ion concentration at the surface, and C_0, the counter ion concentration in the external solution, is the Boltzmann factor:

C = C_0 \exp\left(\frac{-ze\psi}{k_B T}\right)

where z is the charge on the ion, e is the charge of a proton, k_B is the Boltzmann constant, T is the absolute temperature, and ψ is the potential of the charged surface.
This however is inaccurate close to the surface, because it assumes that molar concentration is equal to activity. It also assumes that ions were modeled as point charges and was later modified. An improvement of this theory, known as the modified Gouy-Chapman theory, included the finite size of the ions with respect to their interaction with the surface in the form of a plane of closest approach.
Surface potential
The relation between surface charge and surface potential can be expressed by the Grahame equation, derived from the Gouy-Chapman theory by assuming the electroneutrality condition, which states that the total charge of the double layer must be equal to the negative of the surface charge. Using the one-dimensional Poisson equation and assuming that, at an infinitely great distance, the potential gradient is equal to 0, the Grahame equation is obtained:

\sigma = \sqrt{8 c_0 N_A \varepsilon \varepsilon_0 k_B T}\,\sinh\left(\frac{z e \psi_0}{2 k_B T}\right)

where c_0 is the bulk concentration of the electrolyte, N_A Avogadro's constant, and T the absolute temperature.

For the case of lower potentials, \sinh\left(\frac{ze\psi_0}{2k_B T}\right) can be expanded to \frac{ze\psi_0}{2k_B T}, and \lambda_D = \sqrt{\frac{\varepsilon\varepsilon_0 k_B T}{2 N_A c_0 z^2 e^2}} is defined as the Debye length, which leads to the simple expression:

\sigma = \frac{\varepsilon \varepsilon_0 \psi_0}{\lambda_D}
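A small numerical sketch of these two expressions, assuming a symmetric monovalent electrolyte in water at 25 °C with a relative permittivity of about 78.5 and an illustrative surface potential of 50 mV (all assumed values):

```python
import numpy as np

# Physical constants (SI units)
e = 1.602176634e-19        # elementary charge, C
k_B = 1.380649e-23         # Boltzmann constant, J/K
N_A = 6.02214076e23        # Avogadro constant, 1/mol
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

def debye_length(c_molar, T=298.15, eps_r=78.5, z=1):
    """Debye length (m) for a symmetric z:z electrolyte of molar concentration c."""
    n = c_molar * 1000.0 * N_A          # mol/L -> ions per m^3
    return np.sqrt(eps_r * eps0 * k_B * T / (2.0 * n * (z * e) ** 2))

def grahame_sigma(psi0, c_molar, T=298.15, eps_r=78.5, z=1):
    """Surface charge density (C/m^2) from the Grahame equation."""
    n = c_molar * 1000.0 * N_A
    return np.sqrt(8.0 * n * eps_r * eps0 * k_B * T) * np.sinh(z * e * psi0 / (2.0 * k_B * T))

# Example: 10 mM 1:1 salt, 50 mV surface potential.
print(debye_length(0.01))          # ~3e-9 m (a few nanometres)
print(grahame_sigma(0.050, 0.01))  # surface charge density in C/m^2
```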
Stern
The Otto Stern model of the double layer is essentially a combination of Helmholtz and Gouy-Chapman theories. His theory states that ions do have finite size, so cannot approach the surface closer than a few nanometers. Through a distance known as the Stern Layer, ions can be adsorbed onto the surface up to a point referred to as the slipping plane, where the ions adsorbed meet the bulk liquid. At the slipping plane the potential Ψ has decreased to what is known as the zeta potential. Although zeta potential is an intermediate value, it is sometimes considered to be more significant than surface potential as far as electrostatic repulsion is concerned.
Applications
Charged surfaces are extremely important and are used in many applications. For example, dispersions of large colloidal particles depend almost entirely on repulsion due to surface charge in order to stay dispersed. If these repulsive forces were to be disrupted, perhaps by the addition of a salt or a polymer, the colloidal particles would no longer be able to remain suspended and would subsequently flocculate.
Electrokinetic phenomena
Electrokinetic phenomena refer to a variety of effects resulting from an electrical double layer. A noteworthy example is electrophoresis, in which a charged particle suspended in a medium moves as a result of an applied electric field. Electrophoresis is widely used in biochemistry to separate molecules, such as proteins, by size and charge. Other examples include electro-osmosis, sedimentation potential, and streaming potential.
Proteins
Proteins often have groups present on their surfaces that can be ionized or deionized depending on pH, making it relatively easy to change the surface charge of a protein. This has particularly important ramifications for the activity of proteins that function as enzymes or membrane channels, namely, that the protein's active site must have the right surface charge in order to bind a specific substrate.
Adhesives/coatings
Charged surfaces are often useful in creating surfaces that will not adsorb certain molecules (for example, in order to prevent the adsorption of basic proteins, a positively charged surface should be used). Polymers are very useful in this respect in that they can be functionalized so that they contain ionizable groups, which serve to provide a surface charge when submerged in an aqueous solution.
References
Electric charge | Surface charge | [
"Physics",
"Mathematics"
] | 2,211 | [
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities",
"Electric charge"
] |
2,228,872 | https://en.wikipedia.org/wiki/Fileset | In computing, a fileset is a set of computer files linked by a defining property or common characteristic. There are different types of fileset, though the context will usually give the defining characteristic. Sometimes it is necessary to explicitly state the fileset type to avoid ambiguity; an example is the Emacs editor, which explicitly mentions its Version Control (VC) fileset type to distinguish it from its "named files" fileset type.
Fileset types
While there is no formal classification of fileset types, some common usage cases do emerge:
A fileset type where the set of files in the fileset is simply enumerated or selected, for example in the way named filesets are constructed in Emacs.
A fileset type consisting of the files included in a software installation package, as used in both the AIX operating system installation packaging system and the HP-UX packaging system.
For fileset types relating to filesystems there may be a relationship to directories, as in the Namespace Database (NSDB) Protocol for Federated File Systems.
In code, some libraries may define a fileset object type, typically under a case-specific name such as Fileset or FileSet, which is used to hold an object that references a set of files (a sketch of such an object is given below).
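As an illustration only, a minimal Python sketch of such an object might look like the following; the class name, constructor arguments, and glob-based selection are hypothetical and do not correspond to the API of any particular library.

from pathlib import Path

class FileSet:
    """Hypothetical fileset object: a set of files selected from a base
    directory by glob patterns (illustrative only; not any library's API)."""

    def __init__(self, base_dir, patterns):
        self.base_dir = Path(base_dir)
        self.patterns = list(patterns)

    def files(self):
        """Return the set of file paths currently matching the patterns."""
        matched = set()
        for pattern in self.patterns:
            matched.update(p for p in self.base_dir.glob(pattern) if p.is_file())
        return matched

# Example: treat a project's C sources and headers as one unit.
sources = FileSet("src", ["**/*.c", "**/*.h"])
# sorted(sources.files()) would list the matching files, if the directory exists.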
Specific examples
Fileset has several meanings and usages, depending on the context. Some examples are:
In the AIX operating system installation packaging system it is the smallest individually installable unit (a collection of files that provides a specific function).
DCE/DFS uses the term fileset to define a tree containing directories, files, and mount points (links to other DFS filesets). A DFS fileset is also a unit of administrative control. Properties such as data location, storage quota, and replication are controlled at this level of granularity. The concept of a fileset in DFS is essentially identical to the concept of a volume in AFS. The glamor filesystem uses the same concept of filesets. Filesets are lightweight components compared to file systems, so management of a fileset is easier.
In IBM GPFS, a fileset represents a set of files within a file system which have an independent inode space.
References
Computing terminology | Fileset | [
"Technology"
] | 444 | [
"Computing terminology"
] |
2,229,095 | https://en.wikipedia.org/wiki/Dimension%20stone | Dimension stone is natural stone or rock that has been selected and finished (e.g., trimmed, cut, drilled or ground) to specific sizes or shapes. Color, texture and pattern, and surface finish of the stone are also normal requirements. Another important selection criterion is durability: the time measure of the ability of dimension stone to endure and to maintain its essential and distinctive characteristics of strength, resistance to decay, and appearance.
Quarries that produce dimension stone or crushed stone (used as construction aggregate) are interconvertible. Since most quarries can produce either one, a crushed stone quarry can be converted to dimension stone production. However, first the stone shattered by heavy and indiscriminate blasting must be removed. Dimension stone is separated by more precise and delicate techniques, such as diamond wire saws, diamond belt saws, burners (jet-piercers), or light and selective blasting with Primacord, a weak explosive.
Stone and rock types
A variety of igneous, metamorphic, and sedimentary rocks are used as structural and decorative dimension stone. These rock types are more commonly known as granite, limestone, marble, travertine, quartz-based stone (sandstone, quartzite) and slate. Other varieties of dimension stone that are normally considered to be special minor types include alabaster (massive gypsum), soapstone (massive talc), serpentine and various products fashioned from natural stone.
A variety of finishes can be applied to dimension stone to achieve diverse architectural and aesthetic effects. These finishes include, but are not limited to, the following. A polished finish gives the surface a high luster and strong reflection of incident light (almost mirror-like). A honed finish provides a smooth, satin-like ("eggshell"), nonreflective surface. More textured finishes include bush-hammered, sandblasted, and thermal. A bush-hammered finish, similar to a houndstooth pattern, creates a rough, but uniformly patterned surface with impact tools varying in coarseness. A sandblasted surface provides an irregular pitted surface by impacting sand or metal particles at high velocity against a stone surface. A thermal (or flamed) finish produces a textured, nonreflective surface with only a few reflections from cleavage faces, by applying a high-temperature flame. This finish may change the natural color of the stone depending on mineralogical composition, particularly with stones containing higher levels of iron.
The most easily accessible general (non-graphic) references are the latest Minerals Yearbook Chapter (production and foreign trade, with statistics), and the latest (Issue 31) Dimension Stone Advocate News (new "building green" developments and demand statistics); see below. The most comprehensive, graphic references are Natural Stone Database by Abraxas Verlag (www.natural-stone-database.com), "Dimension Stones of the World, Volumes I & II" (Marble Institute of America) and "Natural Stones Worldwide CD" .
Major applications
While common colors used in some of the major applications are listed below, there is an extraordinarily wide range of colors, available in thousands of patterns. These patterns are created by geological phenomena such as mineral grains, inclusions, veins, cavity fillings, blebs, and streaks. In addition, rocks and stones not normally classed as dimension stone are sometimes selected for these applications. These can include jade, agate, and jasper.
Stone (usually granite) countertops and bathroom vanities both involve a finished slab of stone, usually polished but sometimes with another finish (such as honed or sandblasted). Industry standard thicknesses in the United States are . Often 19 mm slabs will be laminated at the edge to create the appearance of a thicker edge profile. The slabs are cut to fit the top of the kitchen or bathroom cabinet, by measuring, templating or digital templating. Countertop slabs are commonly sawn from rough blocks of stone by reciprocating gangsaws using steel shot as abrasive. More modern technology utilizes diamond wire saws which use less water and energy. Multi-wire saws with as many as 60 wires can slab a block in less than two hours. The slabs are finished (i.e., polished, honed), then sealed with resin to fill micro-fissures and surface imperfections typically due to the loss of poorly bonded elements such as biotite. The fabricators' shop cuts these slabs down to final size and finishes the edges with equipment such as hand-held routers, grinders, CNC equipment, or polishers. In 2008, concerns were raised regarding radon emissions from granite countertops; the National Safety Council states that the contributions of radon to inside air come from the soil and rock around the residence (69%), the outdoor air and the water supply (28%), and only 2.5% from all building materials, including granite countertops. A concerned homeowner can employ ASTM radon mitigation and removal techniques. The stone for countertops or vanities is usually granite, but often is marble (especially for vanity tops), and is sometimes limestone or slate. The majority of the stone for this application is produced in Brazil, Italy, and China.
Tile is a thin modular stone unit, commonly square and deep. Other popular sizes are square, square, and square; these will usually be deeper than the 12-inch square. The majority of tile has a polished finish, but other finishes such as honed are becoming more common. Almost all stone tile is mass-produced by automated tile lines to identical size, finish, and close tolerances. Exceptions include slate flooring tile and special orders: tile with odd sizes or shapes, unusual finishes, or inlay work. In summary, the automated tile line is a complicated complex of cutting and calibrating machines, honing-polishing machines, edging machines that put on flat or rounded edges, and interconnecting conveyors to move the stone from the slab input to the final tile product. The stone for tiles is most commonly marble, but often is granite, and sometimes limestone, slate, or quartz-based stone. Common colors are white and light earth colors. Much of the stone for this application is produced in Italy and China.
Stone monuments include tombstones, grave markers, and mausoleums. After being gangsawed into big deep (up to wide and over deep) slabs, smaller saws or guillotines (they break the granite and make the rough edges commonly seen on monuments) shape the monuments. The fronts and backs are usually polished. The individual monuments are then carved, shaped, and further defined by hand tools and sandblasting equipment. At this time, the stone for monuments is most commonly granite, sometimes marble (as in military cemeteries), and rarely others. Granite and quartz both demonstrate good durability, especially because rain is naturally acidic. (This is a natural consequence of the carbon dioxide present in the atmosphere, which generates a weak solution of carbonic acid in rainfall; further acidification of rainfall arises from oxides of sulphur and nitrogen due to anthropogenic emissions). (Limestone and sandstone were commonly chosen for monuments in the nineteenth century, but they are no longer widely used because of the rapid erosion rates due to dissolution of acid-vulnerable carbonates by acidic rainfall.) The most common monument colors for granite are gray, black, and mahogany; for marble, white is most popular. Today, the majority of the stone used in North America in this application is imported from countries such as India and China. This has depressed traditional North American monument centers such as Georgia and Quebec.
There are a number of smaller applications for buildings and traffic-related uses. Building components include stone used as veneer, a non load-bearing facing of stone attached to a backing of an ornamental nature, although it also protects and insulates; and ashlar, a squared block of stone, often brick-sized, for facing of walls (primarily exterior). Other shapes include rectangular blocks used for stair treads, sills, and coping (coping is sometimes nonrectangular). The shapes subject to foot traffic will usually have an abrasive finish such as honed or sandblasted. The stone is mostly limestone, but often is quartz-based stone (sandstone), or even marble or granite. Roofing slate is a thin-split shingle-sized piece of slate, and when in place forms the most permanent kind of roof; slate is also used as countertops and flooring tile. Traffic-related stone is that which is used for curbing (vehicular) and flagstone (pedestrian). Curbing is thin stone slabs used along streets or highways to maintain the integrity of sidewalks and borders. Flagstone is a shallow naturally irregular-edged slab of stone, sometimes sawed into a rectangular shape, used as paving (almost always pedestrian). For curbing, the stone is almost always granite, and for flagstone the stone is almost always quartz-based stone (sandstone or quartzite).
There are several other applications resembling flagstone in using rough dimension (or crushed) stone, usually as quarried, sometimes made smaller (i.e. by a jackhammer), often simply put in place: dry stone and riprap.
The stone used in these applications usually has to have certain properties, or meet a standard specification. The American Society for Testing and Materials (ASTM) has such specifications for granite, marble, limestone, quartz-based dimension stone (C616), slate (C629), travertine (C1527), and serpentine (C1526).
Production
The major producers of dimension stone include Brazil, China, India, Italy, and Spain, each with annual production levels of nine to over twenty-two million tons. Portugal produces 3 million tons of dimension stone each year.
According to the USGS, 2007 U.S. dimension stone production was 1.39 million tons valued at $275 million, compared to 1.33 million tons (revised) valued at $265 million in 2006. Of these, granite production was 453,000 tons valued at $106 million in 2007 and 428,000 tons valued at $105 million in 2006, and limestone was 493,000 tons valued at $93.3 million in 2007 and 559,000 tons valued at $96.1 million in 2006. The United States is at best a mid-level dimension stone producer on the world scene; Portugal produces twice as much dimension stone annually.
World comparison for dimension stone demand: The DSAN World Demand Index for (finished) Granite was 227 in 2006, 247 in 2007, and 249 in 2008, and the World Demand Index for (finished) Marble was 200 in 2006, 248 in 2007, and 272 in 2008. The DSAN World Demand for (finished) Granite Index showed a growth of 12% annually for the 2000-2008 period, compared to 14% annually for the 2000-2007 period, and compared to 15% annually for the 2000-2006 period. The DSAN World Demand for (finished) Marble Index showed a growth of 13.5% annually for the 2000-2008 period, compared to 14.0% annually for the 2000-2007 period, and compared to 12.5% annually for the 2000-2006 period. The indexes show world demand for granite has clearly been weakening since 2006, while the world demand for marble only weakened from 2007 to 2008. Other DSAN indexes for 2008 indicate that the 2000-2008 growth was down from the 2000-2007 growth.
The DSAN U.S. Ceramic Tile Demand Index shows a drop of 4.8% annually for the 2000-2007 period, compared to growth of 5.0% annually for the 2000-2006 period. The "traditional" major ceramic tile suppliers, Italy and Spain, have been losing markets to new entrants Brazil and China. The same thing has been happening with dimension stone with increasing supplies from Brazil, China and India.
In 2008, Chinese exports of granite countertops and marble tile increased from 2007, while those of Italy and Spain did not (see above, world demand).
"Building green" with dimension stone
The concept of environmentally friendly construction with natural materials, known as green building, has had advocates since before the early 1990s. Energy price increases and the need for energy conservation when heating or cooling buildings since the 1980s have meant that associated design questions are pertinent to the architectural, construction, and civil engineering industries. This resulted in the formation in 1993 of the U.S. Green Building Council (USGBC), which has developed the building rating system, Leadership in Energy and Environmental Design (LEED). Educational institutions (colleges, universities, grade schools, and high schools) often require new buildings to be green, and some jurisdictions have rules promoting green building. When building with these goals in mind, dimension stone has an advantage over steel, concrete, glazed glass and laminated plastics, whose production is energy intensive and creates significant air and water pollution. As an entirely natural product, dimension stone also has an advantage over synthetic/artificial stone products, as well as composite and space-age materials.
One LEED requirement provides that the dimension stone used in a green building be quarried within a radius of the building being constructed. This gives a clear advantage to domestic dimension stone.
When demolishing a structure, dimension stone is 100% reusable and can be salvaged for new construction, used as paving or crushed for use as aggregates. There are also stone cleaning methods with less environmental impact, either in development or already in use, such as removing the black gypsum crusts that form on marble and limestone by applying sulfate-reducing bacteria to the crust to gasify it, breaking up the crust for easy removal. See DSAN for updates on "building green" and dimension stone recycling.
The Natural Stone Council has a library of information on building green with dimension stone, including life-cycle inventory data for each major dimension stone, giving the amount of energy, water, other inputs, and processing emissions, plus some best practice studies. In addition, it has shown ways that dimension stone can contribute LEED points, such as using a light-colored dimension stone to reduce heat-island effects, using dimension stone's thermal mass to moderate indoor ambient air temperature and thereby increase energy efficiency, and especially by reusing dimension stone rather than sending it to the landfill.
Sustainability
Dimension stone is one of the most sustainable of the industrial minerals since it is created by separating it from the natural bedrock underlying all land on every continent. Dimension stone rates very well in terms of the criteria on the ASTM checklist for sustainability of building products: there are no toxic materials used in its processing, there are no direct greenhouse gas emissions during processing, the dust created is controlled, the water used is almost completely recycled (per OSHA/MSHA regulation), and it is a perpetual resource (virtually inexhaustible in a human time scale). Dimension stone in use can last many generations, even centuries, so the dimension stone manufacturers have not needed a product recycling program. However, there are practical qualifications to and constraints on that sustainability. The dimension stone color and pattern can be changed by weathering when it is very near the surface. The color and pattern can also be changed by proximity to an igneous rock body or by the presence of circulating groundwater charged with carbon dioxide (i.e., limestone, travertine, marble). On the other hand, changes in color and/or pattern can be positive. For example, there are at least 14 separately trade-named varieties of Carrara Marble with many patterns (or no patterns) ranging in shade from white to gray. The presence of faults or closely spaced joints can render the stone unusable. These faults and joints do not have to be at odd angles in the stone mass. Closely spaced, wrongly spaced, or nonparallel bedding planes can make the stone unusable, particularly if the bedding planes are planes of weakness. If part of the stone in one area is unusable, there will be another usable part of the stone elsewhere in the formation. A quarry is not a short-term project unless it encounters one of these constraints. Examples of big, old quarries operating for more than a century include the Barre (VT) granite quarry, the Georgia Marble quarry at Tate, several of the Carrara (Italy) marble quarries, and the Penrhyn (Wales) slate quarry. A quarry will produce dust, noise, and some water pollution, but these can be remedied without too much trouble. The landscape may also have to be restored if quarry waste is temporarily or permanently placed on adjacent land.
Recycling and reuse
Recycling dimension stone can occur when structures are demolished, along with recycling timber and recycling construction aggregate in the form of concrete. The material most likely to be recycled is concrete, and this represents the largest volume of recycled construction material. Not too many structures incorporate dimension stone, and even fewer of them have dimension stone worth saving. Stone recycling is usually done by specialists that monitor local demolition activity, looking for stone-containing houses, buildings, bridge abutments, and other dimension stone structures scheduled for demolition. Particularly treasured are old hand-carved stone pieces with the chisel marks still on them, local stones no longer quarried or that are quarried in a different shade of color or appearance. There is no national or regional trade in reclaimed stone, so a large storage yard is required, since the recovered stone may not be quickly sold and reused. The recycled dimension stone is used in old stone buildings being renovated (to replace deteriorated stone pieces), in fireplace mantels, benches, veneer, or for landscaping (like for retaining walls).
Related to stone recycling and stone reuse is the deconstruction and reconstruction of a stone building. The building is taken apart stone block by stone block and the location and orientation of each block is carefully noted. Any roofing slate and interior stone in place is catalogued and moved in the same fashion. After the blocks, slate, and other stone used have been transported to the new location, they are put back in place where and how they were originally, thus reassembling the building. This is typically very expensive and rare but valuable in terms of historic preservation.
Dimension stone is also reused. Buildings immediately spring to mind, but such things as the ornate stone walls, arches, stairways and balustrades alongside a boulevard can also be renovated and reused. Sometimes the old interior of the building is kept as is, after repair. Sometimes the old building is gutted, leaving only a shell or facade and the space inside reconfigured and modernized. The stone work will usually need attention too.
The old stone work may only need cleaning or sandblasting, but it may need more. Firstly, the building exterior (facade) needs to be inspected for unsafe conditions. Next, the building walls need to be inspected for water leakages. The most likely needs are mortar restoration (repointing), applying consolidants to the old stone, or replacing pieces of stone that are deteriorated (damaged) beyond the point of any repair. The repointing is the removal of existing damaged mortar from the outer portion of the joint between stone units and its replacement by new mortar matching the appearance of the old. The consolidants re-establish the original natural bonding between the stone particles that weathering has removed. Deteriorated pieces of stone work are replaced with pieces of stone that match the original as much as possible. Exterior dimension stone will often change color after exposure to weather over time. For example, Indiana Limestone will weather from a tan to an attractive light yellow. Interior dimension stone can sometimes change its shade a little over time too. For both, it may not be possible to find an exact match, even from the original quarry. Stone will often change its appearance from location to location in the same quarry. If the dimension stone renovationist is truly fortunate, the original builder put aside some spare pieces of the stone for future need.
Life-cycle assessment and best practices
As in every economic sector, the construction industry's purchases of materials and services create a whole chain of processes, from raw material selection in situ, to removal from the earth, usually proceeding to cutting, finishing, or processing/manufacturing, then transport, and retailing. All of these activities have significant upstream (off-site) environmental impacts, whether in terms of energy and raw resource use or in terms of emissions to air, land, or water that affect living organisms or the Earth's surface (non-organic). Life cycle assessment is a method for estimating and comparing a range of environmental performance measures (e.g. global warming, acidification potential, toxicity, ozone depletion potentials) over the full life cycle of a product, a building assembly, or a whole building. As such, it provides a comprehensive means for evaluating and comparing products rather than prescriptive measures of individual product characteristics.
The ASTM has some relevant standards, particularly a guide on environmental life cycle assessment of building materials/products (E1991) that shows how to minimize the subjectivity that commonly mars and confuses environmental decision making. In particular, this guide describes the inventory analysis phase that requires data that is suitable for its intended purpose, thus covering data quality (such as completeness, reliability, accuracy, and credibility) as well as the allocation of the data (for multiple inputs and outputs), among other things. Results have to be on a common basis to allow a statistically significant comparison of alternative building product differences in the interpretation.
The Natural Stone Council (NSC) has commissioned some life-cycle inventory data for use in life cycle assessments. Almost 90% of the effort in doing a life cycle assessment involves getting reliable data. For example, the NSC has data that the Global warming potential for granite quarrying is 100 kg of carbon dioxide equivalents and for granite processing is 500 (same units); and the Global Warming Potential for limestone quarrying is 20 kg carbon dioxide equivalents while for limestone processing it is 80 (same units). The data on energy and water use include everything back to removal of overburden in the dimension stone quarry and upstream production of energy and fuels, and forward to packaging of finished dimension stone product or slabs for shipment and transport, or to moving scrap stone to storage or reclamation and to capturing and treatment of dust and waste water. The data is then placed in an impact category (i.e. changes to air, changes to water), characterized as to the contribution of the item to the impact compared to other items, and then the impact categories are assigned weights among themselves to show their relative importance.
The Natural Stone Council has also commissioned four Best Practices. One is on water consumption, treatment, and reuse while extracting and processing dimension stone, including dust mitigation, sludge management, and maximizing water recycling. Another is on site maintenance and quarry closure, including minimizing dust, noise, vibration and keeping the operation clean and tidy, both of which help in restoring the surface upon quarry closure. A third one is on solid waste management, including overburden, damaged stone unsaleable as product, sludge deposited from waste water, spent or spilled petroleum products, or metal scrap. The fourth one is on efficiently transporting stone to be finished as products, then transporting the products to consumers by centralizing freight management, consolidating small loads, choosing appropriate trucks, balancing and securing the load, and packaging with sustainable materials.
Selection and cleaning
The selector of dimension stone begins by considering stone color and appearance, and how the stone will match its surroundings. In addition to many hundreds of different stones with different colors and patterns, each stone can change radically in color and appearance when a different finish is put on it. A polished finish accentuates the color and makes any pattern more vivid, and the rougher finishes (i.e. honed, thermal) lighten the color and make the patterns more subdued.
In addition to selecting a stone color and pattern, the suitability of its properties for the intended use must be considered. Stone being chosen for countertops or vanities should be nonabsorptive, resist stains, and be heat and impact resistant. Stone being used in tiles should be sealed in order to resist staining by spilled liquids. Stone being used for flooring, paving, or surfaces subject to foot or vehicular traffic ought to have a semiabrasive finish for slip resistance, such as bush-hammered or thermal. A glossy polished finish will be slick. Most flagstone surfaces are rough enough to be naturally slip-resistant.
Dimension stone requires some specialized methods for cleaning and maintenance. Abrasive cleaners should not be used on a polished stone finish because they will wear the polish off. Acidic cleaners cannot be used on marble or limestone because they will remove (i.e. dissolve) the finish. Textured finishes (thermal, bush-hammered) can be treated with some mildly abrasive cleaners but not bleach or an acidic cleaner (if marble or limestone). Stains are another consideration; stains can be organic (food, grease, or oil) or metallic (iron, copper). Stains require some special removal techniques, such as the poultice method. A new method of cleaning stone on ancient buildings (medieval and renaissance) has been developed in Europe: sulfur-reducing bacteria are used on the black gypsum-containing crusts that form on such buildings to convert the sulfur to a gas that dissipates, thus destroying the crust while leaving the patina produced by aging on the underlying stone. This method is still in development and not yet commercially available.
Finishes
The surface of a stone may be finished in a variety of ways. Below are some typical terms:
Polished finish - a glossy surface which brings out the full color and character of the stone.
Honed finish - a satin smooth surface finish with little or no gloss, frequently used for commercial floors.
Brushed finish - a smooth, undulating finish achieved by applying stiff metal or plastic brushes on spinning heads.
Thermal finish - a surface treatment applied by intense heat flaming.
Diamond sawn - finish produced by sawing with a diamond toothed saw.
Rough sawn - a surface finish resulting from the gang sawing (or frame saw) process.
Bush-hammered - a mechanical process which produces a textured surface by hammering the stone surface with a textured metal or composite head. Texture varies from subtle to rough.
See also
Categories:
Building stone
Stonemasonry
Notes
References
USGS 2007 Minerals Yearbook: Stone, Dimension
Dimension Stone Advocate News (DSAN) Issue 31
Dimension Stone Statistics and Information - United States Geological Survey minerals information for dimension stone
Building materials
Pavements
Recycled building materials
Industrial minerals | Dimension stone | [
"Physics",
"Engineering"
] | 5,515 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
2,229,292 | https://en.wikipedia.org/wiki/Stirling%20numbers%20of%20the%20second%20kind | In mathematics, particularly in combinatorics, a Stirling number of the second kind (or Stirling partition number) is the number of ways to partition a set of n objects into k non-empty subsets and is denoted by or . Stirling numbers of the second kind occur in the field of mathematics called combinatorics and the study of partitions. They are named after James Stirling.
The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the second kind. Identities linking the two kinds appear in the article on Stirling numbers.
Definition
The Stirling numbers of the second kind, written in brace notation, as S(n, k), or with other notations, count the number of ways to partition a set of n labelled objects into k nonempty unlabelled subsets. Equivalently, they count the number of different equivalence relations with precisely k equivalence classes that can be defined on an n-element set. In fact, there is a bijection between the set of partitions and the set of equivalence relations on a given set. Obviously,
S(n, n) = 1 for n ≥ 0, and S(n, 1) = 1 for n ≥ 1,
as the only way to partition an n-element set into n parts is to put each element of the set into its own part, and the only way to partition a nonempty set into one part is to put all of the elements in the same part. Unlike Stirling numbers of the first kind, they can be calculated using a one-sum formula (given below under Explicit formula).
The Stirling numbers of the second kind may be characterized as the numbers that arise when one expresses powers of an indeterminate x in terms of the falling factorials
(In particular, (x)0 = 1 because it is an empty product.)
Stirling numbers of the second kind satisfy the relation
Notation
Various notations have been used for Stirling numbers of the second kind. The brace notation was used by Imanuel Marx and Antonio Salmeri in 1962 for variants of these numbers (Antonio Salmeri, Introduzione alla teoria dei coefficienti fattoriali, Giornale di Matematiche di Battaglini 90 (1962), pp. 44–54). This led Knuth to use it, as shown here, in the first volume of The Art of Computer Programming (1968) (Donald E. Knuth, Fundamental Algorithms, Reading, Mass.: Addison–Wesley, 1968). According to the third edition of The Art of Computer Programming, this notation was also used earlier by Jovan Karamata in 1935 (Jovan Karamata, Théorèmes sur la sommabilité exponentielle et d'autres sommabilités s'y rattachant, Mathematica (Cluj) 9 (1935), pp. 164–178). The notation S(n, k) was used by Richard Stanley in his book Enumerative Combinatorics and also, much earlier, by many other writers.
The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources.
Relation to Bell numbers
Since the Stirling number counts set partitions of an n-element set into k parts, the sum
over all values of k is the total number of partitions of a set with n members. This number is known as the nth Bell number.
Analogously, the ordered Bell numbers can be computed from the Stirling numbers of the second kind via
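In the S(n, k) notation introduced above, these two sums are commonly written as:

B_n = \sum_{k=0}^{n} S(n, k), \qquad a(n) = \sum_{k=0}^{n} k!\, S(n, k)

where B_n is the nth Bell number and a(n) the nth ordered Bell number.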
Table of values
Below is a triangular array of values for the Stirling numbers of the second kind :
As with the binomial coefficients, this table could be extended to k > n, but the entries would all be 0.
Properties
Recurrence relation
Stirling numbers of the second kind obey the recurrence relation
with initial conditions
For instance, the number 25 in column k = 3 and row n = 5 is given by 25 = 7 + (3×6), where 7 is the number above and to the left of 25, 6 is the number above 25 and 3 is the column containing the 6.
To prove this recurrence, observe that a partition of the n + 1 objects into k nonempty subsets either contains the (n + 1)-th object as a singleton or it does not. The number of ways that the singleton is one of the subsets is given by S(n, k − 1),
since we must partition the remaining n objects into the available k − 1 subsets. In the other case the (n + 1)-th object belongs to a subset containing other objects. The number of ways is given by k S(n, k),
since we partition all objects other than the (n + 1)-th into k subsets, and then we are left with k choices for inserting object n + 1. Summing these two values gives the desired result.
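A minimal Python sketch of this recurrence (illustrative, using the standard initial conditions S(0, 0) = 1 and S(n, 0) = S(0, k) = 0 for n, k > 0) reproduces the numbers in the worked example above.

def stirling2(n, k):
    """Stirling number of the second kind S(n, k) via the recurrence
    S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(4, 2), stirling2(4, 3), stirling2(5, 3))  # 7 6 25, so 25 = 7 + 3*6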
Another recurrence relation is given by
which follows from evaluating at .
Simple identities
Some simple identities include
This is because dividing n elements into n − 1 sets necessarily means dividing it into one set of size 2 and n − 2 sets of size 1. Therefore we need only pick those two elements;
and
To see this, first note that there are 2^n ordered pairs of complementary subsets A and B. In one case, A is empty, and in another B is empty, so 2^n − 2 ordered pairs of subsets remain. Finally, since we want unordered pairs rather than ordered pairs we divide this last number by 2, giving the result above.
Another explicit expansion of the recurrence-relation gives identities in the spirit of the above example.
Identities
The table in section 6.1 of Concrete Mathematics provides a plethora of generalized forms of finite sums involving the Stirling numbers. Several particular finite sums relevant to this article include
Explicit formula
The Stirling numbers of the second kind are given by the explicit formula:
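A standard way to state this explicit formula, in the S(n, k) notation, is:

S(n, k) = \frac{1}{k!} \sum_{j=0}^{k} (-1)^{j} \binom{k}{j} (k - j)^{n}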
This can be derived by using inclusion-exclusion to count the surjections from n to k and using the fact that the number of such surjections is k! S(n, k).
Additionally, this formula is a special case of the kth forward difference of the monomial x^n evaluated at x = 0:
Because the Bernoulli polynomials may be written in terms of these forward differences, one immediately obtains a relation in the Bernoulli numbers:
The evaluation of incomplete exponential Bell polynomial Bn,k(x1,x2,...) on the sequence of ones equals a Stirling number of the second kind:
Another explicit formula given in the NIST Handbook of Mathematical Functions is
Parity
The parity of a Stirling number of the second kind is the same as the parity of a related binomial coefficient:
where
This relation is specified by mapping n and k coordinates onto the Sierpiński triangle.
More directly, let two sets contain positions of 1's in binary representations of results of respective expressions:
One can mimic a bitwise AND operation by intersecting these two sets:
to obtain the parity of a Stirling number of the second kind in O(1) time. In pseudocode:
where is the Iverson bracket.
The parity of a central Stirling number of the second kind is odd if and only if is a fibbinary number, a number whose binary representation has no two consecutive 1s.
Generating functions
For a fixed integer n, the ordinary generating function for Stirling numbers of the second kind is given by
where are Touchard polynomials. If one sums the Stirling numbers against the falling factorial instead, one can show the following identities, among others:
and
which has special case
For a fixed integer k, the Stirling numbers of the second kind have rational ordinary generating function
and have an exponential generating function given by
A mixed bivariate generating function for the Stirling numbers of the second kind is
Lower and upper bounds
If and , then
Asymptotic approximation
For fixed value of the asymptotic value of the Stirling numbers of the second kind as is given by
If (where o denotes the little o notation) then
A uniformly valid approximation also exists: for all such that , one has
where , and is the unique solution to . Relative error is bounded by about .
Unimodality
For fixed , is unimodal, that is, the sequence increases and then decreases. The maximum is attained for at most two consecutive values of k. That is, there is an integer such that
Looking at the table of values above, the first few values for are
When is large
and the maximum value of the Stirling number can be approximated with
Applications
Moments of the Poisson distribution
If X is a random variable with a Poisson distribution with expected value λ, then its n-th moment is
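The standard form of this moment identity, in the S(n, k) notation used above, is:

E[X^n] = \sum_{k=0}^{n} S(n, k)\, \lambda^{k}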
In particular, the nth moment of the Poisson distribution with expected value 1 is precisely the number of partitions of a set of size n, i.e., it is the nth Bell number (this fact is Dobiński's formula).
Moments of fixed points of random permutations
Let the random variable X be the number of fixed points of a uniformly distributed random permutation of a finite set of size m. Then the nth moment of X is
Note: The upper bound of summation is m, not n.
In other words, the nth moment of this probability distribution is the number of partitions of a set of size n into no more than m parts.
This is proved in the article on random permutation statistics, although the notation is a bit different.
Rhyming schemes
The Stirling numbers of the second kind can represent the total number of rhyme schemes for a poem of n lines. gives the number of possible rhyming schemes for n lines using k unique rhyming syllables. As an example, for a poem of 3 lines, there is 1 rhyme scheme using just one rhyme (aaa), 3 rhyme schemes using two rhymes (aab, aba, abb), and 1 rhyme scheme using three rhymes (abc).
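As an illustration, a short Python sketch can reproduce these counts by enumerating canonical rhyme schemes (restricted growth strings, in which each new rhyme letter is introduced in order); the function name is illustrative only.

from collections import Counter

def rhyme_schemes(n):
    """All canonical rhyme schemes of length n: each line is assigned an
    already-used rhyme or the next unused one (a restricted growth string)."""
    schemes = [[0]]
    for _ in range(n - 1):
        schemes = [s + [v] for s in schemes for v in range(max(s) + 2)]
    return schemes

letters = "abcdefghijklmnopqrstuvwxyz"
schemes = rhyme_schemes(3)
print(sorted("".join(letters[v] for v in s) for s in schemes))
# ['aaa', 'aab', 'aba', 'abb', 'abc']
print(Counter(len(set(s)) for s in schemes))
# one scheme with 1 rhyme, three with 2, one with 3, matching S(3, k)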
Variants
r-Stirling numbers of the second kind
The r-Stirling number of the second kind counts the number of partitions of a set of n objects into k non-empty disjoint subsets, such that the first r elements are in distinct subsets. These numbers satisfy the recurrence relation
Some combinatorial identities and a connection between these numbers and context-free grammars can be found in
Associated Stirling numbers of the second kind
An r-associated Stirling number of the second kind is the number of ways to partition a set of n objects into k subsets, with each subset containing at least r elements. It is denoted by and obeys the recurrence relation
The 2-associated numbers appear elsewhere as "Ward numbers" and as the magnitudes of the coefficients of Mahler polynomials.
Reduced Stirling numbers of the second kind
Denote the n objects to partition by the integers 1, 2, ..., n. Define the reduced Stirling numbers of the second kind, denoted , to be the number of ways to partition the integers 1, 2, ..., n into k nonempty subsets such that all elements in each subset have pairwise distance at least d. That is, for any integers i and j in a given subset, it is required that |i − j| ≥ d. It has been shown that these numbers satisfy
(hence the name "reduced"). Observe (both by definition and by the reduction formula), that , the familiar Stirling numbers of the second kind.
See also
Stirling number
Stirling numbers of the first kind
Bell number – the number of partitions of a set with n members
Stirling polynomials
Twelvefold way
References
.
.
Calculator for Stirling Numbers of the Second Kind
Set Partitions: Stirling Numbers
Permutations
Factorial and binomial topics
Triangles of numbers
Operations on numbers
pl:Liczby Stirlinga#Liczby Stirlinga II rodzaju | Stirling numbers of the second kind | [
"Mathematics"
] | 2,350 | [
"Functions and mappings",
"Factorial and binomial topics",
"Permutations",
"Mathematical objects",
"Combinatorics",
"Arithmetic",
"Mathematical relations",
"Triangles of numbers",
"Operations on numbers"
] |
2,229,296 | https://en.wikipedia.org/wiki/Stirling%20numbers%20of%20the%20first%20kind | In mathematics, especially in combinatorics, Stirling numbers of the first kind arise in the study of permutations. In particular, the unsigned Stirling numbers of the first kind count permutations according to their number of cycles (counting fixed points as cycles of length one).
The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the first kind. Identities linking the two kinds appear in the article on Stirling numbers.
Definitions
Definition by algebra
Signed Stirling numbers of the first kind are the coefficients s(n, k) in the expansion of the falling factorial
(x)_n = x(x − 1)(x − 2) ⋯ (x − n + 1)
into powers of the variable x:
For example, (x)_3 = x(x − 1)(x − 2) = x^3 − 3x^2 + 2x, leading to the values s(3, 3) = 1, s(3, 2) = −3, and s(3, 1) = 2.
The unsigned Stirling numbers may also be defined algebraically, as the coefficients of the powers of x in the expansion of the rising factorial x(x + 1) ⋯ (x + n − 1).
The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources; the square bracket notation is also common notation for the Gaussian coefficients.
Definition by permutation
Subsequently, it was discovered that the absolute values of these numbers are equal to the number of permutations of certain kinds. These absolute values, which are known as unsigned Stirling numbers of the first kind, are often denoted with square brackets or as c(n, k). They may be defined directly to be the number of permutations of n elements with k disjoint cycles.
For example, of the 3! = 6 permutations of three elements, there is one permutation with three cycles (the identity permutation, given in one-line notation by 123 or in cycle notation by (1)(2)(3)), three permutations with two cycles ((1 2)(3), (1 3)(2), and (2 3)(1)) and two permutations with one cycle ((1 2 3) and (1 3 2)). Thus c(3, 3) = 1, c(3, 2) = 3 and c(3, 1) = 2. These can be seen to agree with the previous algebraic calculations of s(n, k) for n = 3.
For another example, the unsigned Stirling number c(4, 2) equals 11: the symmetric group on 4 objects has 3 permutations consisting of two 2-cycles
(having 2 orbits, each of size 2),
and 8 permutations consisting of a 3-cycle and a fixed point
(having 1 orbit of size 3 and 1 orbit of size 1).
These numbers can be calculated by considering the orbits as conjugacy classes. Alfréd Rényi observed that the unsigned Stirling number of the first kind also counts the number of permutations of size with left-to-right maxima.
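Both the small counts above and Rényi's observation can be checked by brute force. The following Python sketch (illustrative only) counts the permutations of {0, ..., n − 1} by number of cycles and by number of left-to-right maxima.

from collections import Counter
from itertools import permutations

def cycle_count(p):
    """Number of cycles of a permutation p of {0, ..., n-1} in one-line notation."""
    seen, cycles = set(), 0
    for start in range(len(p)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = p[j]
    return cycles

def record_count(p):
    """Number of left-to-right maxima of p."""
    best, count = -1, 0
    for x in p:
        if x > best:
            best, count = x, count + 1
    return count

n = 4
by_cycles = Counter(cycle_count(p) for p in permutations(range(n)))
by_records = Counter(record_count(p) for p in permutations(range(n)))
print(by_cycles)   # counts 6, 11, 6, 1 for k = 1, 2, 3, 4 cycles
print(by_records)  # the same distribution, illustrating Renyi's observation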
Signs
The signs of the signed Stirling numbers of the first kind depend only on the parity of n − k.
Recurrence relation
The unsigned Stirling numbers of the first kind follow the recurrence relation
for , with the boundary conditions
for .
It follows immediately that the signed Stirling numbers of the first kind satisfy the recurrence
.
Table of values
Below is a triangular array of unsigned values for the Stirling numbers of the first kind, similar in form to Pascal's triangle. These values are easy to generate using the recurrence relation in the previous section.
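The recurrence itself is not displayed above; assuming its standard form c(n + 1, k) = n·c(n, k) + c(n, k − 1) with c(0, 0) = 1, a minimal Python sketch generates the first rows of the triangle.

def stirling1_unsigned_triangle(nmax):
    """Rows 0..nmax of unsigned Stirling numbers of the first kind c(n, k),
    built from the recurrence c(n+1, k) = n*c(n, k) + c(n, k-1)."""
    c = [[0] * (nmax + 1) for _ in range(nmax + 1)]
    c[0][0] = 1
    for n in range(nmax):
        for k in range(1, n + 2):
            c[n + 1][k] = n * c[n][k] + c[n][k - 1]
    return c

for row in stirling1_unsigned_triangle(5):
    print(row)
# row n = 4 is [0, 6, 11, 6, 1, 0]: for example c(4, 2) = 11, as in the example above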
Properties
Simple identities
Using the Kronecker delta one has,
and
if , or more generally if k > n.
Also
and
Similar relationships involving the Stirling numbers hold for the Bernoulli polynomials. Many relations for the Stirling numbers shadow similar relations on the binomial coefficients. The study of these 'shadow relationships' is termed umbral calculus and culminates in the theory of Sheffer sequences. Generalizations of the Stirling numbers of both kinds to arbitrary complex-valued inputs may be extended through the relations of these triangles to the Stirling convolution polynomials.
Note that all the combinatorial proofs above use either binomials or multinomials of .
Therefore if is prime, then:
for .
Expansions for fixed k
Since the Stirling numbers are the coefficients of a polynomial with roots 0, 1, ..., , one has by Vieta's formulas that
In other words, the Stirling numbers of the first kind are given by elementary symmetric polynomials evaluated at 0, 1, ..., n − 1. In this form, the simple identities given above take the form
and so on.
One may produce alternative forms for the Stirling numbers of the first kind with a similar approach preceded by some algebraic manipulation: since
it follows from Newton's formulas that one can expand the Stirling numbers of the first kind in terms of generalized harmonic numbers. This yields identities like
where Hn is the harmonic number and Hn(m) is the generalized harmonic number
These relations can be generalized to give
where w(n, m) is defined recursively in terms of the generalized harmonic numbers by
(Here δ is the Kronecker delta function and is the Pochhammer symbol.)
For fixed these weighted harmonic number expansions are generated by the generating function
where the notation means extraction of the coefficient of from the following formal power series (see the non-exponential Bell polynomials and section 3 of ).
More generally, sums related to these weighted harmonic number expansions of the Stirling numbers of the first kind can be defined through generalized zeta series transforms of generating functions.
One can also "invert" the relations for these Stirling numbers given in terms of the -order harmonic numbers to write the integer-order generalized harmonic numbers in terms of weighted sums of terms involving the Stirling numbers of the first kind. For example, when the second-order and third-order harmonic numbers are given by
More generally, one can invert the Bell polynomial generating function for the Stirling numbers expanded in terms of the -order harmonic numbers to obtain that for integers
Finite sums
Since permutations are partitioned by number of cycles, one has
The identities
and
can be proved by the techniques at
Stirling numbers and exponential generating functions#Stirling numbers of the first kind and Binomial coefficient#Ordinary generating functions.
The table in section 6.1 of Concrete Mathematics provides a plethora of generalized forms of finite sums involving the Stirling numbers. Several particular finite sums relevant to this article include
Additionally, if we define the second-order Eulerian numbers by the triangular recurrence relation
we arrive at the following identity related to the form of the Stirling convolution polynomials which can be employed to generalize both Stirling number triangles to arbitrary real, or complex-valued, values of the input :
Particular expansions of the previous identity lead to the following identities expanding the Stirling numbers of the first kind for the first few small values of :
Software tools for working with finite sums involving Stirling numbers and Eulerian numbers are provided by the RISC Stirling.m package utilities in Mathematica. Other software packages for guessing formulas for sequences (and polynomial sequence sums) involving Stirling numbers and other special triangles are available for both Mathematica and Sage.
Congruences
The following congruence identity may be proved via a generating function-based approach:
More recent results providing Jacobi-type J-fractions that generate the single factorial function and generalized factorial-related products lead to other new congruence results for the Stirling numbers of the first kind.
For example, working modulo we can prove that
where [ · ] denotes the Iverson bracket,
and working modulo we can similarly prove that
More generally, for fixed integers if we define the ordered roots
then we may expand congruences for these Stirling numbers defined as the coefficients
in the following form where the functions, , denote fixed
polynomials of degree in for each , , and :
Section 6.2 of the reference cited above provides more explicit expansions related to these congruences for the -order harmonic numbers and for the generalized factorial products, .
Generating functions
A variety of identities may be derived by manipulating the generating function (see change of basis):
Using the equality
it follows that
and
This identity is valid for formal power series, and the sum converges in the complex plane for |z| < 1.
Other identities arise by exchanging the order of summation, taking derivatives, making substitutions for z or u, etc. For example, we may derive:
or
and
where and are the Riemann zeta function and the Hurwitz zeta function respectively, and even evaluate this integral
where is the gamma function. There also exist more complicated expressions for the zeta-functions involving the Stirling numbers. One, for example, has
This series generalizes Hasse's series for the Hurwitz zeta-function (we obtain Hasse's series by setting k=1).
Asymptotics
The next estimate given in terms of the Euler gamma constant applies:
For fixed we have the following estimate :
Explicit formula
No one-sum formula for the Stirling numbers of the first kind is known. A two-sum formula can be obtained using one of the symmetric formulae for Stirling numbers in conjunction with the explicit formula for Stirling numbers of the second kind.
As discussed earlier, by Vieta's formulas, one gets an expression for the Stirling numbers in terms of elementary symmetric polynomials. The Stirling number s(n, n − p) can be found from the formula
where The sum is a sum over all partitions of p.
Another exact nested sum expansion for these Stirling numbers is computed by elementary symmetric polynomials corresponding to the coefficients in of a product of the form . In particular, we see that
Newton's identities combined with the above expansions may be used to give an alternate proof of the weighted expansions involving the generalized harmonic numbers already noted above.
Relations to natural logarithm function
The nth derivative of the μth power of the natural logarithm involves the signed Stirling numbers of the first kind:
where is the falling factorial, and is the signed Stirling number.
It can be proved by using mathematical induction.
Other formulas
Stirling numbers of the first kind appear in the formula for Gregory coefficients and in a finite sum identity involving Bell numbers
Infinite series involving the finite sums with the Stirling numbers often lead to the special functions. For example
and
or even
where γm are the Stieltjes constants and δm,0 represents the Kronecker delta function.
Notice that this last identity immediately implies relations between the polylogarithm functions, the Stirling number exponential generating functions given above, and the Stirling-number-based power series for the generalized Nielsen polylogarithm functions.
Generalizations
There are many notions of generalized Stirling numbers that may be defined (depending on application) in a number of differing combinatorial contexts. In so much as the Stirling numbers of the first kind correspond to the coefficients of the distinct polynomial expansions of the single factorial function, , we may extend this notion to define triangular recurrence relations for more general classes of products.
In particular, for any fixed arithmetic function and symbolic parameters , related generalized factorial products of the form
may be studied from the point of view of the classes of generalized Stirling numbers of the first kind defined by the following coefficients of the powers of in the expansions of and then by the next corresponding triangular recurrence relation:
These coefficients satisfy a number of analogous properties to those for the Stirling numbers of the first kind as well as recurrence relations and functional equations related to the f-harmonic numbers, .
One special case of these bracketed coefficients corresponding to allows us to expand the multiple factorial, or multifactorial functions as polynomials in .
The Stirling numbers of both kinds, the binomial coefficients, and the first and second-order Eulerian numbers are all defined by special cases of a triangular super-recurrence of the form
for integers and where whenever or . In this sense, the form of the Stirling numbers of the first kind may also be generalized by this parameterized super-recurrence for fixed scalars (not all zero).
See also
Stirling numbers
Stirling numbers of the second kind
Stirling polynomials
Random permutation statistics
References
The Art of Computer Programming
Concrete Mathematics
.
Permutations
Factorial and binomial topics
Triangles of numbers
Operations on numbers
pl:Liczby Stirlinga#Liczby Stirlinga I rodzaju | Stirling numbers of the first kind | [
"Mathematics"
] | 2,398 | [
"Functions and mappings",
"Factorial and binomial topics",
"Permutations",
"Mathematical objects",
"Combinatorics",
"Arithmetic",
"Mathematical relations",
"Triangles of numbers",
"Operations on numbers"
] |
2,229,421 | https://en.wikipedia.org/wiki/Diisobutylaluminium%20hydride | Diisobutylaluminium hydride (DIBALH, DIBAL, DIBAL-H or DIBAH) is a reducing agent with the formula (i-Bu2AlH)2, where i-Bu represents isobutyl (-CH2CH(CH3)2). This organoaluminium compound is a reagent in organic synthesis.
Properties
Like most organoaluminium compounds, the compound's structure is most probably more complex than that suggested by its empirical formula. A variety of techniques, not including X-ray crystallography, suggest that the compound exists as a dimer and a trimer, consisting of tetrahedral aluminium centers sharing bridging hydride ligands. Hydrides are small and, for aluminium derivatives, are highly basic, thus they bridge in preference to the alkyl groups.
DIBAL can be prepared by heating triisobutylaluminium (itself a dimer) to induce β-hydride elimination:
(i-Bu3Al)2 → (i-Bu2AlH)2 + 2 (CH3)2C=CH2
Although DIBAL can be purchased commercially as a colorless liquid, it is more commonly purchased and dispensed as a solution in an organic solvent such as toluene or hexane.
Use in organic synthesis
DIBAL reacts slowly with electron-poor compounds and more quickly with electron-rich compounds. Thus, it is an electrophilic reducing agent whereas LiAlH4 can be thought of as a nucleophilic reducing agent.
DIBAL is useful in organic synthesis for a variety of reductions, including converting carboxylic acids, their derivatives, and nitriles to aldehydes. DIBAL efficiently reduces α,β-unsaturated esters to the corresponding allylic alcohol. By contrast, LiAlH4 reduces esters and acyl chlorides to primary alcohols, and nitriles to primary amines (using the Fieser work-up procedure). Similarly, DIBAL reduces lactones to hemiacetals (the equivalent of an aldehyde).
Although DIBAL reliably reduces nitriles to aldehydes, the reduction of esters to aldehydes is infamous for often producing large quantities of alcohols. Nevertheless, it is possible to avoid these unwanted byproducts through careful control of the reaction conditions using continuous flow chemistry.
DIBALH was investigated originally as a cocatalyst for the polymerization of alkenes.
Safety
DIBAL, like most alkylaluminium compounds, reacts violently with air and water, potentially leading to explosion.
References
External links
Isobutyl compounds
Metal hydrides
Organoaluminium compounds
Reducing agents | Diisobutylaluminium hydride | [
"Chemistry"
] | 572 | [
"Inorganic compounds",
"Metal hydrides",
"Redox",
"Reducing agents"
] |
2,229,523 | https://en.wikipedia.org/wiki/Fr%C3%B6licher%20space | In mathematics, Frölicher spaces extend the notions of calculus and smooth manifolds. They were introduced in 1982 by the mathematician Alfred Frölicher.
Definition
A Frölicher space consists of a non-empty set X together with a subset C of Hom(R, X) called the set of smooth curves, and a subset F of Hom(X, R) called the set of smooth real functions, such that for each real function
f : X → R
in F and each curve
c : R → X
in C, the following axioms are satisfied:
f in F if and only if for each γ in C, f ∘ γ is in C∞(R, R)
c in C if and only if for each φ in F, φ ∘ c is in C∞(R, R)
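A standard illustrative example (supplied here for concreteness; it is not part of the definition above) is the real line with its usual smooth structure, written in LaTeX notation as
X = \mathbb{R}, \qquad C = C^{\infty}(\mathbb{R}, \mathbb{R}), \qquad F = C^{\infty}(\mathbb{R}, \mathbb{R}).
Both axioms hold because composites of smooth real functions are smooth, and choosing γ to be the identity curve in the first axiom, or φ to be the identity function in the second, shows that no non-smooth function or curve can be admitted.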
Let A and B be two Frölicher spaces. A map
m : A → B
is called smooth if for each smooth curve c in CA, m ∘ c is in CB. Furthermore, the space of all such smooth maps has itself the structure of a Frölicher space. The smooth functions on
C∞(A, B)
are the images of
References
, section 23
Smooth functions
Structures on manifolds | Frölicher space | [
"Mathematics"
] | 238 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
2,229,678 | https://en.wikipedia.org/wiki/Phosphorus%20pentoxide | Phosphorus pentoxide is a chemical compound with molecular formula P4O10 (with its common name derived from its empirical formula, P2O5). This white crystalline solid is the anhydride of phosphoric acid. It is a powerful desiccant and dehydrating agent.
Structure
Phosphorus pentoxide crystallizes in at least four forms or polymorphs. The most familiar one, a metastable form (shown in the figure), comprises molecules of P4O10. Weak van der Waals forces hold these molecules together in a hexagonal lattice (However, in spite of the high symmetry of the molecules, the crystal packing is not a close packing). The structure of the P4O10 cage is reminiscent of adamantane with Td symmetry point group. It is closely related to the corresponding anhydride of phosphorous acid, P4O6. The latter lacks terminal oxo groups. Its density is 2.30 g/cm3. It boils at 423 °C under atmospheric pressure; if heated more rapidly it can sublimate. This form can be made by condensing the vapor of phosphorus pentoxide rapidly, and the result is an extremely hygroscopic solid.
The other polymorphs are polymeric, but in each case the phosphorus atoms are bound by a tetrahedron of oxygen atoms, one of which forms a terminal P=O bond involving the donation of the terminal oxygen p-orbital electrons to the antibonding phosphorus-oxygen single bonds. The macromolecular form can be made by heating the compound in a sealed tube for several hours, and maintaining the melt at a high temperature before cooling the melt to the solid. The metastable orthorhombic "O"-form (density 2.72 g/cm3, melting point 562 °C) adopts a layered structure consisting of interconnected P6O6 rings, not unlike the structure adopted by certain polysilicates. The stable form is a higher density phase, also orthorhombic, the so-called O' form. It consists of a 3-dimensional framework, density 3.5 g/cm3. The remaining polymorph is a glass or amorphous form; it can be made by fusing any of the others.
Preparation
P4O10 is prepared by burning white phosphorus with a sufficient supply of oxygen:
P4 + 5 O2 → P4O10
The dehydration of phosphoric acid to give phosphorus pentoxide is not possible, as on heating it forms various polyphosphates but will not dehydrate sufficiently to form P4O10.
Applications
Phosphorus pentoxide is a potent dehydrating agent as indicated by the exothermic nature of its hydrolysis producing phosphoric acid:
P4O10 + 6 H2O → 4 H3PO4 (–177 kJ)
However, its utility for drying is limited somewhat by its tendency to form a protective viscous coating that inhibits further dehydration by unspent material. A granular form of P4O10 is used in desiccators.
Consistent with its strong desiccating power, P4O10 is used in organic synthesis for dehydration. The most important application is for the conversion of primary amides into nitriles:
P4O10 + RC(O)NH2 → P4O9(OH)2 + RCN
The indicated coproduct P4O9(OH)2 is an idealized formula for undefined products resulting from the hydration of P4O10.
Alternatively, when combined with a carboxylic acid, the result is the corresponding anhydride:
P4O10 + RCO2H → P4O9(OH)2 + [RC(O)]2O
The "Onodera reagent", a solution of P4O10 in DMSO, is employed for the oxidation of alcohols. This reaction is reminiscent of the Swern oxidation.
The desiccating power of P4O10 is strong enough to convert many mineral acids to their anhydrides. Examples: HNO3 is converted to N2O5; H2SO4 is converted to SO3; HClO4 is converted to Cl2O7; CF3SO3H is converted to (CF3)2S2O5.
As a proxy measurement
P2O5 content is often used by industry as a proxy value for all the phosphorus oxides in a material. For example, fertilizer-grade phosphoric acid can also contain various related phosphorus compounds which are also of use. All these compounds are described collectively in terms of 'P2O5 content' to allow convenient comparison of the phosphorus content of different products. Despite this, phosphorus pentoxide is not actually present in most samples, as it is not stable in aqueous solutions.
Related phosphorus oxides
Between the commercially important P4O6 and P4O10, phosphorus oxides are known with intermediate structures.
On inspection, the double-bonded oxygen atoms of P4O8 at the 1,2- and 1,3-positions are equivalent, and both positions have the same steric hindrance; the rings 12341 and ABCDA are identical.
Hazards
Phosphorus pentoxide itself is not flammable. Just like sulfur trioxide, it reacts vigorously with water and water-containing substances like wood or cotton, liberates much heat and may even cause fire due to the highly exothermic nature of such reactions. It is corrosive to metal and is very irritating – it may cause severe burns to the eye, skin, mucous membrane, and respiratory tract even at concentrations as low as 1 mg/m3.
See also
Eaton's reagent
References
External links
Inorganic phosphorus compounds
Acid anhydrides
Acidic oxides
Glass compositions
Dehydrating agents
Adamantane-like molecules
Phosphorus oxides
Phosphorus(V) compounds
Deliquescent materials | Phosphorus pentoxide | [
"Chemistry"
] | 1,248 | [
"Glass chemistry",
"Inorganic compounds",
"Glass compositions",
"Deliquescent materials",
"Reagents for organic chemistry",
"Inorganic phosphorus compounds",
"Dehydrating agents"
] |
2,229,780 | https://en.wikipedia.org/wiki/Zig-zag%20bridge | A zig-zag bridge is a pedestrian bridge composed of short segments, each set at an angle relative to its neighbors and usually with an alternating right and left turn required when traveling across the bridge. It is used in standard crossings for structural stability; and in traditional and contemporary Asian and Western landscape design across water gardens.
When constructed of wood, each segment is formed from planks and is supported by posts. When constructed of stone, the bridge will use short or long rectilinear slabs set upon stone footings.
Garden and ceremonial bridge
A zig-zag bridge is often seen in the Chinese garden, Japanese garden, and Zen rock garden. It may be made of stone slabs or planks as part of a pond design and is frequently seen in rustic gardens. It is also used in high art modern fountain gardens, often in public urban park and botanic garden landscapes.
The objective in employing such a bridge, constructed according to Zen philosophy and teachings, is to focus the walker's attention on mindfulness of the present place and moment – "being here, now". As it often has no railings, it is quite possible for an inattentive walker to simply fall off an end into the water.
The zig-zag of paths and bridges also follows a principle of Chinese Feng Shui.
Standard bridge
The post and plank version has an advantage when employed as a crossing of a muddy bottom or marsh: It is structurally stable, where a straight bridge might tend to tip due to the posts moving in the soft mud. Each segment of walkway mutually supports the next from twisting and tipping by being securely fastened to it. This is the same advantage possessed by a zig-zag split rail fence.
See also
Footbridge
Moon bridge
S bridge
Step-stone bridge
References
External links
Footbridges
Stone bridges
Garden features
Stonemasonry
Chinese gardening styles
Japanese style of gardening
Bridges | Zig-zag bridge | [
"Engineering"
] | 386 | [
"Structural engineering",
"Stonemasonry",
"Construction",
"Bridges"
] |
2,229,783 | https://en.wikipedia.org/wiki/Stack%20search | Stack search (also known as Stack decoding algorithm) is a search algorithm similar to beam search. It can be used to explore tree-structured search spaces and is often employed in Natural language processing applications, such as parsing of natural languages, or for decoding of error correcting codes where the technique goes under the name of sequential decoding.
Stack search keeps a list of the best n candidates seen so far. These candidates are incomplete solutions to the search problem, e.g. partial parse trees. It then iteratively expands the best partial solution, putting all resulting partial solutions onto the stack and then trimming the resulting list of partial solutions to the top n candidates, until a real solution (i.e. a complete parse tree) has been found.
Stack search is not guaranteed to find the optimal solution to the search problem. The quality of the result depends on the quality of the search heuristic.
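The following is a minimal Python sketch of the idea described above (the expand, score and is_complete callbacks are placeholders invented for this example, not part of any particular decoder):

import heapq
import itertools

def stack_search(initial, expand, score, is_complete, n=10):
    """Keep the n best partial solutions; repeatedly expand the best one
    until a complete solution (e.g. a full parse tree) is found.
    Not guaranteed to return the optimal solution."""
    counter = itertools.count()           # tie-breaker so the heap never compares candidates directly
    heap = [(-score(initial), next(counter), initial)]
    while heap:
        _, _, best = heapq.heappop(heap)  # best-scoring partial solution seen so far
        if is_complete(best):
            return best
        for candidate in expand(best):    # push all expansions of the best candidate
            heapq.heappush(heap, (-score(candidate), next(counter), candidate))
        heap = heapq.nsmallest(n, heap)   # trim the list to the top n candidates
        heapq.heapify(heap)
    return None

The quality of the returned solution depends entirely on the score heuristic and on how large n is chosen.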
References
Example applications of the stack search algorithm can be found in the literature:
Frederick Jelinek. Fast sequential decoding algorithm using a stack. IBM Journal of Research and Development, pp. 675-685, 1969.
Ye-Yi Wang and Alex Waibel. Decoding algorithm in statistical machine translation. Proceedings of the 8th conference on European chapter of the Association for Computational Linguistics, pp. 366-372. Madrid, Spain, 1997.
Search algorithms | Stack search | [
"Technology"
] | 283 | [
"Computing stubs"
] |
2,229,822 | https://en.wikipedia.org/wiki/Moon%20bridge | A moon bridge (月桥), also known as “sori-bashi" (反り橋) in Japanese, or as a drum bridge (“taiko-bashi” 太鼓橋), is a highly arched pedestrian bridge. The moon bridge originated in China and was later introduced to Japan, where it became synonymous with Japanese landscape architecture. However, the general shape of this bridge can be seen throughout East Asian cultures.
Generally, these bridges are non-functional, serving as ornamentation. However, they were originally designed to allow pedestrians to cross canals while allowing the passage of barges beneath. To achieve this height in normal bridge construction, significant space from the river banks must be used for the approaches of the bridge. The climbing ascent and descent of the moon bridge has the advantage of conserving this space. These approaches can be very steep on moon bridges, sometimes requiring ladder-like rungs to be affixed to the bridge.
Moon bridges can be constructed from a variety of materials and construction techniques. Some wooden moon bridges employ a “woven-arch” style: cross beams are threaded between the longitudinal members, developing inherent stiffness and shape. Though rare, this technique is displayed on the 12th-century Chinese “Rainbow Bridge” and on the 1913 moon bridge in the Japanese garden of the Huntington Library in California.
In formal garden design, a moon bridge is placed so that it is reflected in still water. The high arch and its reflection form a circle, symbolizing the Moon. By forming a reflected full circle, the bridge also symbolizes purity: the Chinese words for “full” and “circle” together translate to “perfection”.
See also
Moon gate
Chinese garden
References
Footbridges
Bridges by structural type
Deck arch bridges
Garden features
Chinese gardening styles
Japanese style of gardening
Japanese architectural features
Architecture in China | Moon bridge | [
"Engineering"
] | 368 | [
"Architecture stubs",
"Architecture"
] |
2,229,944 | https://en.wikipedia.org/wiki/Pound%E2%80%93Rebka%20experiment | The Pound–Rebka experiment monitored frequency shifts in gamma rays as they rose and fell in the gravitational field of the Earth. The experiment tested Albert Einstein's 1907 and 1911 predictions, based on the equivalence principle, that photons would gain energy when descending a gravitational potential, and would lose energy when rising through a gravitational potential. It was proposed by Robert Pound and his graduate student Glen A. Rebka Jr. in 1959, and was the last of the classical tests of general relativity to be verified. The measurement of gravitational redshift and blueshift by this experiment validated the prediction of the equivalence principle that clocks should be measured as running at different rates in different places of a gravitational field. It is considered to be the experiment that ushered in an era of precision tests of general relativity.
Background
Equivalence principle argument predicting gravitational red- and blueshift
In the decade preceding Einstein's publication of the definitive version of his theory of general relativity, he anticipated several of the results of his final theory with heuristic arguments. One of these concerned light in a gravitational field. To show that the equivalence principle implies that light is Doppler-shifted in a gravitational field, Einstein considered a light source separated along the z-axis by a fixed distance above a receiver in a homogeneous gravitational field with a force per unit mass of g. A continuous beam of electromagnetic energy of a given frequency is emitted by the source towards the receiver. According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration in the direction of the positive z-axis, with the source separated by a constant distance from the receiver.
In the accelerated system, light emitted from the source takes (to a first approximation) a fixed time to arrive at the receiver. But in this time, the velocity of the receiver will have increased from its value at the moment the light was emitted. The frequency of the light arriving at the receiver will therefore not be the emitted frequency but a greater, Doppler-shifted frequency.
According to the equivalence principle, the same relation holds for the non-accelerated system in a gravitational field, where the velocity increase is replaced by the gravitational potential difference between the source and the receiver.
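Written out in modern notation (a standard reconstruction; the symbols \nu, g, h, c and \Delta\Phi are chosen here and are not quoted from the original text), the argument gives
\nu_{\mathrm{received}} \approx \nu_{\mathrm{emitted}}\left(1 + \frac{v}{c}\right) = \nu_{\mathrm{emitted}}\left(1 + \frac{gh}{c^{2}}\right), \qquad \frac{\Delta\nu}{\nu} = \frac{gh}{c^{2}} = \frac{\Delta\Phi}{c^{2}},
where h is the source–receiver separation and \Delta\Phi = gh is the gravitational potential difference between the two.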
Advent of general relativity
In 1916, Einstein used the framework of his newly completed general theory of relativity to update his earlier heuristic arguments predicting gravitational redshift to a more rigorous form. Gravitational redshift and two other predictions from his 1916 paper, the anomalous perihelion precession of Mercury's orbit and the gravitational deflection of light by the Sun, have become known as the "classical tests" of general relativity. The anomalous perihelion precession of Mercury had long been recognized as a problem in celestial mechanics since the 1859 calculations of Urbain Le Verrier. The observation of the deflection of light by the Sun in the 1919 Eddington expedition catapulted Einstein to worldwide fame. Gravitational redshift would prove to be by far the most difficult of the three classical tests to demonstrate.
There had been little rush by experimenters to test Einstein's earlier predictions of gravitational time dilation, since the predicted effect was almost immeasurably small. Einstein's predicted displacement for spectral lines of the Sun amounted to only two parts in a million, and would be easily masked by line broadening due to temperature and pressure, and by line asymmetry due to the fact the lines represent the superposition of absorption from many turbulent layers of the solar atmosphere. Several attempts to measure the effect were negative or inconclusive. The first generally accepted claim to have measured gravitational redshift was W.S. Adams's 1925 measurement of shifts in the spectral lines of the white dwarf star Sirius B. However, even Adams's measurements have since been brought into question for various reasons.
Mössbauer effect
In atomic spectroscopy, visible and ultraviolet photons resulting from electronic transitions of outer shell electrons, when emitted by gaseous atoms in an excited state, are readily absorbed by unexcited atoms of the same species. However, a corresponding absorbance of photons emitted by the nuclei of γ-emitters had never been observed because recoil of the nuclei resulted in so much loss of energy by the emitted photons that they no longer matched the absorbance spectra of the target nuclei. In 1958, Rudolf Mössbauer, who was analyzing the 129 keV transition of Iridium-191, discovered that by lowering the temperature of the emitter to 90K, he could achieve resonant absorbance. Indeed, the energy resolutions that he achieved were of unheard-of sharpness. He had discovered the phenomenon of recoilless γ-emission.
In 1959, several research groups, most notably Robert Pound and Glen Rebka in Harvard and a team led by John Paul Schiffer in Harwell (England), announced plans to exploit this recently discovered effect to perform terrestrial tests of gravitational redshift.
In February 1960, Schiffer and his team were the first to announce success in measuring the gravitational redshift, but with a rather high error of ±47%. It was to be Pound and Rebka's somewhat later contribution in April 1960, which used a stronger radiation source, longer path length, and several refinements to reduce systematic error, which was to be accepted as having provided a definitive measurement of the redshift.
Pound and Rebka's experiment
Sources of error
After evaluating various γ-emitters for their study, Pound and Rebka chose to use 57Fe because it does not require cryogenic cooling to exhibit recoil-free emission, has a relatively low internal conversion coefficient so that it is relatively free of competing X-ray emissions that would have been difficult to distinguish from the 14.4 keV transition, and its parent 57Co has a usable half-life of 272 days.
Pound and Rebka found that a large source of systematic error resulted from temperature variations, which they attributed primarily to a second order relativistic Doppler effect due to lattice vibrations. A mere 1°C difference in temperature between emitter and absorber caused a shift about equal to the predicted effect of gravitational time dilation.
They also found frequency offsets between the lines of different combinations of source and absorber stemming from the sensitivity of the nuclear transition to an atom's physical and chemical environment. They therefore needed to adopt methodology which would allow them to distinguish these offsets from their measurement of gravitational redshift. Extreme care was also needed in sample preparation, otherwise inhomogeneities would limit the sharpness of the lines.
Experimental setup
The experiment was carried out in a tower at Harvard University's Jefferson laboratory that was, for the most part, vibrationally isolated from the rest of the building. An iron disk containing radioactive 57Co diffused into its surface was placed in the center of a ferroelectric or a moving-coil magnetic transducer (speaker coil) which was placed near the roof of the building. A 38 cm diameter absorber consisting of thin square foils of iron enriched to a level of 32% 57Fe (as opposed to a 2% natural abundance), which were pasted side by side in a flat pattern on a Mylar sheet, was placed in the basement. The distance between the source and absorber was 22.5 meters (74 ft). The gamma rays traveled through a Mylar bag filled with helium to minimize scattering of the gamma rays. A scintillation counter was placed below the absorber to detect the gamma rays that passed through.
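As a quick consistency check (an illustrative calculation, not taken from the original paper), the fractional shift expected over this path can be estimated from the separation quoted above and standard values of g and c:

# Illustrative estimate of the fractional frequency shift over the tower path.
# h is the 22.5 m separation quoted above; g and c are standard values (assumed).
g = 9.81          # m/s^2
c = 2.998e8       # m/s
h = 22.5          # m

shift = g * h / c**2
print(f"expected fractional shift = {shift:.2e}")
# prints 2.46e-15, consistent with the shift of about 2.5e-15 discussed below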
The vibrating speaker coil imposed a continuously varying Doppler shift on the gamma ray source. Superimposed on the sinusoidal motions of the transducer was the slow (typically about 0.01 mm/s) constant motion of a slave hydraulic cylinder driven by a small diameter master cylinder controlled by a synchronous motor. The hydraulic cylinder motion was reversed multiple times during each data run after a constant integral number of transducer vibrations. Every several days, the position of the source and absorber would be reversed so that half the data runs would be of blueshift, and half would be of redshift.
Three thermocouples mounted on the source in a spiral pattern and three on the absorber were connected to Wheatstone bridges to measure the temperature differences between the source and absorber. The recorded temperature differences were used to correct the data before analysis.
Among the other steps used to compensate for possible systematic errors, Pound and Rebka varied the speaker frequency between 10 Hz and 50 Hz and tested different transducers (ferroelectric transducers versus moving coil magnetic speaker coils).
A Mössbauer monitor near the source (not illustrated) checked for possible distortions of the source signal resulting from the cylinder/transducer assembly being regularly inverted from facing downwards to facing upwards.
Modulation technique to detect small shifts
Although the 14.4 keV recoilless emission line of 57Fe had a half-width of 1.13×10−12, the anticipated gravitational frequency shift was only 2.5×10−15. Measurement of this minute amount of frequency shift, 500 times smaller than the half-width, required a sophisticated protocol for data acquisition and data analysis. The best way to measure a small shift is often by "slope detection", measuring the resonance not at its peak, but rather comparing the absorption curve near its points of maximum slope (inflection points) on either side of the peak.
The speaker coil typically operated at about 74 Hz with a maximum velocity amplitude corresponding to the maximum change of absorption with velocity of the resonance curve for a given combination of source and absorber (typically around 0.10 mm/s). Counts that were received in the quarter cycles of the oscillation period centered around the velocity maxima were accumulated in two separate registers. Likewise, counts received with the hydraulic cylinder in reverse motion were accumulated in another two separate registers, for a total of four registers of accumulated counts.
The combined motions of the vibrating transducer and hydraulic cylinder allowed the incoming photons to be collected in four channels representing source motions of +0.11 mm/s, +0.09 mm/s, −0.11 mm/s, and −0.09 mm/s. They collectively operated at a 50% duty cycle, so that out of, say, 80 million incoming photons, 10 million would fit into the time slots of each of the four recording channels. From these counts, the velocity corresponding to the absorbance maximum could be calculated.
The accuracy of determination of the line center depended on (1) the sharpness of the line, (2) the depth of the absorbance maximum, and (3) the total number of counts. They typically achieved a fractional absorbance maximum depth of about 0.3 and recorded about 1×1010 γ-rays, of which most will have been recoilless.
Results
Each data run yielded eleven numbers, i.e. four absorber register counts, four monitor register counts, and three average temperature differences. The register counts were generally recorded after twelve full back-and-forth cycles of the hydraulic piston, where each reversal of piston motion occurred after 22,000 periods of source vibration.
The source and absorber units were interchanged every several days to allow comparison between the results with the γ-rays rising versus the γ-rays falling. Combining data from runs having gravitational frequency shift of equal but opposite sign enabled the fixed frequency shift between a given source/target combination to be eliminated by subtraction.
In their 1960 paper, Pound and Rebka presented data from the first four days of counting. Six runs with the source at the bottom, after temperature correction gave a weighted average fractional frequency shift between source and absorber of −(19.7±0.8)×10−15. Eight runs with the source at the top, after temperature correction gave a weighted average fractional frequency shift of −(15.5±0.8)×10−15.
The frequency shifts, up and down, were both negative because the magnitude of the inherent frequency difference of the source/absorber combination considerably exceeded the magnitude of the expected gravitational redshifts/blueshifts. Taking half the sum of the weighted averages yielded the inherent frequency difference of the source/absorber combination, −(17.6±0.6)×10−15. Taking half the difference of the weighted averages yielded the net fractional frequency shift due to gravitational time dilation, −(2.1±0.5)×10−15.
Over the full ten days of data collection, they calculated a net fractional frequency shift due to gravitational time dilation of −(2.56±0.25)×10−15, which corresponds to the predicted value with an error margin of 10%.
In the next several years, the Pound lab published successive refinements of the gravitational redshift measurement, finally reaching the 1% level in 1964.
Current status of gravitational redshift
In the years subsequent to the series of measurements performed by the Pound lab, various tests using other technologies established the validity of gravitational redshift/time dilation with increasing precision. A notable example was the 1976 Gravity Probe A experiment, which used a space-borne hydrogen maser to increase the accuracy of the measurement to about 0.01%.
From an engineering standpoint, after the launch of the Global Positioning System (which depends on general relativity for its proper functioning) and its integration into everyday life, gravitational redshift/time dilation is no longer considered a theoretical phenomenon requiring testing, but rather is considered a practical engineering concern in various fields requiring precision measurement, along with special relativity.
From a theoretical standpoint, however, the status of gravitational redshift/time dilation is quite different. It is widely recognized that general relativity, despite accounting for all data gathered to date, cannot represent a final theory of nature.
The equivalence principle (EP) lies at the heart of the general theory of relativity. Most proposed alternatives to general relativity predict violation of the EP at some level. The EP includes three hypotheses:
Universality of free fall (UFF). This asserts that the acceleration of freely falling bodies in a gravitational field is independent of their compositions.
Local Lorentz invariance (LLI). This asserts that the outcome of a local experiment is independent of the velocity and orientation of the apparatus.
Local position invariance (LPI). This asserts that clock rates are independent of their spacetime positions. Measurements of differences in the elapsed time displayed by two clocks will depend on their relative positioning in a gravitational field. But the clocks themselves are unaffected by gravitational potential.
Gravitational redshift measurements provide a direct measure of LPI. Of the three hypotheses underlying the equivalence principle, LPI has been by far the least accurately determined. There has been considerable incentive, therefore, to improve on gravitational redshift measurements both in the laboratory and using astronomical observations. For example, the much anticipated, and much delayed European Space Agency's Atomic Clock Ensemble in Space (ACES) mission is expected to improve on previous measurements by a factor of 35.
Notes
Primary sources
References
External links
Tests of general relativity
Physics experiments
Iron
Cobalt | Pound–Rebka experiment | [
"Physics"
] | 3,097 | [
"Experimental physics",
"Physics experiments"
] |
2,230,309 | https://en.wikipedia.org/wiki/List%20of%20computer%20algebra%20systems | The following tables provide a comparison of computer algebra systems (CAS). A CAS is a package comprising a set of algorithms for performing symbolic manipulations on algebraic objects, a language to implement them, and an environment in which to use the language. A CAS may include a user interface and graphics capability; and to be effective may require a large library of algorithms, efficient data structures and a fast kernel.
General
These computer algebra systems are sometimes combined with "front end" programs that provide a better user interface, such as the general-purpose GNU TeXmacs.
Functionality
Below is a summary of significantly developed symbolic functionality in each of the systems.
Those which do not "edit equations" may have a GUI, plotting, ASCII graphic formulae and math font printing. The ability to generate plaintext files is also a sought-after feature because it allows a work to be understood by people who do not have a computer algebra system installed.
Operating system support
The software can run under their respective operating systems natively without emulation. Some systems must be compiled first using an appropriate compiler for the source language and target platform. For some platforms, only older releases of the software may be available.
Graphing calculators
Some graphing calculators have CAS features.
See also
:Category:Computer algebra systems
Comparison of numerical-analysis software
Comparison of statistical packages
List of information graphics software
List of numerical-analysis software
List of numerical libraries
List of statistical software
Mathematical software
Web-based simulation
References
External links
Comparisons of mathematical software
Mathematics-related lists | List of computer algebra systems | [
"Mathematics"
] | 323 | [
"Computer algebra systems",
"Comparisons of mathematical software",
"Mathematical software"
] |
2,230,361 | https://en.wikipedia.org/wiki/Lambda%20lifting | Lambda lifting is a meta-process that restructures a computer program so that functions are defined independently of each other in a global scope. An individual "lift" transforms a local function into a global function. It is a two step process, consisting of;
Eliminating free variables in the function by adding parameters.
Moving functions from a restricted scope to broader or global scope.
The term "lambda lifting" was first introduced by Thomas Johnsson around 1982 and was historically considered as a mechanism for implementing functional programming languages. It is used in conjunction with other techniques in some modern compilers.
Lambda lifting is not the same as closure conversion. It requires all call sites to be adjusted (adding extra arguments to calls) and does not introduce a closure for the lifted lambda expression. In contrast, closure conversion does not require call sites to be adjusted but does introduce a closure for the lambda expression mapping free variables to values.
The technique may be used on individual functions, in code refactoring, to make a function usable outside the scope in which it was written. Lambda lifts may also be repeated, in order to transform the program. Repeated lifts may be used to convert a program written in lambda calculus into a set of recursive functions, without lambdas. This demonstrates the equivalence of programs written in lambda calculus and programs written as functions. However it does not demonstrate the soundness of lambda calculus for deduction, as the eta reduction used in lambda lifting is the step that introduces cardinality problems into the lambda calculus, because it removes the value from the variable, without first checking that there is only one value that satisfies the conditions on the variable (see Curry's paradox).
Lambda lifting is expensive in processing time for the compiler. An efficient implementation of lambda lifting is O(n²) in processing time for the compiler.
In the untyped lambda calculus, where the basic types are functions, lifting may change the result of beta reduction of a lambda expression. The resulting functions will have the same meaning, in a mathematical sense, but are not regarded as the same function in the untyped lambda calculus. See also intensional versus extensional equality.
The reverse operation to lambda lifting is lambda dropping.
Lambda dropping may make the compilation of programs quicker for the compiler, and may also increase the efficiency of the resulting program, by reducing the number of parameters, and reducing the size of stack frames.
However it makes a function harder to re-use. A dropped function is tied to its context, and can only be used in a different context if it is first lifted.
Algorithm
The following algorithm is one way to lambda-lift an arbitrary program in a language which doesn't support closures as first-class objects:
Rename the functions so that each function has a unique name.
Replace each free variable with an additional argument to the enclosing function, and pass that argument to every use of the function.
Replace every local function definition that has no free variables with an identical global function.
Repeat steps 2 and 3 until all free variables and local functions are eliminated.
If the language has closures as first-class objects that can be passed as arguments or returned from other functions, the closure will need to be represented by a data structure that captures the bindings of the free variables.
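As a concrete sketch of the steps above (the term encoding, the generated names such as $f0, and the helpers free_vars and lift are all invented for this illustration; it implements the anonymous-lift strategy, not the code of any particular compiler), a minimal lifter can be written in Python:

import itertools

# Terms: ('var', name) | ('lam', param, body) | ('app', fun, arg)
fresh_names = (f"$f{i}" for i in itertools.count())

def free_vars(term):
    kind = term[0]
    if kind == 'var':
        return {term[1]}
    if kind == 'lam':
        return free_vars(term[2]) - {term[1]}
    return free_vars(term[1]) | free_vars(term[2])   # 'app'

def lift(term, defs):
    # Returns a term with no 'lam' nodes; defs maps new global names
    # to (parameter list, body) pairs.
    kind = term[0]
    if kind == 'var':
        return term
    if kind == 'app':
        return ('app', lift(term[1], defs), lift(term[2], defs))
    # 'lam': lift the body first, then replace this abstraction by a call
    # to a new global function applied to its free variables.
    body = lift(term[2], defs)
    fv = sorted(free_vars(body) - {term[1]} - defs.keys())
    name = next(fresh_names)
    defs[name] = (fv + [term[1]], body)
    call = ('var', name)
    for v in fv:                          # pass each free variable as an extra argument
        call = ('app', call, ('var', v))
    return call

# Example: lift  λx. λy. x y  (the inner abstraction has x free)
defs = {}
main = lift(('lam', 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y')))), defs)
print(main)                               # a call to a closed top-level function
for name, (params, body) in defs.items():
    print(name, params, body)

Running the example produces two global definitions, one per abstraction, with the free variable x of the inner abstraction turned into an extra parameter, mirroring the hand-lifted OCaml and JavaScript examples below.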
Example
The following OCaml program computes the sum of the integers from 1 to 100:
let rec sum n =
if n = 1 then
1
else
let f x =
n + x in
f (sum (n - 1)) in
sum 100
(The let rec declares sum as a function that may call itself.) The function f, which adds sum's argument to the sum of the numbers less than the argument, is a local function. Within the definition of f, n is a free variable. Start by converting the free variable to a parameter:
let rec sum n =
if n = 1 then
1
else
let f w x =
w + x in
f n (sum (n - 1)) in
sum 100
Next, lift f into a global function:
let rec f w x =
w + x
and sum n =
if n = 1 then
1
else
f n (sum (n - 1)) in
sum 100
The following is the same example, this time written in JavaScript:
// Initial version
function sum(n) {
function f(x) {
return n + x;
}
if (n == 1)
return 1;
else
return f(sum(n - 1));
}
// After converting the free variable n to a formal parameter w
function sum(n) {
function f(w, x) {
return w + x;
}
if (n == 1)
return 1;
else
return f(n, sum(n - 1));
}
// After lifting function f into the global scope
function f(w, x) {
return w + x;
}
function sum(n) {
if (n == 1)
return 1;
else
return f(n, sum(n - 1));
}
Lambda lifting versus closures
Lambda lifting and closure conversion are both methods for implementing block-structured programs. Lambda lifting implements block structure by eliminating it: all functions are lifted to the global level. Closure conversion instead provides a "closure" which links the current frame to other frames. Closure conversion takes less compile time.
Recursive functions, and block structured programs, with or without lifting, may be implemented using a stack based implementation, which is simple and efficient. However a stack frame based implementation must be strict (eager). The stack frame based implementation requires that the life of functions be last-in, first-out (LIFO). That is, the most recent function to start its calculation must be the first to end.
Some functional languages (such as Haskell) are implemented using lazy evaluation, which delays calculation until the value is needed. The lazy implementation strategy gives flexibility to the programmer. Lazy evaluation requires delaying the call to a function until a request is made to the value calculated by the function. One implementation is to record a reference to a "frame" of data describing the calculation, in place of the value. Later when the value is required, the frame is used to calculate the value, just in time for when it is needed. The calculated value then replaces the reference.
The "frame" is similar to a stack frame, the difference being that it is not stored on the stack. Lazy evaluation requires that all the data required for the calculation be saved in the frame. If the function is "lifted", then the frame needs only record the function pointer, and the parameters to the function. Some modern languages use garbage collection in place of stack based allocation to manage the life of variables. In a managed, garbage collected environment, a closure records references to the frames from which values may be obtained. In contrast a lifted function has parameters for each value needed in the calculation.
Let expressions and lambda calculus
The Let expression is useful in describing lifting and dropping, and in describing the relationship between recursive equations and lambda expressions. Most functional languages have let expressions. Also, block structured programming languages like ALGOL and Pascal are similar in that they too allow the local definition of a function for use in a restricted scope.
The let expression used here is a fully mutually recursive version of let rec, as implemented in many functional languages.
Let expressions are related to Lambda calculus. Lambda calculus has a simple syntax and semantics, and is good for describing Lambda lifting. It is convenient to describe lambda lifting as a translation from a lambda expression to a let expression, and lambda dropping as the reverse. This is because let expressions allow mutual recursion, which is, in a sense, more lifted than is supported in lambda calculus. Lambda calculus does not support mutual recursion and only one function may be defined at the outermost global scope.
Conversion rules which describe translation without lifting are given in the Let expression article.
The following rules describe the equivalence of lambda and let expressions,
Meta-functions will be given that describe lambda lifting and dropping. A meta-function is a function that takes a program as a parameter. The program is data for the meta-program. The program and the meta program are at different meta-levels.
The following conventions will be used to distinguish program from the meta program,
Square brackets [] will be used to represent function application in the meta program.
Capital letters will be used for variables in the meta program. Lower case letters represent variables in the program.
will be used for equals in the meta program.
represents a dummy variable, or an unknown value.
For simplicity the first rule given that matches will be applied. The rules also assume that the lambda expressions have been pre-processed so that each lambda abstraction has a unique name.
The substitution operator is used extensively. The expression means substitute every occurrence of G in L by S and return the expression. The definition used is extended to cover the substitution of expressions, from the definition given on the Lambda calculus page. The matching of expressions should compare expressions for alpha equivalence (renaming of variables).
Lambda lifting in lambda calculus
Each lambda lift takes a lambda abstraction which is a sub expression of a lambda expression and replaces it by a function call (application) to a function that it creates. The free variables in the sub expression are the parameters to the function call.
Lambda lifts may be used on individual functions, in code refactoring, to make a function usable outside the scope in which it was written. Such lifts may also be repeated, until the expression has no lambda abstractions, in order to transform the program.
Lambda lift
A lift is given a sub-expression within an expression to lift to the top of that expression. The expression may be part of a larger program. This allows control of where the sub-expression is lifted to. The lambda lift operation used to perform a lift within a program is,
The sub expression may be either a lambda abstraction, or a lambda abstraction applied to a parameter.
Two types of lift are possible.
Anonymous lift
Named lift
An anonymous lift has a lift expression which is a lambda abstraction only. It is regarded as defining an anonymous function. A name must be created for the function.
A named lift expression has a lambda abstraction applied to an expression. This lift is regarded as a named definition of a function.
Anonymous lift
An anonymous lift takes a lambda abstraction (called S). For S;
Create a name for the function that will replace S (called V). Make sure that the name identified by V has not been used.
Add parameters to V, for all the free variables in S, to create an expression G (see make-call).
The lambda lift is the substitution of the lambda abstraction S for a function application, along with the addition of a definition for the function.
The new lambda expression has S substituted for G. Note that L[S:=G] means substitution of S for G in L. The function definitions has the function definition G = S added.
In the above rule G is the function application that is substituted for the expression S. It is defined by,
where V is the function name. It must be a new variable, i.e. a name not already used in the lambda expression,
where is a meta function that returns the set of variables used in E.
Constructing the call
The function call G is constructed by adding parameters for each variable in the free variable set (represented by V), to the function H,
Named lift
The named lift is similar to the anonymous lift except that the function name V is provided.
As for the anonymous lift, the expression G is constructed from V by applying the free variables of S. It is defined by,
Lambda-lift transformation
A lambda lift transformation takes a lambda expression and lifts all lambda abstractions to the top of the expression. The abstractions are then translated into recursive functions, which eliminates the lambda abstractions. The result is a functional program in the form,
where M is a series of function definitions, and N is the expression representing the value returned.
For example,
The de-let meta function may then be used to convert the result back into lambda calculus.
The processing of transforming the lambda expression is a series of lifts. Each lift has,
A sub expression chosen for it by the function lift-choice. The sub expression should be chosen so that it may be converted into an equation with no lambdas.
The lift is performed by a call to the lambda-lift meta function, described in the next section,
After the lifts are applied the lets are combined together into a single let.
Then Parameter dropping is applied to remove parameters that are not necessary in the "let" expression. The let expression allows the function definitions to refer to each other directly, whereas lambda abstractions are strictly hierarchical, and a function may not directly refer to itself.
Choosing the expression for lifting
There are two different ways that an expression may be selected for lifting. The first treats all lambda abstractions as defining anonymous functions. The second, treats lambda abstractions which are applied to a parameter as defining a function. Lambda abstractions applied to a parameter have a dual interpretation as either a let expression defining a function, or as defining an anonymous function. Both interpretations are valid.
These two predicates are needed for both definitions.
lambda-free - An expression containing no lambda abstractions.
lambda-anon - An anonymous function. An expression like where X is lambda free.
Choosing anonymous functions only for lifting
Search for the deepest anonymous abstraction, so that when the lift is applied the function lifted will become a simple equation. This definition does not recognize a lambda abstraction with a parameter as defining a function. All lambda abstractions are regarded as defining anonymous functions.
lift-choice - The first anonymous abstraction found in traversing the expression, or none if there is no function.
For example,
Choosing named and anonymous functions for lifting
Search for the deepest named or anonymous function definition, so that when the lift is applied the function lifted will become a simple equation. This definition recognizes a lambda abstraction with an actual parameter as defining a function. Only lambda abstractions without an application are treated as anonymous functions.
lambda-named A named function. An expression like where M is lambda free and N is lambda free or an anonymous function.
lift-choice The first anonymous or named function found in traversing the expression or none if there is no function.
For example,
Examples
For example, the Y combinator,
is lifted as,
and after Parameter dropping,
As a lambda expression (see Conversion from let to lambda expressions),
If lifting anonymous functions only, the Y combinator is,
and after Parameter dropping,
As a lambda expression,
The first sub expression to be chosen for lifting is . This transforms the lambda expression into and creates the equation .
The second sub expression to be chosen for lifting is . This transforms the lambda expression into and creates the equation .
And the result is,
Surprisingly this result is simpler than the one obtained from lifting named functions.
Execution
Apply function to ,
So,
or
The Y-Combinator calls its parameter (function) repeatedly on itself. The value is defined if the function has a fixed point. But the function will never terminate.
Lambda dropping in lambda calculus
Lambda dropping is making the scope of functions smaller and using the context from the reduced scope to reduce the number of parameters to functions. Reducing the number of parameters makes functions easier to comprehend.
In the Lambda lifting section, a meta function for first lifting and then converting the resulting lambda expression into recursive equation was described. The Lambda Drop meta function performs the reverse by first converting recursive equations to lambda abstractions, and then dropping the resulting lambda expression, into the smallest scope which covers all references to the lambda abstraction.
Lambda dropping is performed in two steps,
Sinking
Parameter dropping
Lambda drop
A Lambda drop is applied to an expression which is part of a program. Dropping is controlled by a set of expressions from which the drop will be excluded.
where,
L is the lambda abstraction to be dropped.
P is the program
X is a set of expressions to be excluded from dropping.
Lambda drop transformation
The lambda drop transformation sinks all abstractions in an expression. Sinking is excluded from expressions in a set of expressions,
where,
L is the expression to be transformed.
X is a set of sub expressions to be excluded from the dropping.
sink-tran sinks each abstraction, starting from the innermost,
Abstraction sinking
Sinking is moving a lambda abstraction inwards as far as possible such that it is still outside all references to the variable.
Application - 4 cases.
Abstraction. Use renaming to insure that the variable names are all distinct.
Variable - 2 cases.
Sink test excludes expressions from dropping,
Example
Parameter dropping
Parameter dropping is optimizing a function for its position in the function. Lambda lifting added parameters that were necessary so that a function can be moved out of its context. In dropping, this process is reversed, and extra parameters that contain variables that are free may be removed.
Dropping a parameter is removing an unnecessary parameter from a function, where the actual parameter being passed in is always the same expression. The free variables of the expression must also be free where the function is defined. In this case the parameter that is dropped is replaced by the expression in the body of the function definition. This makes the parameter unnecessary.
For example, consider,
In this example the actual parameter for the formal parameter o is always p. As p is a free variable in the whole expression, the parameter may be dropped. The actual parameter for the formal parameter y is always n. However n is bound in a lambda abstraction. So this parameter may not be dropped.
The result of dropping the parameter is,
For the main example,
The definition of drop-params-tran is,
where,
Build parameter lists
For each abstraction that defines a function, build the information required to make decisions on dropping names. This information describes each parameter; the parameter name, the expression for the actual value, and an indication that all the expressions have the same value.
For example, in,
the parameters to the function g are,
Each abstraction is renamed with a unique name, and the parameter list is associated with the name of the abstraction. For example, for g there is a parameter list.
build-param-lists builds all the lists for an expression, by traversing the expression. It has four parameters;
The lambda expression being analyzed.
The table parameter lists for names.
The table of values for parameters.
The returned parameter list, which is used internally by the algorithm.
Abstraction - A lambda expression of the form is analyzed to extract the names of parameters for the function.
Locate the name and start building the parameter list for the name, filling in the formal parameter names. Also receive any actual parameter list from the body of the expression, and return it as the actual parameter list from this expression
Variable - A call to a function.
For a function name or parameter start populating actual parameter list by outputting the parameter list for this name.
Application - An application (function call) is processed to extract actual parameter details.
Retrieve the parameter lists for the expression, and the parameter. Retrieve a parameter record from the parameter list from the expression, and check that the current parameter value matches this parameter. Record the value for the parameter name for use later in checking.
The above logic is quite subtle in the way that it works. The same value indicator is never set to true. It is only set to false if all the values cannot be matched. The value is retrieved by using S to build a set of the Boolean values allowed for S. If true is a member then all the values for this parameter are equal, and the parameter may be dropped.
Similarly, def uses set theory to query if a variable has been given a value;
Let - Let expression.
And - For use in "let".
Examples
For example, building the parameter lists for,
gives,
and the parameter o is dropped to give,
Another example is,
Here x is equal to f. The parameter list mapping is,
and the parameter x is dropped to give,
Drop parameters
Use the information obtained by Build parameter lists to drop actual parameters that are no longer required. drop-params has the parameters,
The lambda expression in which the parameters are to be dropped.
The mapping of variable names to parameter lists (built in Build parameter lists).
The set of variables free in the lambda expression.
The returned parameter list. A parameter used internally in the algorithm.
Abstraction
where,
where,
Variable
For a function name or parameter start populating actual parameter list by outputting the parameter list for this name.
Application - An application (function call) is processed to extract actual parameter details.
Let - Let expression.
And - For use in "let".
Drop formal parameters
drop-formal removes formal parameters, based on the contents of the drop lists. Its parameters are,
The drop list,
The function definition (lambda abstraction).
The free variables from the function definition.
drop-formal is defined as,
Which can be explained as,
If all the actual parameters have the same value, and all the free variables of that value are available for definition of the function then drop the parameter, and replace the old parameter with its value.
else do not drop the parameter.
else return the body of the function.
Example
Starting with the function definition of the Y-combinator,
Which gives back the Y combinator,
See also
Let expression
Fixed-point combinator
Lambda calculus
Deductive lambda calculus
Supercombinator
Curry's paradox
References
External links
Explanation on Stack Overflow, with a JavaScript example
Implementation of functional programming languages
Lambda calculus
Compiler construction | Lambda lifting | [
"Technology"
] | 4,373 | [
"Programming language comparisons",
"Computing comparisons"
] |
2,230,663 | https://en.wikipedia.org/wiki/Skate%20sailing | Skate sailing is a method of moving over ice standing on ice skates utilizing the force of the wind. A small sail is held in ones hands or leaned against with the whole body. Using a metal blade under foot and the height of the ice skates is of much importance in being able to steer as much it is acquiring the technique to gain an edge.
Skate sailing is a windsport with a long tradition, probably as old as ice skates themselves. It is distinct from land-based Wind skating, which is inspired by Windsurfing and commonly makes use of a skateboard or inline skates.
Skate sailing is using a sail and the wind to propel oneself across any relatively flat, hard surface. Skate sailing can be done in summer on roller blades, roller skis, cross skates, etc. Winter sailing can be enjoyed on downhill skis, snow blades, ice skates, or any other sliding footwear.
References
External links
Ontario Ministry Of Government Services, Archives
Skate Sailing with sail held to leeward
How to Skate Sail with sail carried on windward shoulder
Stand Inside Wing Skate Sails
Skate Sailing Links
The first universal folding skate sail
Sailing
Ice in transportation | Skate sailing | [
"Physics"
] | 239 | [
"Physical systems",
"Transport",
"Ice in transportation"
] |
2,230,778 | https://en.wikipedia.org/wiki/Supersonic%20fracture | Supersonic fractures are fractures where the fracture propagation velocity is higher than the speed of sound in the material. This phenomenon was first discovered by scientists from the Max Planck Institute for Metals Research in Stuttgart (Markus J. Buehler and Huajian Gao) and IBM Almaden Research Center in San Jose, California (Farid F. Abraham).
The issues of intersonic and supersonic fracture have become a frontier of dynamic fracture mechanics. The work of Burridge initiated the exploration of intersonic crack growth (when the crack tip velocity V is between the shear wave speed cs and the longitudinal wave speed cl).
Supersonic fracture was a phenomenon totally unexplained by the classical theories of fracture. Molecular dynamics simulations by the group around Abraham and Gao have shown the existence of intersonic mode I and supersonic mode II cracks. This motivated a continuum mechanics analysis of supersonic mode III cracks by Yang. Recent progress in the theoretical understanding of hyperelasticity in dynamic fracture has shown that supersonic crack propagation can only be understood by introducing a new length scale, called χ, which governs the process of energy transport near a crack tip. The crack dynamics is completely dominated by material properties inside a zone surrounding the crack tip with characteristic size equal to χ. When the material inside this characteristic zone is stiffened due to hyperelastic properties, cracks propagate faster than the longitudinal wave speed. The research group of Gao has used this concept to simulate the Broberg problem of crack propagation inside a stiff strip embedded in a soft elastic matrix. These simulations confirmed the existence of an energy characteristic length. This study also had implications for dynamic crack propagation in composite materials. If the characteristic size of the composite microstructure is larger than the energy characteristic length χ, models that homogenize the materials into an effective continuum would be in significant error. The challenge arises of designing experiments and interpretative simulations to verify the energy characteristic length. Confirmation of the concept must be sought in the comparison of experiments on supersonic cracks and the predictions of the simulations and analysis. While much excitement rightly centres on the relatively new activity related to intersonic cracking, an old but interesting possibility remains to be incorporated in the modern work: for an interface between elastically dissimilar materials, crack propagation that is subsonic but exceeds the Rayleigh wave speed has been predicted for at least some combinations of the elastic properties of the two materials.
See also
Characteristic energy length scale
References
Fracture mechanics | Supersonic fracture | [
"Physics",
"Materials_science",
"Engineering"
] | 500 | [
"Structural engineering",
"Fracture mechanics",
"Classical mechanics stubs",
"Classical mechanics",
"Materials science",
"Materials degradation"
] |
2,230,854 | https://en.wikipedia.org/wiki/Characteristic%20energy%20length%20scale | The characteristic energy length scale describes the size of the region from which energy flows to a rapidly moving crack. If material properties change within the characteristic energy length scale, local wave speeds can dominate crack dynamics. This can lead to supersonic fracture.
References
Materials science | Characteristic energy length scale | [
"Physics",
"Materials_science",
"Engineering"
] | 53 | [
"Materials science stubs",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
2,230,954 | https://en.wikipedia.org/wiki/Shark%20repellent | A shark repellent is any method of driving sharks away from an area. Shark repellents are a category of animal repellents. Shark repellent technologies include magnetic shark repellent, electropositive shark repellents, electrical repellents, and semiochemicals. Shark repellents can be used to protect people from sharks by driving the sharks away from areas where they are likely to harm human beings. In other applications, they can be used to keep sharks away from areas they may be a danger to themselves due to human activity. In this case, the shark repellent serves as a shark conservation method. There are some naturally occurring shark repellents; modern artificial shark repellents date to at least the 1940s, with the United States Navy using them in the Pacific Ocean theater of World War II.
Natural repellents
It has traditionally been believed that sharks are repelled by the smell of a dead shark; however, modern research has had mixed results.
The Pardachirus marmoratus fish (finless sole, Red Sea Moses sole) repels sharks through its secretions. The best-understood factor is pardaxin, acting as an irritant to the sharks' gills, but other chemicals have been identified as contributing to the repellent effect.
In 2017, the US Navy announced that it was developing a synthetic analog of hagfish slime with potential application as a shark repellent.
History
Some of the earliest research on shark repellents took place during the Second World War, when military services sought to minimize the risk to stranded aviators and sailors in the water. Research has continued to the present, with notable researchers including the Americans Eugenie Clark and, later, Samuel H. Gruber, who has conducted tests at the Bimini Sharklab on the Caribbean island of Bimini, and the Japanese scientist Kazuo Tachibana. The future celebrity chef Julia Child developed shark repellent while working for the Office of Strategic Services.
Initial work, which was based on historical research and studies at the time, focused on using the odor of another dead shark. Efforts were made to isolate the active components in dead shark bodies that repelled other sharks. Eventually, it was determined that certain copper compounds like copper acetate, in combination with other ingredients, could mimic a dead shark and drive live sharks away from human beings in the water. Building on this work, Stewart Springer and others patented a "shark repellent" consisting of a combination of copper acetate and a dark-colored dye to obscure the user.
This shark repellent, known as "Shark Chaser", was long supplied to sailors and aviators of the United States Navy, initially packaged in cake form using a water-soluble wax binder and rigged to life vests. The Navy employed Shark Chaser extensively between 1943 and 1973. It is believed that the composition does repel sharks in some situations, but not in all, with about a 70% effectiveness rating.
On the other hand, Albert Tester questioned the idea that dead shark bodies or chemicals based on them could work as shark repellent. In 1959, he prepared and tested extracts of decaying shark flesh on tiger sharks in Hawaii and blacktip sharks at Enewetak Atoll. Tester found that not only did the dead shark extracts fail to repel any sharks, but several sharks had a "weak or strong attraction" to them. Tester reported a similar failure to repel sharks by a 1959 test at Enewetak of "an alleged shark repellent, supplied by a fisherman, which contained extract of decayed shark flesh as the principal component." Research has continued into the 2000s on using extracts from dead sharks or synthesizing such chemicals.
Research
Since the 1970s, the way the Moses sole repels sharks has been studied, by both Clark and Gruber. It has not found practical use, however, as the chemicals are perishable and the repellent has to be injected into the shark's mouth to be effective; in nature the substance is secreted on the skin and is thus ingested by sharks when they bite the sole.
Since the 1980s, there has been evidence that surfactants such as sodium lauryl sulfate can act as shark repellents at concentrations on the order of 100 parts per million. However, this does not meet the desired "cloud" deterrence level of 0.1 parts per million.
There have been validated field tests and studies to confirm the effectiveness of semiochemicals as a shark repellent. From 2005-2010, an extensive study on the effectiveness of semiochemicals as a shark repellent was conducted by scientists from SharkDefense Technologies and Seton Hall University. The study's results were published in the scientific journal Ocean & Coastal Management in 2013. The study concluded that the existence of a putative chemical shark repellent has been confirmed.
As of 2014, SharkDefense partnered with SharkTec LLC to manufacture the semiochemical in a canister as a shark repellent for consumers called Anti-Shark 100.
Recently, SharkDefense used the same semiochemicals found in SharkTec's product to reduce shark by-catch by 71% in a government grant initiative. The government agency NOAA released these findings in a report to Congress.
In 2018 independent tests were carried out on five Shark Repellent technologies using Great white sharks. Only Shark Shield’s Ocean Guardian Freedom+ Surf showed measurable results, with encounters reduced from 96% to 40%. Rpela (electrical repellent technology), SharkBanz bracelet & SharkBanz surf leash (magnetic shark repellent technology) and Chillax Wax (essential oils) showed no measurable effect on reducing shark attacks.
In popular culture
The 1947 Robb White book Secret Sea mentions a copper acetate shark repellent developed by the U.S. Navy.
In Batman: The Movie (1966), there is a scene in which an exploding shark jumps from the water and grabs Batman's leg while he is hanging from the Batcopter's ladder, piloted by Robin. Batman tries to punch the shark back into the ocean, but the punches have no effect on it. He is then handed a canister of Oceanic Bat-Spray, which makes the shark open its jaw and explode.
In a 2015 MythBusters episode, the hosts Adam Savage and Jamie Hyneman used an extract of dead sharks and, on two occasions, were able to drive away 10–20 Caribbean reef sharks and nurse sharks within a few seconds. The repellent consisted of extracts from the bodies of other shark species, and the sharks did not return for over 5 minutes on either occasion.
See also
Chain mail
Bear spray
References
Pesticides
Shark attack prevention | Shark repellent | [
"Biology",
"Environmental_science"
] | 1,389 | [
"Biocides",
"Toxicology",
"Pesticides"
] |
2,230,989 | https://en.wikipedia.org/wiki/Space%20Shuttle%20thermal%20protection%20system | The Space Shuttle thermal protection system (TPS) is the barrier that protected the Space Shuttle Orbiter during the extreme heat of atmospheric reentry. A secondary goal was to protect from the heat and cold of space while in orbit.
Materials
The TPS covered essentially the entire orbiter surface, and consisted of seven different materials in varying locations based on amount of required heat protection:
Reinforced carbon–carbon (RCC), used in the nose cap, the chin area between the nose cap and nose landing gear doors, the arrowhead aft of the nose landing gear door, and the wing leading edges. Used where reentry temperature exceeded .
High-temperature reusable surface insulation (HRSI) tiles, used on the orbiter underside. Made of coated LI-900 silica ceramics. Used where reentry temperature was below 1,260 °C.
Fibrous refractory composite insulation (FRCI) tiles, used to provide improved strength, durability, resistance to coating cracking and weight reduction. Some HRSI tiles were replaced by this type.
Flexible Insulation Blankets (FIB), a quilted, flexible blanket-like surface insulation. Used where reentry temperature was below .
Low-temperature Reusable Surface Insulation (LRSI) tiles, formerly used on the upper fuselage, but were mostly replaced by FIB. Used in temperature ranges roughly similar to FIB.
Toughened unipiece fibrous insulation (TUFI) tiles, a stronger, tougher tile which came into use in 1996. Used in high and low temperature areas.
Felt reusable surface insulation (FRSI). White Nomex felt blankets on the upper payload bay doors, portions of the mid fuselage and aft fuselage sides, portions of the upper wing surface and a portion of the OMS/RCS pods. Used where temperatures stayed below .
Each type of TPS had specific heat protection, impact resistance, and weight characteristics, which determined the locations where it was used and the amount used.
The shuttle TPS had three key characteristics that distinguished it from the TPS used on previous spacecraft:
Reusable Previous spacecraft generally used ablative heat shields which burned off during reentry and so could not be reused. This insulation was robust and reliable, and the single-use nature was appropriate for a single-use vehicle. By contrast, the reusable shuttle required a reusable thermal protection system.
Lightweight Previous ablative heat shields were very heavy. For example, the ablative heat shield on the Apollo Command Module comprised about 15% of the vehicle weight. The winged shuttle had much more surface area than previous spacecraft, so a lightweight TPS was crucial.
Fragile The only known technology in the early 1970s with the required thermal and weight characteristics was also so fragile, due to the very low density, that one could easily crush a TPS tile by hand.
Purpose
Discovery's under-wing surfaces are protected by thousands of high-temperature reusable surface insulation tiles.
The orbiter's aluminum structure could not withstand temperatures over without structural failure.
Aerodynamic heating during reentry would push the temperature well above this level in areas, so an effective insulator was needed.
Reentry heating
Reentry heating differs from the normal atmospheric heating associated with jet aircraft, and this governed TPS design and characteristics. The skin of high-speed jet aircraft can also become hot, but this is from friction with the atmosphere, similar to warming one's hands by rubbing them together. The orbiter reentered the atmosphere as a blunt body by having a very high (40°) angle of attack, with its broad lower surface facing the direction of flight. Over 80% of the heating the orbiter experienced during reentry was caused by compression of the air ahead of the hypersonic vehicle, in accordance with the basic thermodynamic relation between pressure and temperature. A hot shock wave was created in front of the vehicle, which deflected most of the heat and prevented the orbiter's surface from directly contacting the peak heat. Therefore, reentry heating was largely convective heat transfer between the shock wave and the orbiter's skin through superheated plasma. The key to a reusable shield against this type of heating is very low-density material, similar to how a thermos bottle inhibits convective heat transfer.
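To illustrate how strongly compression, rather than friction, drives the heating, the sketch below evaluates the calorically perfect gas stagnation-temperature relation T0 = T(1 + (γ − 1)/2 · M²). This is only a hedged, idealized estimate: real hypersonic shock layers dissociate and ionize, which keeps actual surface temperatures far below the ideal number, and the ambient temperature and Mach numbers used here are assumptions, not values from the article.

```python
def stagnation_temperature(t_ambient_k, mach, gamma=1.4):
    """Ideal, calorically perfect gas stagnation temperature.

    Real hypersonic flows dissociate and ionize, so actual shock-layer and
    surface temperatures are far lower than this idealized number; the point
    is only that compression, not friction, dominates the heating.
    """
    return t_ambient_k * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

if __name__ == "__main__":
    # Illustrative reentry-like conditions (assumed): thin upper atmosphere
    # around 220 K, entry speeds up to roughly Mach 25.
    for mach in (5, 10, 25):
        t0 = stagnation_temperature(220.0, mach)
        print(f"Mach {mach:2d}: ideal stagnation temperature ~ {t0:8.0f} K")
```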
Some high-temperature metal alloys can withstand reentry heat; they simply get hot and re-radiate the absorbed heat. This technique, called heat sink thermal protection, was planned for the X-20 Dyna-Soar winged space vehicle. However, the amount of high-temperature metal required to protect a large vehicle like the Space Shuttle Orbiter would have been very heavy and entailed a severe penalty to the vehicle's performance. Similarly, ablative TPS would be heavy, possibly disturb vehicle aerodynamics as it burned off during reentry, and require significant maintenance to reapply after each mission.
Unfortunately, the TPS tiles, which were originally specified never to take debris strikes during launch, in practice also needed to be closely inspected and repaired after each landing because of damage potentially incurred during ascent, even before new on-orbit inspection policies were established following the loss of Space Shuttle Columbia. The average replacement rate was nevertheless low; Discovery, for example, still had about 18,000 of its 24,000 original tiles at the end of its career.
Detailed description
The TPS was a system of different protection types, not just silica tiles. They are in two basic categories: tile TPS and non-tile TPS. The main selection criteria used the lightest weight protection capable of handling the heat in a given area. However, in some cases a heavier type was used if additional impact resistance was needed. The FIB blankets were primarily adopted for reduced maintenance, not for thermal or weight reasons.
Much of the shuttle was covered with LI-900 silica tiles, made from essentially very pure quartz sand. The insulation prevented heat transfer to the underlying orbiter aluminium skin and structure. These tiles were such poor heat conductors that one could hold one by the edges while it was still red hot.
There were about 24,300 unique tiles individually fitted on the vehicle, for which the orbiter has been called "the flying brickyard". Researchers at the University of Minnesota and Pennsylvania State University have performed atomistic simulations to obtain an accurate description of the interactions of atomic and molecular oxygen with silica surfaces, in order to develop better high-temperature oxidation-protection systems for the leading edges of hypersonic vehicles.
The tiles were not mechanically fastened to the vehicle, but glued. Since the brittle tiles could not flex with the underlying vehicle skin, they were glued to Nomex felt Strain Isolation Pads (SIPs) with room temperature vulcanizing (RTV) silicone adhesive, which were in turn glued to the orbiter skin. These isolated the tiles from the orbiter's structural deflections and expansions. Gluing on the 24,300 tiles required nearly two man-years of work for every flight, partly due to the fact that the glue dried quickly and new batches needed to be produced after every couple of tiles. An ad-hoc remedy that involved technicians spitting in the glue to slow down the drying process was common practice until 1988, when a tile-hazard study revealed that spit weakened the adhesive's bonding strength.
Tile types
High-temperature reusable surface insulation (HRSI)
The black HRSI tiles provided protection against temperatures up to . There were 20,548 HRSI tiles which covered the landing gear doors, external tank umbilical connection doors, and the rest of the orbiter's under surfaces. They were also used in areas on the upper forward fuselage, parts of the orbital maneuvering system pods, vertical stabilizer leading edge, elevon trailing edges, and upper body flap surface. They varied in thickness from , depending upon the heat load encountered during reentry. Except for closeout areas, these tiles were normally square. The HRSI tile was composed of high purity silica fibers. Ninety percent of the volume of the tile was empty space, giving it a very low density () making it light enough for spaceflight. The uncoated tiles were bright white in appearance and looked more like a solid ceramic than the foam-like material that they were.
The black coating on the tiles was Reaction Cured Glass (RCG) of which tetraboron silicide and borosilicate glass were some of several ingredients. RCG was applied to all but one side of the tile to protect the porous silica and to increase the heat sink properties. The coating was absent from a small margin of the sides adjacent to the uncoated (bottom) side. To waterproof the tile, dimethylethoxysilane was injected into the tiles by syringe. Densifying the tile with tetraethyl orthosilicate (TEOS) also helped to protect the silica and added additional waterproofing.
An uncoated HRSI tile held in the hand feels like a very light foam, less dense than styrofoam, and the delicate, friable material must be handled with extreme care to prevent damage. The coating feels like a thin, hard shell and encapsulates the white insulating ceramic to resolve its friability, except on the uncoated side. Even a coated tile feels very light, lighter than a same-sized block of styrofoam. As expected for silica, they are odorless and inert.
HRSI was primarily designed to withstand transition from areas of extremely low temperature (the void of space, about ) to the high temperatures of re-entry (caused by interaction, mostly compression at the hypersonic shock, between the gases of the upper atmosphere & the hull of the Space Shuttle, typically around ).
Fibrous Refractory Composite Insulation Tiles (FRCI)
The black FRCI tiles provided improved durability, resistance to coating cracking and weight reduction. Some HRSI tiles were replaced by this type.
Toughened unipiece fibrous insulation (TUFI)
A stronger, tougher tile which came into use in 1996. TUFI tiles came in high temperature black versions for use in the orbiter's underside, and lower temperature white versions for use on the upper body. While more impact resistant than other tiles, white versions conducted more heat which limited their use to the orbiter's upper body flap and main engine area. Black versions had sufficient heat insulation for the orbiter underside but had greater weight. These factors restricted their use to specific areas.
Low-temperature reusable surface insulation (LRSI)
White in color, these covered the upper wing near the leading edge. They were also used in selected areas of the forward, mid, and aft fuselage, vertical tail, and the OMS/RCS pods. These tiles protected areas where reentry temperatures are below . The LRSI tiles were manufactured in the same manner as the HRSI tiles, except that the tiles were square and had a white RCG coating made of silica compounds with shiny aluminium oxide. The white color was by design and helped to manage heat on orbit when the orbiter was exposed to direct sunlight.
These tiles were reusable for up to 100 missions with refurbishment (100 missions was also the design lifetime of each orbiter). They were carefully inspected in the Orbiter Processing Facility after each mission, and damaged or worn tiles were immediately replaced before the next mission. Fabric sheets known as gap fillers were also inserted between tiles where necessary. These allowed for a snug fit between tiles, preventing excess plasma from penetrating between them, yet allowing for thermal expansion and flexing of the underlying vehicle skin.
Prior to the introduction of FIB blankets, LRSI tiles occupied all of the areas now covered by the blankets, including the upper fuselage and the whole surface of the OMS pods. This TPS configuration was only used on Columbia and Challenger.
Non-tile TPS
Flexible Insulation Blankets/Advanced Flexible Reusable Insulation (FIB/AFRSI)
Developed after the initial delivery of Columbia and first used on the OMS pods of Challenger. This white low-density fibrous silica batting material had a quilt-like appearance, and replaced the vast majority of the LRSI tiles. They required much less maintenance than LRSI tiles yet had about the same thermal properties. After their limited use on Challenger, they were used much more extensively beginning with Discovery and replaced many of the LRSI tiles on Columbia after the loss of Challenger.
Reinforced carbon-carbon (RCC)
The light gray material which withstood reentry temperatures up to protected the wing leading edges and nose cap. Each of the orbiters' wings had 22 RCC panels about thick. T-seals between each panel allowed for thermal expansion and lateral movement between these panels and the wing.
RCC was a laminated composite material made from carbon fibres impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate was pyrolized to convert the resin to pure carbon. This was then impregnated with furfural alcohol in a vacuum chamber, then cured and pyrolized again to convert the furfural alcohol to carbon. This process was repeated three times until the desired carbon-carbon properties were achieved.
To provide oxidation resistance for reuse capability, the outer layers of the RCC were coated with silicon carbide. The silicon-carbide coating protected the carbon-carbon from oxidation. The RCC was highly resistant to fatigue loading that was experienced during ascent and entry. It was stronger than the tiles and was also used around the socket of the forward attach point of the orbiter to the External Tank to accommodate the shock loads of the explosive bolt detonation. RCC was the only TPS material that also served as structural support for part of the orbiter's aerodynamic shape: the wing leading edges and the nose cap. All other TPS components (tiles and blankets) were mounted onto structural materials that supported them, mainly the aluminium frame and skin of the orbiter.
Nomex Felt Reusable Surface Insulation (FRSI)
This white, flexible fabric offered protection at up to . FRSI covered the orbiter's upper wing surfaces, upper payload bay doors, portions of the OMS/RCS pods, and aft fuselage.
Gap fillers
Gap fillers were placed at doors and moving surfaces to minimize heating by preventing the formation of vortices. Doors and moving surfaces created open gaps in the heat protection system that had to be protected from heat. Some of these gaps were safe, but there were some areas on the heat shield where surface pressure gradients caused a crossflow of boundary layer air in those gaps.
The filler materials were made of either white AB312 fibers or black AB312 cloth covers (which contain alumina fibers). These materials were used around the leading edge of the nose cap, windshields, side hatch, wing, trailing edge of elevons, vertical stabilizer, the rudder/speed brake, body flap, and heat shield of the shuttle's main engines.
On STS-114, some of this material was dislodged and determined to pose a potential safety risk. It was possible that the gap filler could cause turbulent airflow further down the fuselage, which would result in much higher heating, potentially damaging the orbiter. The cloth was removed during a spacewalk during the mission.
Weight considerations
While reinforced carbon–carbon had the best heat protection characteristics, it was also much heavier than the silica tiles and FIBs, so it was limited to relatively small areas. In general the goal was to use the lightest weight insulation consistent with the required thermal protection. Density of each TPS type:
Total area and weight of each TPS type (used on Orbiter 102, pre-1996):
Early TPS problems
Slow tile application
Tiles often fell off and caused much of the delay in the launch of STS-1, the first shuttle mission, which was originally scheduled for 1979 but did not occur until April 1981. NASA was unused to lengthy delays in its programs, and was under great pressure from the government and military to launch soon. In March 1979 it moved the incomplete Columbia, with 7,800 of the 31,000 tiles missing, from the Rockwell International plant in Palmdale, California to Kennedy Space Center in Florida. Beyond creating the appearance of progress in the program, NASA hoped that the tiling could be finished while the rest of the orbiter was prepared. This was a mistake; some of the Rockwell tilers disliked Florida and soon returned to California, and the Orbiter Processing Facility was not designed for manufacturing and was too small for its 400 workers.
Each tile used cement that required 16 hours to cure. After the tile was affixed to the cement, a jack held it in place for another 16 hours. In March 1979 it took each worker 40 hours to install one tile; by using young, efficient college students during the summer the pace sped up to 1.8 tiles per worker per week. Thousands of tiles failed stress tests and had to be replaced. By fall NASA realized that the speed of tiling would determine the launch date. The tiles were so problematic that officials would have switched to any other thermal protection method, but none other existed.
Because it had to be ferried without all of its tiles, the gaps were filled with material to maintain the Shuttle's aerodynamics while in transit.
Concern over "zipper effect"
The tile TPS was an area of concern during shuttle development, mainly concerning adhesion reliability. Some engineers thought a failure mode could exist whereby one tile could detach, and resulting aerodynamic pressure would create a "zipper effect" stripping off other tiles. Whether during ascent or reentry, the result would be disastrous.
Concern over debris strikes
Another problem was ice or other debris impacting the tiles during ascent. This had never been fully and thoroughly solved, as the debris had never been eliminated, and the tiles remained susceptible to damage from it. NASA's final strategy for mitigating this problem was to aggressively inspect for, assess, and address any damage that may occur, while on orbit and before reentry, in addition to on the ground between flights.
Early tile repair plans
These concerns were sufficiently great that NASA did significant work developing an emergency-use tile repair kit which the STS-1 crew could use before deorbiting. By December 1979, prototypes and early procedures were completed, most of which involved equipping the astronauts with a special in-space repair kit and a jet pack called the Manned Maneuvering Unit, or MMU, developed by Martin Marietta.
Another element was a maneuverable work platform which would secure an MMU-propelled spacewalking astronaut to the fragile tiles beneath the orbiter. The concept used electrically controlled adhesive cups which would lock the work platform into position on the featureless tile surface. About one year before the 1981 STS-1 launch, NASA decided the repair capability was not worth the additional risk and training, so discontinued development. There were unresolved problems with the repair tools and techniques; also further tests indicated the tiles were unlikely to come off. The first shuttle mission did suffer several tile losses, but they were in non-critical areas, and no "zipper effect" occurred.
Columbia accident and aftermath
On February 1, 2003, the Space Shuttle Columbia was destroyed on reentry due to a failure of the TPS. The investigation team found and reported that the probable cause of the accident was that during launch, a piece of foam debris punctured an RCC panel on the left wing's leading edge and allowed hot gases from the reentry to enter the wing and disintegrate the wing from within, leading to eventual loss of control and breakup of the shuttle.
The Space Shuttle's thermal protection system received a number of controls and modifications after the disaster. They were applied to the three remaining shuttles, Discovery, Atlantis and Endeavour in preparation for subsequent mission launches into space.
On 2005's STS-114 mission, in which Discovery made the first flight to follow the Columbia accident, NASA took a number of steps to verify that the TPS was undamaged. The Orbiter Boom Sensor System, a new extension to the Remote Manipulator System, was used to perform laser imaging of the TPS to inspect for damage. Prior to docking with the International Space Station, Discovery performed a Rendezvous Pitch Maneuver, simply a 360° backflip rotation, allowing all areas of the vehicle to be photographed from ISS. Two gap fillers were protruding from the orbiter's underside more than the nominally allowed distance, and the agency cautiously decided it would be best to attempt to remove the fillers or cut them flush rather than risk the increased heating they would cause. Even though each one protruded less than , it was believed that leaving them could cause heating increases of 25% upon reentry.
Because the orbiter did not have any handholds on its underside (as they would cause much more trouble with reentry heating than the protruding gap fillers of concern), astronaut Stephen K. Robinson worked from the ISS's robotic arm, Canadarm2. Because the TPS tiles were quite fragile, there had been concern that anyone working under the vehicle could cause more damage to the vehicle than was already there, but NASA officials felt that leaving the gap fillers alone was a greater risk. In the event, Robinson was able to pull the gap fillers free by hand, and caused no damage to the TPS on Discovery.
Tile donations
With the impending Space Shuttle retirement, NASA was donating TPS tiles to schools, universities, and museums for the cost of shipping (US$23.40 each). About 7,000 tiles were available on a first-come, first-served basis, but limited to one per institution.
See also
Space Shuttle program
Space Shuttle Columbia disaster
Columbia Accident Investigation Board
References
"When the Space Shuttle finally flies", article written by Rick Gore. National Geographic (pp. 316–347. Vol. 159, No. 3. March 1981). http://www.datamanos2.com/columbia/natgeomar81.htmlSpace Shuttle Operator's Manual, by Kerry Mark Joels and Greg Kennedy (Ballantine Books, 1982).The Voyages of Columbia: The First True Spaceship, by Richard S. Lewis (Columbia University Press, 1984).A Space Shuttle Chronology, by John F. Guilmartin and John Mauer (NASA Johnson Space Center, 1988).Space Shuttle: The Quest Continues, by George Forres (Ian Allan, 1989).Information Summaries: Countdown! NASA Launch Vehicles and Facilities, (NASA PMS 018-B (KSC), October 1991).Space Shuttle: The History of Developing the National Space Transportation System, by Dennis Jenkins (Walsworth Publishing Company, 1996).U.S. Human Spaceflight: A Record of Achievement, 1961–1998. NASA – Monographs in Aerospace History No. 9, July 1998.Space Shuttle Thermal Protection System'' by Gary Milgrom. February, 2013. Free iTunes ebook download. https://itunes.apple.com/us/book/space-shuttle-thermal-protection/id591095660?mt=11
Notes
External links
https://web.archive.org/web/20060909094330/http://www-pao.ksc.nasa.gov/kscpao/nasafact/tps.htm
https://web.archive.org/web/20110707103505/http://ww3.albint.com/about/research/Pages/protectionSystems.aspx
http://science.ksc.nasa.gov/shuttle/technology/sts-newsref/sts_sys.html
https://web.archive.org/web/20160307090308/http://science.ksc.nasa.gov/shuttle/nexgen/Nexgen_Downloads/Shuttle_Gordon_TPS-PUBLIC_Appendix.pdf
Space Shuttle program
Thermal protection
Atmospheric entry | Space Shuttle thermal protection system | [
"Engineering"
] | 5,026 | [
"Atmospheric entry",
"Aerospace engineering"
] |
2,231,059 | https://en.wikipedia.org/wiki/Superheavy%20element | Superheavy elements, also known as transactinide elements, transactinides, or super-heavy elements, or superheavies for short, are the chemical elements with atomic number greater than 104. The superheavy elements are those beyond the actinides in the periodic table; the last actinide is lawrencium (atomic number 103). By definition, superheavy elements are also transuranium elements, i.e., having atomic numbers greater than that of uranium (92). Depending on the definition of group 3 adopted by authors, lawrencium may also be included to complete the 6d series.
Glenn T. Seaborg first proposed the actinide concept, which led to the acceptance of the actinide series. He also proposed a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153 (though more recent work suggests the end of the superactinide series to occur at element 157 instead). The transactinide seaborgium was named in his honor.
Superheavies are radioactive and have only been obtained synthetically in laboratories. No macroscopic sample of any of these elements has ever been produced. Superheavies are all named after physicists and chemists or important locations involved in the synthesis of the elements.
IUPAC defines an element to exist if its lifetime is longer than 10^−14 seconds, which is the time it takes for the nucleus to form an electron cloud.
The known superheavies form part of the 6d and 7p series in the periodic table. Except for rutherfordium and dubnium (and lawrencium if it is included), even the longest-lived known isotopes of superheavies have half-lives of minutes or less. The element naming controversy involved elements 102–109. Some of these elements thus used systematic names for many years after their discovery was confirmed. (Usually the systematic names are replaced with permanent names proposed by the discoverers relatively soon after a discovery has been confirmed.)
Introduction
Synthesis of superheavy nuclei
A superheavy atomic nucleus is created in a nuclear reaction that combines two other nuclei of unequal size into one; roughly, the more unequal the two nuclei in terms of mass, the greater the possibility that the two react. The material made of the heavier nuclei is made into a target, which is then bombarded by the beam of lighter nuclei. Two nuclei can only fuse into one if they approach each other closely enough; normally, nuclei (all positively charged) repel each other due to electrostatic repulsion. The strong interaction can overcome this repulsion but only within a very short distance from a nucleus; beam nuclei are thus greatly accelerated in order to make such repulsion insignificant compared to the velocity of the beam nucleus. The energy applied to the beam nuclei to accelerate them can cause them to reach speeds as high as one-tenth of the speed of light. However, if too much energy is applied, the beam nucleus can fall apart.
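A rough feel for the electrostatic repulsion the beam must overcome comes from the textbook Coulomb-barrier estimate below, which treats the two nuclei as touching charged spheres of radius r0·A^(1/3). The choice r0 = 1.2 fm and the use of the 48Ca + 249Bk pairing (the reaction used to make tennessine) are assumptions made here for illustration; the result is only an order-of-magnitude sketch.

```python
def coulomb_barrier_mev(z1, a1, z2, a2, r0_fm=1.2):
    """Textbook estimate of the Coulomb barrier between two touching nuclei.

    Point charges separated by R1 + R2 with R = r0 * A**(1/3); r0 = 1.2 fm is
    an assumed value. e^2 / (4*pi*eps0) = 1.44 MeV*fm.
    """
    e2 = 1.44  # MeV * fm
    r = r0_fm * (a1 ** (1 / 3) + a2 ** (1 / 3))  # separation at contact, fm
    return z1 * z2 * e2 / r

if __name__ == "__main__":
    # A 48Ca beam on a 249Bk target, the combination used for tennessine (element 117).
    barrier = coulomb_barrier_mev(z1=20, a1=48, z2=97, a2=249)
    print(f"Estimated Coulomb barrier: {barrier:.0f} MeV")  # roughly a couple hundred MeV
```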
Coming close enough alone is not enough for two nuclei to fuse: when two nuclei approach each other, they usually remain together for about 10^−20 seconds and then part ways (not necessarily in the same composition as before the reaction) rather than form a single nucleus. This happens because during the attempted formation of a single nucleus, electrostatic repulsion tears apart the nucleus that is being formed. Each pair of a target and a beam is characterized by its cross section—the probability that fusion will occur if two nuclei approach one another expressed in terms of the transverse area that the incident particle must hit in order for the fusion to occur. This fusion may occur as a result of the quantum effect in which nuclei can tunnel through electrostatic repulsion. If the two nuclei can stay close past that phase, multiple nuclear interactions result in redistribution of energy and an energy equilibrium.
The resulting merger is an excited state—termed a compound nucleus—and thus it is very unstable. To reach a more stable state, the temporary merger may fission without formation of a more stable nucleus. Alternatively, the compound nucleus may eject a few neutrons, which would carry away the excitation energy; if the latter is not sufficient for a neutron expulsion, the merger would produce a gamma ray. This happens in about 10^−16 seconds after the initial nuclear collision and results in creation of a more stable nucleus. The definition by the IUPAC/IUPAP Joint Working Party (JWP) states that a chemical element can only be recognized as discovered if a nucleus of it has not decayed within 10^−14 seconds. This value was chosen as an estimate of how long it takes a nucleus to acquire electrons and thus display its chemical properties.
Decay and detection
The beam passes through the target and reaches the next chamber, the separator; if a new nucleus is produced, it is carried with this beam. In the separator, the newly produced nucleus is separated from other nuclides (that of the original beam and any other reaction products) and transferred to a surface-barrier detector, which stops the nucleus. The exact location of the upcoming impact on the detector is marked; also marked are its energy and the time of the arrival. The transfer takes about 10^−6 seconds; in order to be detected, the nucleus must survive this long. The nucleus is recorded again once its decay is registered, and the location, the energy, and the time of the decay are measured.
Stability of a nucleus is provided by the strong interaction. However, its range is very short; as nuclei become larger, its influence on the outermost nucleons (protons and neutrons) weakens. At the same time, the nucleus is torn apart by electrostatic repulsion between protons, and its range is not limited. Total binding energy provided by the strong interaction increases linearly with the number of nucleons, whereas electrostatic repulsion increases with the square of the atomic number, i.e. the latter grows faster and becomes increasingly important for heavy and superheavy nuclei. Superheavy nuclei are thus theoretically predicted and have so far been observed to predominantly decay via decay modes that are caused by such repulsion: alpha decay and spontaneous fission. Almost all alpha emitters have over 210 nucleons, and the lightest nuclide primarily undergoing spontaneous fission has 238. In both decay modes, nuclei are inhibited from decaying by corresponding energy barriers for each mode, but they can be tunneled through.
Alpha particles are commonly produced in radioactive decays because the mass of an alpha particle per nucleon is small enough to leave some energy for the alpha particle to be used as kinetic energy to leave the nucleus. Spontaneous fission is caused by electrostatic repulsion tearing the nucleus apart and produces various nuclei in different instances of identical nuclei fissioning. As the atomic number increases, spontaneous fission rapidly becomes more important: spontaneous fission partial half-lives decrease by 23 orders of magnitude from uranium (element 92) to nobelium (element 102), and by 30 orders of magnitude from thorium (element 90) to fermium (element 100). The earlier liquid drop model thus suggested that spontaneous fission would occur nearly instantly due to disappearance of the fission barrier for nuclei with about 280 nucleons. The later nuclear shell model suggested that nuclei with about 300 nucleons would form an island of stability in which nuclei will be more resistant to spontaneous fission and will primarily undergo alpha decay with longer half-lives. Subsequent discoveries suggested that the predicted island might be further than originally anticipated; they also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects. Experiments on lighter superheavy nuclei, as well as those closer to the expected island, have shown greater than previously anticipated stability against spontaneous fission, showing the importance of shell effects on nuclei.
Alpha decays are registered by the emitted alpha particles, and the decay products are easy to determine before the actual decay; if such a decay or a series of consecutive decays produces a known nucleus, the original product of a reaction can be easily determined. (That all decays within a decay chain were indeed related to each other is established by the location of these decays, which must be in the same place.) The known nucleus can be recognized by the specific characteristics of decay it undergoes such as decay energy (or more specifically, the kinetic energy of the emitted particle). Spontaneous fission, however, produces various nuclei as products, so the original nuclide cannot be determined from its daughters.
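The chain-reconstruction logic described above can be caricatured in a few lines: alpha events recorded at the same detector position are matched, by energy, against a table of known alpha lines. The nuclide names below follow real parent–daughter relationships (288Mc → 284Nh → 280Rg), but the energies, tolerance, and "observed" values are hypothetical placeholders, not evaluated nuclear data.

```python
# Toy illustration of decay-chain identification: consecutive alpha decays seen
# at one detector position are matched, by energy, against tabulated alpha lines.
# The energies below are hypothetical values chosen for illustration only.

KNOWN_ALPHA_LINES_MEV = {
    "Mc-288": 10.5,   # hypothetical value
    "Nh-284": 10.0,   # hypothetical value
    "Rg-280": 9.8,    # hypothetical value
}

def identify_chain(observed_energies_mev, tolerance_mev=0.2):
    """Return the nuclide label whose tabulated alpha energy matches each
    observed energy, in order; None marks an unidentified step."""
    chain = []
    for e in observed_energies_mev:
        match = None
        for nuclide, e_ref in KNOWN_ALPHA_LINES_MEV.items():
            if abs(e - e_ref) <= tolerance_mev:
                match = nuclide
                break
        chain.append(match)
    return chain

if __name__ == "__main__":
    # Three alphas seen at one detector pixel, in time order (made-up numbers).
    observed = [10.46, 9.95, 9.72]
    print(identify_chain(observed))   # ['Mc-288', 'Nh-284', 'Rg-280']
```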
The information available to physicists aiming to synthesize a superheavy element is thus the information collected at the detectors: location, energy, and time of arrival of a particle to the detector, and those of its decay. The physicists analyze this data and seek to conclude that it was indeed caused by a new element and could not have been caused by a different nuclide than the one claimed. Often, provided data is insufficient for a conclusion that a new element was definitely created and there is no other explanation for the observed effects; errors in interpreting data have been made.
History
Early predictions
The heaviest element known at the end of the 19th century was uranium, with an atomic mass of about 240 (now known to be 238) amu. Accordingly, it was placed in the last row of the periodic table; this fueled speculation about the possible existence of elements heavier than uranium and why A = 240 seemed to be the limit. Following the discovery of the noble gases, beginning with argon in 1895, the possibility of heavier members of the group was considered. Danish chemist Julius Thomsen proposed in 1895 the existence of a sixth noble gas with Z = 86, A = 212 and a seventh with Z = 118, A = 292, the last closing a 32-element period containing thorium and uranium. In 1913, Swedish physicist Johannes Rydberg extended Thomsen's extrapolation of the periodic table to include even heavier elements with atomic numbers up to 460, but he did not believe that these superheavy elements existed or occurred in nature.
In 1914, German physicist Richard Swinne proposed that elements heavier than uranium, such as those around Z = 108, could be found in cosmic rays. He suggested that these elements may not necessarily have decreasing half-lives with increasing atomic number, leading to speculation about the possibility of some longer-lived elements at Z = 98–102 and Z = 108–110 (though separated by short-lived elements). Swinne published these predictions in 1926, believing that such elements might exist in Earth's core, iron meteorites, or the ice caps of Greenland where they had been locked up from their supposed cosmic origin.
Discoveries
Work performed from 1961 to 2013 at four labs – Lawrence Berkeley National Laboratory in the US, the Joint Institute for Nuclear Research in the USSR (later Russia), the GSI Helmholtz Centre for Heavy Ion Research in Germany, and Riken in Japan – identified and confirmed the elements lawrencium to oganesson according to the criteria of the IUPAC–IUPAP Transfermium Working Groups and subsequent Joint Working Parties. These discoveries complete the seventh row of the periodic table. The next two elements, ununennium (Z = 119) and unbinilium (Z = 120), have not yet been synthesized. They would begin an eighth period.
List of elements
103 Lawrencium, Lr, for Ernest Lawrence; sometimes but not always included
104 Rutherfordium, Rf, for Ernest Rutherford
105 Dubnium, Db, for the town of Dubna, near Moscow
106 Seaborgium, Sg, for Glenn T. Seaborg
107 Bohrium, Bh, for Niels Bohr
108 Hassium, Hs, for Hassia (Hesse), location of Darmstadt
109 Meitnerium, Mt, for Lise Meitner
110 Darmstadtium, Ds, for Darmstadt
111 Roentgenium, Rg, for Wilhelm Röntgen
112 Copernicium, Cn, for Nicolaus Copernicus
113 Nihonium, Nh, for Nihon (Japan), location of the Riken institute
114 Flerovium, Fl, for Russian physicist Georgy Flyorov
115 Moscovium, Mc, for Moscow
116 Livermorium, Lv, for Lawrence Livermore National Laboratory
117 Tennessine, Ts, for Tennessee, location of Oak Ridge National Laboratory
118 Oganesson, Og, for Russian physicist Yuri Oganessian
Characteristics
Due to their short half-lives (for example, the most stable known isotope of seaborgium has a half-life of 14 minutes, and half-lives decrease with increasing atomic number) and the low yield of the nuclear reactions that produce them, new methods have had to be created to determine their gas-phase and solution chemistry based on very small samples of a few atoms each. Relativistic effects become very important in this region of the periodic table, causing the filled 7s orbitals, empty 7p orbitals, and filling 6d orbitals to all contract inward toward the atomic nucleus. This causes a relativistic stabilization of the 7s electrons and makes the 7p orbitals accessible in low excitation states.
Elements 103 to 112, lawrencium to copernicium, form the 6d series of transition elements. Experimental evidence shows that elements 103–108 behave as expected for their position in the periodic table, as heavier homologs of lutetium through osmium. They are expected to have ionic radii between those of their 5d transition metal homologs and their actinide pseudohomologs: for example, Rf is calculated to have ionic radius 76 pm, between the values for Hf (71 pm) and Th (94 pm). Their ions should also be less polarizable than those of their 5d homologs. Relativistic effects are expected to reach a maximum at the end of this series, at roentgenium (element 111) and copernicium (element 112). Nevertheless, many important properties of the transactinides are still not yet known experimentally, though theoretical calculations have been performed.
Elements 113 to 118, nihonium to oganesson, should form a 7p series, completing the seventh period in the periodic table. Their chemistry will be greatly influenced by the very strong relativistic stabilization of the 7s electrons and a strong spin–orbit coupling effect "tearing" the 7p subshell apart into two sections, one more stabilized (7p1/2, holding two electrons) and one more destabilized (7p3/2, holding four electrons). Lower oxidation states should be stabilized here, continuing group trends, as both the 7s and 7p1/2 electrons exhibit the inert-pair effect. These elements are expected to largely continue to follow group trends, though with relativistic effects playing an increasingly larger role. In particular, the large 7p splitting results in an effective shell closure at flerovium (element 114) and a hence much higher than expected chemical activity for oganesson (element 118).
Element 118 is the last element that has been synthesized. The next two elements, 119 and 120, should form an 8s series and be an alkali and alkaline earth metal respectively. The 8s electrons are expected to be relativistically stabilized, so that the trend toward higher reactivity down these groups will reverse and the elements will behave more like their period 5 homologs, rubidium and strontium. The 7p3/2 orbital is still relativistically destabilized, potentially giving these elements larger ionic radii and perhaps even being able to participate chemically. In this region, the 8p electrons are also relativistically stabilized, resulting in a ground-state 8s^2 8p^1 valence electron configuration for element 121. Large changes are expected to occur in the subshell structure in going from element 120 to element 121: for example, the radius of the 5g orbitals should drop drastically, from 25 Bohr units in element 120 in the excited [Og] 5g 8s configuration to 0.8 Bohr units in element 121 in the excited [Og] 5g 7d 8s configuration, in a phenomenon called "radial collapse". Element 122 should add either a further 7d or a further 8p electron to element 121's electron configuration. Elements 121 and 122 should be similar to actinium and thorium respectively.
At element 121, the superactinide series is expected to begin, when the 8s electrons and the filling 8p, 7d, 6f, and 5g subshells determine the chemistry of these elements. Complete and accurate calculations are not available for elements beyond 123 because of the extreme complexity of the situation: the 5g, 6f, and 7d orbitals should have about the same energy level, and in the region of element 160 the 9s, 8p, and 9p orbitals should also be about equal in energy. This will cause the electron shells to mix so that the block concept no longer applies very well, and will also result in novel chemical properties that will make positioning these elements in a periodic table very difficult.
Beyond superheavy elements
It has been suggested that elements beyond Z = 126 be called beyond superheavy elements. Other sources refer to elements around Z = 164 as hyperheavy elements.
See also
Bose–Einstein condensate (also known as Superatom)
Island of stability
Notes
References
Bibliography
pp. 030001-1–030001-17, pp. 030001-18–030001-138, Table I. The NUBASE2016 table of nuclear and decay properties
Nuclear physics
Sets of chemical elements
Synthetic elements | Superheavy element | [
"Physics",
"Chemistry"
] | 3,635 | [
"Synthetic materials",
"Synthetic elements",
"Radioactivity",
"Nuclear physics"
] |
2,231,292 | https://en.wikipedia.org/wiki/Universally%20measurable%20set | In mathematics, a subset of a Polish space is universally measurable if it is measurable with respect to every complete probability measure on that measures all Borel subsets of . In particular, a universally measurable set of reals is necessarily Lebesgue measurable (see below).
Every analytic set is universally measurable. It follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set is universally measurable.
Finiteness condition
The condition that the measure be a probability measure (that is, that the measure of the whole space be 1) is less restrictive than it may appear. For example, Lebesgue measure on the reals is not a probability measure, yet every universally measurable set is Lebesgue measurable. To see this, divide the real line into countably many intervals of length 1; say, N0=[0,1), N1=[1,2), N2=[-1,0), N3=[2,3), N4=[-2,-1), and so on. Now letting μ be Lebesgue measure, define a new measure ν by
ν(S) = Σn=0..∞ 2^−(n+1) μ(S ∩ Nn).
Then ν is easily seen to be a probability measure on the reals, and a set is ν-measurable if and only if it is Lebesgue measurable. More generally a universally measurable set must be measurable with respect to every sigma-finite measure that measures all Borel sets.
Example contrasting with Lebesgue measurability
Suppose A is a subset of the Cantor space; that is, A is a set of infinite sequences of zeroes and ones. By putting a binary point before such a sequence, the sequence can be viewed as a real number between 0 and 1 (inclusive), with some unimportant ambiguity. Thus we can think of A as a subset of the interval [0,1], and evaluate its Lebesgue measure, if that is defined. That value is sometimes called the coin-flipping measure of A, because it is the probability of producing a sequence of heads and tails that is an element of A upon flipping a fair coin infinitely many times.
Now it follows from the axiom of choice that there are some such A without a well-defined Lebesgue measure (or coin-flipping measure). That is, for such an A, the probability that the sequence of flips of a fair coin will wind up in A is not well-defined. This is a pathological property of A that says that A is "very complicated" or "ill-behaved".
From such a set A, form a new set B by performing the following operation on each sequence in A: intersperse a 0 at every even position in the sequence, moving the other bits to make room. Although B is not intuitively any "simpler" or "better-behaved" than A, the probability that the sequence of flips of a fair coin will be in B is well-defined. Indeed, to be in B, the coin must come up tails on every even-numbered flip, which happens with probability zero.
However, B is not universally measurable. To see this, we can test it against a biased coin that always comes up tails on even-numbered flips and is fair on odd-numbered flips. For a set of sequences to be universally measurable, an arbitrarily biased coin may be used (even one that can "remember" the sequence of flips that has gone before) and the probability that the sequence of its flips ends up in the set must be well-defined. However, when B is tested by the coin just mentioned (the one that always comes up tails on even-numbered flips and is fair on odd-numbered flips), the probability of hitting B is not well-defined (for the same reason why A cannot be tested by the fair coin). Thus, B is not universally measurable.
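A small simulation can make the fair-coin calculation above concrete. The sketch below checks only the decidable half of membership in B (all even-indexed bits must be 0) on finite prefixes and shows that the fair-coin hit rate halves with every extra even position, hence tends to zero; the dependence on the pathological set A is deliberately left out, since no program can decide it. Prefix lengths, trial counts, and the seed are arbitrary choices.

```python
import random

def passes_even_position_test(bits):
    """Necessary condition for a sequence to lie in the interspersed set B:
    every even-indexed bit (0-based) must be 0. Full membership also depends
    on the odd-indexed bits lying in the original pathological set A, which
    cannot be decided by any program, so only this checkable part is tested."""
    return all(b == 0 for b in bits[0::2])

def fair_coin_hit_rate(prefix_len, trials=100_000, seed=0):
    """Fraction of fair-coin prefixes of the given length that pass the test."""
    rng = random.Random(seed)
    hits = sum(
        passes_even_position_test([rng.randint(0, 1) for _ in range(prefix_len)])
        for _ in range(trials)
    )
    return hits / trials

if __name__ == "__main__":
    # The pass rate halves with each additional even position, tending to zero.
    for n in (4, 8, 16, 32):
        print(f"prefix length {n:2d}: hit rate ~ {fair_coin_hit_rate(n):.5f}")
    # A coin that always gives 0 on even flips passes this test automatically;
    # whether its flips land in B then depends only on whether the odd-indexed
    # bits land in A, which has no well-defined probability.
```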
References
Descriptive set theory
Determinacy
Measure theory | Universally measurable set | [
"Mathematics"
] | 798 | [
"Game theory",
"Determinacy"
] |
2,231,589 | https://en.wikipedia.org/wiki/Pseudorandom%20ensemble | In cryptography, a pseudorandom ensemble is a family of variables meeting the following criteria:
Let be a uniform ensemble
and be an ensemble. The ensemble is called pseudorandom if and
are indistinguishable in polynomial time.
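A toy illustration of the definition: the sketch below estimates the acceptance probabilities of one simple polynomial-time test on a deliberately biased bit source versus an (approximately) uniform one. The bias, block length, and trial count are arbitrary assumptions; for a genuinely pseudorandom ensemble, every such efficient test would show only a negligible gap, something no single finite experiment can certify.

```python
import os
import random

_biased_rng = random.Random(1)  # fixed seed so the toy experiment is repeatable

def bits_from_biased_source(n, p_one=0.45):
    """A toy 'candidate' ensemble X_n: each bit is 1 with probability 0.45."""
    return [1 if _biased_rng.random() < p_one else 0 for _ in range(n)]

def bits_from_uniform_source(n):
    """Samples approximating the uniform ensemble U_n, using OS randomness."""
    return [byte & 1 for byte in os.urandom(n)]

def test_d(bits):
    """A simple polynomial-time test D: output 1 if at least half the bits are 1."""
    return 1 if 2 * sum(bits) >= len(bits) else 0

def acceptance_probability(sample_fn, n=256, trials=2000):
    """Estimate Pr[D(sample) = 1] by Monte Carlo."""
    return sum(test_d(sample_fn(n)) for _ in range(trials)) / trials

if __name__ == "__main__":
    p_x = acceptance_probability(bits_from_biased_source)
    p_u = acceptance_probability(bits_from_uniform_source)
    print(f"Pr[D(X_n)=1] ~ {p_x:.3f}  Pr[D(U_n)=1] ~ {p_u:.3f}  gap ~ {abs(p_x - p_u):.3f}")
```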
References
Goldreich, Oded (2001). Foundations of Cryptography: Volume 1, Basic Tools. Cambridge University Press. . Fragments available at the author's web site.
Algorithmic information theory
Pseudorandomness
Cryptography | Pseudorandom ensemble | [
"Mathematics",
"Engineering"
] | 98 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
2,231,692 | https://en.wikipedia.org/wiki/Bohm%20diffusion | The diffusion of plasma across a magnetic field was conjectured to follow the Bohm diffusion scaling as indicated from the early plasma experiments of very lossy machines. This predicted that the rate of diffusion was linear with temperature and inversely linear with the strength of the confining magnetic field.
The rate predicted by Bohm diffusion is much higher than the rate predicted by classical diffusion, which develops from a random walk within the plasma. The classical model scaled inversely with the square of the magnetic field. If the classical model is correct, small increases in the field lead to much longer confinement times. If the Bohm model is correct, magnetically confined fusion would not be practical.
Early fusion energy machines appeared to behave according to Bohm's model, and by the 1960s there was a significant stagnation within the field. The introduction of the tokamak in 1968 was the first evidence that the Bohm model did not hold for all machines. Bohm predicts rates that are too fast for these machines, and classical too slow; studying these machines has led to the neoclassical diffusion concept.
Description
Bohm diffusion is characterized by a diffusion coefficient equal to
D_B = (1/16) kB T / (e B),
where B is the magnetic field strength, T is the electron gas temperature, e is the elementary charge, kB is the Boltzmann constant.
History
It was first observed in 1949 by David Bohm, E. H. S. Burhop, and Harrie Massey while studying magnetic arcs for use in isotope separation. It has since been observed that many other plasmas follow this law. Fortunately there are exceptions where the diffusion rate is lower; otherwise there would be no hope of achieving practical fusion energy. In Bohm's original work he notes that the fraction 1/16 is not exact; in particular, "the exact value of [the diffusion coefficient] is uncertain within a factor of 2 or 3." Lyman Spitzer considered this fraction as a factor related to plasma instability.
Approximate derivation
Generally diffusion can be modeled as a random walk of steps of length δ and time τ. If the diffusion is collisional, then δ is the mean free path and τ is the inverse of the collision frequency. The diffusion coefficient D can be expressed variously as
D = δ²/τ = δ v = v² τ,
where v = δ/τ is the velocity between collisions.
In a magnetized plasma, the collision frequency is usually small compared to the gyrofrequency, so that the step size is the gyroradius ρ and the step time is the collision time τ, which is related to the collision frequency ν through τ = 1/ν, leading to D⊥ = ρ²ν (classical diffusion).
On the other hand, if the collision frequency is larger than the gyrofrequency, then the particles can be considered to move freely with the thermal velocity vth between collisions, and the diffusion coefficient takes the form D = vth²/ν. In this regime, the diffusion is maximum when the collision frequency is equal to the gyrofrequency, in which case D = vth²/ωc. Substituting ρ = vth/ωc, v = vth, and ωc = eB/m (the cyclotron frequency), we arrive at
D = kB T / (e B),
which is the Bohm scaling. Considering the approximate nature of this derivation, the missing 1/16 in front is no cause for concern.
Bohm diffusion is typically greater than classical diffusion. The fact that classical diffusion and Bohm diffusion scale as different powers of the magnetic field is often used to distinguish between the two.
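A quick numerical comparison of the two scalings, using the formulas above: D_Bohm = kBT/(16eB) falls off as 1/B, while the classical estimate ρ²ν falls off as 1/B². The electron temperature, collision frequency, and field strengths below are illustrative assumptions, not values from the text.

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C (also converts eV to J)
M_E = 9.1093837015e-31      # electron mass, kg

def bohm_diffusion(t_ev, b_tesla):
    """D_Bohm = (1/16) kB T / (e B); with T expressed in eV this reduces to T[eV] / (16 B)."""
    return t_ev / (16.0 * b_tesla)

def classical_diffusion(t_ev, b_tesla, collision_freq):
    """Classical cross-field estimate D = rho**2 * nu for electrons, with
    gyroradius rho = m v_th / (e B); the collision frequency nu is an assumed input."""
    v_th = math.sqrt(t_ev * E_CHARGE / M_E)     # thermal speed, m/s
    rho = M_E * v_th / (E_CHARGE * b_tesla)     # electron gyroradius, m
    return rho ** 2 * collision_freq

if __name__ == "__main__":
    # Illustrative values (assumed): 100 eV electrons, a 1 MHz collision
    # frequency, and two field strengths.
    for b in (1.0, 2.0):
        d_bohm = bohm_diffusion(100.0, b)
        d_cl = classical_diffusion(100.0, b, collision_freq=1e6)
        print(f"B = {b:.0f} T: D_Bohm = {d_bohm:5.2f} m^2/s, D_classical = {d_cl:.2e} m^2/s")
    # Doubling B halves D_Bohm (1/B) but quarters D_classical (1/B**2).
```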
Further research
In light of the calculation above, it is tempting to think of Bohm diffusion as classical diffusion with an anomalous collision rate that maximizes the transport, but the physical picture is different. Anomalous diffusion is the result of turbulence. Regions of higher or lower electric potential result in eddies because the plasma moves around them with the E-cross-B drift velocity equal to E/B. These eddies play a similar role to the gyro-orbits in classical diffusion, except that the physics of the turbulence can be such that the decorrelation time is approximately equal to the turn-over time, resulting in Bohm scaling. Another way of looking at it is that the turbulent electric field is approximately equal to the potential perturbation divided by the scale length , and the potential perturbation can be expected to be a sizeable fraction of the kBT/e. The turbulent diffusion constant is then independent of the scale length and is approximately equal to the Bohm value.
The theoretical understanding of plasma diffusion, especially Bohm diffusion, remained elusive until the 1970s, when Taylor and McNamara put forward a 2D guiding-center plasma model. The concepts of the negative-temperature state and of convective cells contributed much to the understanding of the diffusion. The underlying physics may be explained as follows. The process can be a transport driven by thermal fluctuations, corresponding to the lowest possible random electric fields. The low-frequency spectrum will cause the E×B drift. Due to the long-range nature of the Coulomb interaction, the wave coherence time is long enough to allow virtually free streaming of particles across the field lines. Thus, the transport would be the only mechanism to limit the run of its own course, resulting in self-correction by quenching the coherent transport through diffusive damping. To quantify these statements, we may write down the diffusive damping time as
$$\tau \approx \frac{1}{k_{\perp}^{2}D},$$
where k⊥ is the wave number perpendicular to the magnetic field. Therefore, the step size is $\delta \approx (E/B)\,\tau$, and the diffusion coefficient is
$$D \approx \frac{\delta^{2}}{\tau} \approx \left(\frac{E}{B}\right)^{2}\tau \approx \frac{E}{k_{\perp}B}.$$
This yields a B−1 scaling law for the diffusion in the two-dimensional plasma. The thermal fluctuation is typically a small portion of the particle thermal energy. It is reduced by the plasma parameter
and is given by
where n0 is the plasma density, λD is the Debye length, and T is the plasma temperature. Taking and substituting the electric field by the thermal energy, we would have
The 2D plasma model becomes invalid when the parallel decoherence is significant.
An effective diffusion mechanism combining effects from the ExB drift and the cyclotron resonance was proposed, predicting a scaling law of B−3/2.
In 2015, a new exact explanation for Bohm's original experiment was reported, in which the cross-field diffusion measured in Bohm's experiment and in Simon's experiment was explained by the combination of the ion gyro-center shift and the short-circuit effect. The ion gyro-center shift occurs when an ion collides with a neutral and exchanges momentum; a typical example is the ion-neutral charge exchange reaction. One-directional shifts of the gyro-centers take place when ions are in perpendicular (to the magnetic field) drift motion such as the diamagnetic drift. The electron gyro-center shift is relatively small, since the electron gyro-radius is much smaller than the ion's, so it can be disregarded. Once ions move across the magnetic field by the gyro-center shift, this movement generates a spontaneous electric imbalance between the inside and outside of the plasma. However, this imbalance is immediately compensated by electron flow through the parallel path and the conducting end wall when the plasma is contained in a cylindrical structure, as in Bohm's and Simon's experiments. Simon recognized this electron flow and named it the 'short circuit' effect in 1955. With the help of the short-circuit effect, the ion flow induced by the diamagnetic drift becomes the whole plasma flux, which is proportional to the density gradient since the diamagnetic drift includes the pressure gradient. For approximately constant temperature over the diffusion region, the diamagnetic drift can be described as $v_{\mathrm{dia}} \approx \frac{k_{\mathrm{B}}T}{eB}\frac{\nabla n}{n}$ (here n is the density). When the particle flux is proportional to $\nabla n$, the remaining factor is the diffusion coefficient, so the diffusion is naturally proportional to $k_{\mathrm{B}}T/(eB)$. The other, front coefficient of this diffusion is a function of the ratio between the charge exchange reaction rate and the gyro frequency; a careful analysis shows that this front coefficient for Bohm's experiment was in the range of 1/13 to 1/40. The gyro-center shift analysis also reported a turbulence-induced diffusion coefficient responsible for the anomalous diffusion in many fusion devices. This means that two different diffusion mechanisms (the arc-discharge diffusion as in Bohm's experiment and the turbulence-induced diffusion as in the tokamak) have come to be called by the same name, "Bohm diffusion".
See also
Classical diffusion
Plasma diffusion
References
Diffusion
Plasma phenomena | Bohm diffusion | [
"Physics",
"Chemistry"
] | 1,683 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Plasma physics",
"Plasma phenomena"
] |
2,231,743 | https://en.wikipedia.org/wiki/Explicit%20knowledge | Explicit knowledge (also expressive knowledge) is knowledge that can be readily articulated, conceptualized, codified, formalized, stored and accessed. It can be expressed in formal and systematic language and shared in the form of data, scientific formulae, specifications, manuals and the like. It is easily codifiable and thus transmittable without loss of integrity once the syntactical rules required for deciphering it are known. Most forms of explicit knowledge can be stored in certain media. Explicit knowledge is often seen as complementary to tacit knowledge.
Explicit knowledge is often seen as easier to formalize than tacit knowledge, but both are necessary for knowledge creation. Nonaka and Takeuchi introduced the SECI model as a model of knowledge creation. The SECI model involves four stages in which explicit and tacit knowledge interact with each other in a spiral manner. The four stages are:
Socialization, from tacit to tacit knowledge
Externalization, from tacit to explicit knowledge
Combination, from explicit to explicit knowledge
Internalization, from explicit to tacit knowledge.
Examples
The information contained in encyclopedias and textbooks is a good example of explicit knowledge, specifically declarative knowledge. The most common forms of explicit knowledge are manuals, documents, procedures, and how-to videos. Knowledge can also be audio-visual. Engineering works and product design can be seen as other forms of explicit knowledge where human skills, motives and knowledge are externalized.
In the scholarly literature, papers presenting an up-to-date "systemization of knowledge" (SoK) on a particular area of research are valuable resources for PhD students.
See also
Descriptive knowledge
SECI model of knowledge dimensions
Tacit knowledge
References
External links
National Library for Health - Knowledge Management Specialist Library - collection of resources about auditing intellectual capital.
Knowledge
Cognitive psychology | Explicit knowledge | [
"Biology"
] | 368 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
2,231,865 | https://en.wikipedia.org/wiki/Azobisisobutyronitrile | Azobisisobutyronitrile (abbreviated AIBN) is an organic compound with the formula [(CH3)2C(CN)]2N2. This white powder is soluble in alcohols and common organic solvents but is insoluble in water. It is often used as a foamer in plastics and rubber and as a radical initiator.
As an azo initiator, radicals resulting from AIBN have multiple benefits over common organic peroxides. For example, they do not have oxygenated byproducts or much yellow discoloration. Additionally, they do not cause too much grafting and therefore are often used when making adhesives, acrylic fibers, detergents, etc.
Mechanism of decomposition
In its most characteristic reaction, AIBN decomposes, eliminating a molecule of nitrogen gas to form two 2-cyanoprop-2-yl radicals:
[(CH3)2C(CN)]2N2 → 2 •C(CH3)2CN + N2
Because azobisisobutyronitrile readily gives off free radicals, it is often used as a radical initiator. Decomposition occurs at temperatures above 40 °C, but in experiments it is more commonly carried out between 66 °C and 72 °C. The decomposition has a ΔG‡ of 131 kJ/mol and results in two 2-cyano-2-propyl (carbon) radicals and a molecule of nitrogen gas. The release of nitrogen gas pushes the decomposition forward due to the increase in entropy, and the 2-cyano-2-propyl radical is stabilized by the −CN group.
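To give a feel for the temperature sensitivity implied by a barrier of roughly this size, the sketch below compares decomposition rates at 66 °C and 72 °C using the Arrhenius relation; treating the quoted 131 kJ/mol barrier as an effective activation energy is an assumption made here only for illustration, and no absolute rate or half-life is claimed.

```python
# Sketch: relative temperature sensitivity of a first-order decomposition using the
# Arrhenius relation k ∝ exp(-Ea/(R*T)).  Treating the quoted 131 kJ/mol barrier as
# an effective activation energy is an illustrative assumption; only the *ratio* of
# rate constants is computed, so the unknown pre-exponential factor cancels out.
import math

R = 8.314462618            # gas constant, J/(mol*K)
EA = 131_000.0             # effective activation energy, J/mol (assumption)

def rate_ratio(t1_celsius: float, t2_celsius: float, ea: float = EA) -> float:
    """Return k(T2)/k(T1) for an Arrhenius process with activation energy ea."""
    t1 = t1_celsius + 273.15
    t2 = t2_celsius + 273.15
    return math.exp(ea / R * (1.0 / t1 - 1.0 / t2))

print(rate_ratio(66.0, 72.0))   # about 2.2: raising 66 °C to 72 °C roughly doubles the rate
```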
Chemical reactions
These radicals formed by the decomposition of AIBN can initiate free-radical polymerizations and other radical-induced reactions. For instance, a mixture of styrene and maleic anhydride in toluene will react if heated, forming the copolymer upon addition of AIBN. Another example of a radical reaction that can be initiated by AIBN is the anti-Markovnikov hydrohalogenation of alkenes. AIBN has also been used as the radical initiator for Wohl–Ziegler bromination. The AIBN-derived 2-cyano-2-propyl radical abstracts the hydrogen from tributyltin hydride. The resulting tributyltin radical can be used for removal of a bromine atom.
AIBN-derived radicals abstract a hydrogen from HBr to give a bromine radical, which can add to alkenes. This type of hydrohalogenation of an alkene proceeds with anti-Markovnikov selectivity.
Production and analogues
AIBN is produced in two steps from acetone cyanohydrin. Reaction with hydrazine gives the substituted dialkylhydrazine. In the second step, the hydrazine is oxidized to the azo derivative:
Related azo compounds behave similarly, such as 1,1′-azobis(cyclohexanecarbonitrile). Water-soluble azo initiators are also available.
Safety
AIBN is safer to use than benzoyl peroxide (another radical initiator) because the risk of explosion is far less. However, it is still considered an explosive compound, decomposing above 65 °C. A respirator dust mask, protective gloves and safety glasses are recommended. Pyrolysis of AIBN without a trap for the 2-cyanopropyl radicals formed results in the formation of tetramethylsuccinonitrile, which is highly toxic.
References
External links
Azo compounds
Radical initiators | Azobisisobutyronitrile | [
"Chemistry",
"Materials_science"
] | 724 | [
"Functional groups",
"Radical initiators",
"Polymer chemistry",
"Reagents for organic chemistry",
"Nitriles"
] |
2,231,930 | https://en.wikipedia.org/wiki/Acentric%20factor | The acentric factor is a conceptual number introduced by Kenneth Pitzer in 1955, proven to be useful in the description of fluids. It has become a standard for the phase characterization of single and pure components, along with other state description parameters such as molecular weight, critical temperature, critical pressure, and critical volume (or critical compressibility). The acentric factor is also said to be a measure of the non-sphericity (centricity) of molecules.
Pitzer defined $\omega$ from the relationship
$$\omega = -\log_{10}\left(p^{\mathrm{sat}}_{r}\right) - 1 \quad \text{at } T_{r} = 0.7,$$
where
$p^{\mathrm{sat}}_{r} = p^{\mathrm{sat}}/p_{c}$ is the reduced saturation vapor pressure, and
$T_{r} = T/T_{c}$ is the reduced temperature.
Pitzer developed this factor by studying the vapor-pressure curves of various pure substances. Thermodynamically, the vapor-pressure curve for pure components can be mathematically described using the Clausius–Clapeyron equation.
The integrated form of the equation is mainly used for obtaining vapor-pressure data mathematically. This integrated version shows that the relationship between the logarithm of vapor pressure and the reciprocal of absolute temperature is approximately linear.
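That near-linearity can be demonstrated with a small fit. The sketch below uses invented vapor-pressure data for a hypothetical fluid (the critical constants Tc = 500 K and pc = 4 MPa are assumptions, not measurements), fits the integrated form ln p = A − B/T, and then evaluates the acentric factor from the fitted curve at Tr = 0.7 using Pitzer's definition above.

```python
# Sketch: fit the integrated Clausius–Clapeyron form ln(p) = A - B/T to vapor-pressure
# data, then evaluate the acentric factor at T_r = 0.7.  All data below are invented
# for a hypothetical fluid (T_c = 500 K, p_c = 4 MPa); they are not measurements.
import math
import numpy as np

T_C = 500.0      # critical temperature, K (assumed)
P_C = 4.0e6      # critical pressure, Pa (assumed)

# Hypothetical (T, p_sat) data in K and Pa.
temps = np.array([350.0, 400.0, 450.0, 500.0])
psats = np.array([2.0e5, 7.0e5, 1.8e6, 4.0e6])

# Linear fit of ln(p) against 1/T: slope = -B, intercept = A.
slope, intercept = np.polyfit(1.0 / temps, np.log(psats), 1)

# Evaluate the fitted vapor pressure at T = 0.7 * T_c and form the reduced pressure.
T07 = 0.7 * T_C
p_sat_07 = math.exp(intercept + slope / T07)
p_r_07 = p_sat_07 / P_C

omega = -math.log10(p_r_07) - 1.0
print(f"p_r(T_r=0.7) = {p_r_07:.3f}, acentric factor = {omega:.2f}")  # about 0.3 here
```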
For a series of fluids, as the acentric factor increases the vapor-pressure curve is "pulled" down, resulting in higher boiling points. For many monatomic fluids, $p^{\mathrm{sat}}_{r}$ at $T_{r} = 0.7$ is close to 0.1, which leads to $\omega \approx 0$. In many cases, $T_{r} = 0.7$ lies above the boiling temperature of liquids at atmospheric pressure.
Values of $\omega$ can be determined for any fluid from accurate experimental vapor-pressure data. The definition of $\omega$ gives values close to zero for the noble gases argon, krypton, and xenon. $\omega$ is also very close to zero for molecules which are nearly spherical. Values of $\omega < -1$ correspond to vapor pressures above the critical pressure and are non-physical.
The acentric factor can be predicted analytically from some equations of state. For example, it can be easily shown from the above definition that a van der Waals fluid has an acentric factor of about −0.302024, which if applied to a real system would indicate a small, ultra-spherical molecule.
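The quoted van der Waals value can be checked numerically. The sketch below solves the Maxwell equal-area construction for the reduced van der Waals equation at Tr = 0.7 and then applies the definition of ω; it is an illustrative verification rather than a general-purpose routine, and the pressure bracket passed to the root finder is an assumption valid for this particular isotherm.

```python
# Sketch: estimate the acentric factor of a van der Waals fluid by finding its reduced
# saturation pressure at T_r = 0.7 (Maxwell equal-area construction) and applying
# omega = -log10(p_r_sat) - 1.  Illustrative check of the value quoted in the text.
import math
import numpy as np
from scipy.optimize import brentq

T_R = 0.7  # reduced temperature at which the acentric factor is defined

def volume_roots(p_r: float, t_r: float = T_R):
    """Liquid and gas reduced volumes solving p_r = 8*t_r/(3*v-1) - 3/v**2.

    Rearranged as the cubic 3*p_r*v^3 - (p_r + 8*t_r)*v^2 + 9*v - 3 = 0.
    """
    roots = np.roots([3.0 * p_r, -(p_r + 8.0 * t_r), 9.0, -3.0])
    real = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 1.0 / 3.0)
    return real[0], real[-1]   # smallest = liquid, largest = gas

def maxwell_mismatch(p_r: float, t_r: float = T_R) -> float:
    """Area under the isotherm between the two volumes minus p_r*(v_g - v_l);
    zero at the saturation pressure."""
    v_l, v_g = volume_roots(p_r, t_r)
    integral = (8.0 * t_r / 3.0) * math.log((3.0 * v_g - 1.0) / (3.0 * v_l - 1.0)) \
               + 3.0 * (1.0 / v_g - 1.0 / v_l)
    return integral - p_r * (v_g - v_l)

# Bracket chosen (assumption) to lie inside the region where three real roots exist.
p_r_sat = brentq(maxwell_mismatch, 0.05, 0.35)
omega = -math.log10(p_r_sat) - 1.0
print(f"p_r_sat(T_r=0.7) = {p_r_sat:.4f}, omega = {omega:.4f}")  # close to -0.302
```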
Values of some common gases
See also
Equation of state
Reduced pressure
Reduced temperature
References
Gas laws | Acentric factor | [
"Physics",
"Chemistry"
] | 419 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics",
"Gas laws"
] |
2,232,092 | https://en.wikipedia.org/wiki/Lutein | Lutein (; from Latin luteus meaning "yellow") is a xanthophyll and one of 600 known naturally occurring carotenoids. Lutein is synthesized only by plants, and like other xanthophylls is found in high quantities in green leafy vegetables such as spinach, kale and yellow carrots. In green plants, xanthophylls act to modulate light energy and serve as non-photochemical quenching agents to deal with triplet chlorophyll, an excited form of chlorophyll which is overproduced at very high light levels during photosynthesis. See xanthophyll cycle for this topic.
Animals obtain lutein by ingesting plants. In the human retina, lutein is absorbed from blood specifically into the macula lutea, although its precise role in the body is unknown. Lutein is also found in egg yolks and animal fats.
Lutein is isomeric with zeaxanthin, differing only in the placement of one double bond. Lutein and zeaxanthin can be interconverted in the body through an intermediate called meso-zeaxanthin. The principal natural stereoisomer of lutein is (3R,3R,6R)-beta,epsilon-carotene-3,3-diol. Lutein is a lipophilic molecule and is generally insoluble in water. The presence of the long chromophore of conjugated double bonds (polyene chain) provides the distinctive light-absorbing properties. The polyene chain is susceptible to oxidative degradation by light or heat and is chemically unstable in acids.
Lutein is present in plants as fatty-acid esters, with one or two fatty acids bound to the two hydroxyl-groups. For this reason, saponification (de-esterification) of lutein esters to yield free lutein may yield lutein in any ratio from 1:1 to 1:2 molar ratio with the saponifying fatty acid.
As a pigment
This xanthophyll, like its sister compound zeaxanthin, has primarily been used in food and supplement manufacturing as a colorant due to its yellow-red color. Lutein absorbs blue light and therefore appears yellow at low concentrations and orange-red at high concentrations.
Many songbirds (like golden oriole, evening grosbeak, yellow warbler, common yellowthroat and Javan green magpies, but not American goldfinch or yellow canaries) deposit lutein obtained from the diet into growing tissues to color their feathers.
Role in human eyes
Although lutein is concentrated in the macula – a small area of the retina responsible for three-color vision – the precise functional role of retinal lutein has not been determined.
Macular degeneration
In 2013, findings of the Age-Related Eye Disease Study (AREDS2) showed that a dietary supplement formulation containing lutein reduced progression of age-related macular degeneration (AMD) by 25 percent. However, lutein and zeaxanthin had no overall effect on preventing AMD, but rather "the participants with low dietary intake of lutein and zeaxanthin at the start of the study, but who took an AREDS formulation with lutein and zeaxanthin during the study, were about 25 percent less likely to develop advanced AMD compared with participants with similar dietary intake who did not take lutein and zeaxanthin."
In AREDS2, participants took one of four AREDS formulations: the original AREDS formulation, AREDS formulation with no beta-carotene, AREDS with low zinc, AREDS with no beta-carotene and low zinc. In addition, they took one of four additional supplement or combinations including lutein and zeaxanthin (10 mg and 2 mg), omega-3 fatty acids (1,000 mg), lutein/zeaxanthin and omega-3 fatty acids, or placebo. The study reported that there was no overall additional benefit from adding omega-3 fatty acids or lutein and zeaxanthin to the formulation. However, the study did find benefits in two subgroups of participants: those not given beta-carotene, and those who had little lutein and zeaxanthin in their diets. Removing beta-carotene did not curb the formulation's protective effect against developing advanced AMD, which was important given that high doses of beta-carotene had been linked to higher risk of lung cancers in smokers. It was recommended to replace beta-carotene with lutein and zeaxanthin in future formulations for these reasons.
Three subsequent meta-analyses of dietary lutein and zeaxanthin concluded that these carotenoids lower the risk of progression from early stage AMD to late stage AMD.
An updated 2023 Cochrane review of 26 studies from several countries, however, concluded that dietary supplements containing zeaxanthin and lutein alone have little effect when compared to placebo on the progression of AMD. In general, there remains insufficient evidence to assess the effectiveness of dietary or supplemental zeaxanthin or lutein in treatment or prevention of early AMD.
Cataract research
There is preliminary epidemiological evidence that increasing lutein and zeaxanthin intake lowers the risk of cataract development. Consumption of more than 2.4 mg of lutein/zeaxanthin daily from foods and supplements was significantly correlated with reduced incidence of nuclear lens opacities, as revealed from data collected during a 13- to 15-year period in one study.
Two meta-analyses confirm a correlation between high dietary content or high serum concentrations of lutein and zeaxanthin and a decrease in the risk of cataract. There is only one published clinical intervention trial testing for an effect of lutein and zeaxanthin supplementation on cataracts. The AREDS2 trial enrolled subjects at risk for progression to advanced age-related macular degeneration. Overall, the group getting lutein (10 mg) and zeaxanthin (2 mg) was not less likely to progress to needing cataract surgery. The authors speculated that there may be a cataract prevention benefit for people with low dietary intake of lutein and zeaxanthin, but recommended more research.
In diet
Lutein is a natural part of a human diet found in orange-yellow fruits and flowers, and in leafy vegetables. According to the NHANES 2013-2014 survey, adults in the United States consume on average 1.7 mg/day of lutein and zeaxanthin combined. No recommended dietary allowance currently exists for lutein. Some positive health effects have been seen at dietary intake levels of 6–10 mg/day. The only definitive side effect of excess lutein consumption is bronzing of the skin (carotenodermia).
As a food additive, lutein has the E number E161b (INS number 161b) and is extracted from the petals of African marigold (Tagetes erecta). It is approved for use in the EU and Australia and New Zealand. In the United States lutein may not be used as a food coloring for foods intended for human consumption, but can be added to animal feed and is allowed as a human dietary supplement often in combination with zeaxanthin. Example: lutein fed to chickens will show up in skin color and egg yolk color.
Some foods contain relatively high amounts of lutein:
Safety
In humans, the Observed Safe Level (OSL) for lutein, based on a non-government organization evaluation, is 20 mg/day. Although much higher levels have been tested without adverse effects and may also be safe, the data for intakes above the OSL are not sufficient for a confident conclusion of long-term safety. Neither the U.S. Food and Drug Administration nor the European Food Safety Authority considers lutein an essential nutrient or has acted to set a tolerable upper intake level.
Commercial value
The lutein market is segmented into pharmaceutical, dietary supplement, food, pet food, and animal and fish feed. The pharmaceutical market for lutein is estimated to be about US$190 million, and the nutraceutical and food categories are estimated to be about US$110 million. Pet food and other animal applications for lutein are estimated at US$175 million annually. This includes chickens (usually in combination with other carotenoids), to get color in egg yolks, and fish farms to color the flesh closer to wild-caught color. In the dietary supplement industry, the major market for lutein is for products with claims of helping maintain eye health. Newer applications are emerging in oral and topical products for skin health. Skin health via orally consumed supplements is one of the fastest growing areas of the US$2 billion carotenoid market.
See also
Carotenoids
List of phytochemicals in food
References
External links
Carotenoids
Food antioxidants
Dietary antioxidants
Food colorings
Secondary alcohols
Cyclohexenes
E-number additives | Lutein | [
"Biology"
] | 1,905 | [
"Biomarkers",
"Carotenoids"
] |
17,440,610 | https://en.wikipedia.org/wiki/Electricity%20policy%20of%20Alberta | The electricity policy of Alberta, enacted through several agencies, is to create an electricity sector with a competitive market that attracts investors, while providing consumers with reliable and affordable electricity, as well as reducing harmful pollution to protect the environment and the health of Albertans, according to their 2022 website.
The underlying framework for the regulation of Alberta's electric industry is the Electric Utilities Act. The Act launched Alberta's deregulated electricity market in 1996, when the province began to restructure its electricity market away from traditional cost-of-service regulation to a market-based system. The Act established arm's-length agencies that oversee the province's electricity system: the Alberta Electric System Operator (AESO), the Balancing Pool, the Alberta Utilities Commission (AUC), the Utilities Consumer Advocate (UCA), and the Market Surveillance Administrator (MSA).
Coal used to account for 80% of all electricity generated in Alberta. By the end of 2019, with coal representing 36% of the generation mix and natural gas accounting for 54%, 89% of Alberta's electricity was produced from fossil fuels. The remaining eleven per cent was generated from renewables, including wind turbines, hydroelectric, geothermal, and biomass.
From 2000 until 2021, the average wholesale pool price was approximately CA$70/MWh during on-peak times and CA$31/MWh during off-peak times.
On August 12, 2021 the average wholesale daily pool price was CA$142/MWh representing the highest price in 20 years, according to AESO data.
Escalating prices
The price of electricity dropped below 4 cents/kWh in 2015, during the economic recession when oil prices, and therefore commodity prices, had decreased; the last time electricity rates had been that low was in 2003. In 2017 another historic low was reached, 2.88 cents/kWh. By 2018 prices began to rise to the levels experienced before the 2014 economic downturn. Since the 6.8 cents/kWh price cap on the regulated rate option (RRO) was scrapped by the UCP government in its fall 2019 budget, electricity rates and bills have spiked considerably. By January 2022, electricity rates and bills reached their highest levels ever, more than 16 cents/kWh in Edmonton and Calgary, not including fees for distribution and transmission.
On January 22, 2021, EDC Associates reported twenty years of success in retail competition in Alberta's electricity sector. On-peak pool prices averaged $70/MWh over the 20-year period and off-peak prices averaged $31 per megawatt-hour (MWh). The Alberta Electric System Operator (AESO) administers the Power Pool, which is the only market for all electricity sales and purchases in the province. The highest price in the Power Pool in the two decades from 2000 through 2020 was $90/MWh.
In August 2021, based on AESO data, wholesale power prices in the province increased sharply to over twice the average 2020 Alberta Power Pool price. From January to August 2021 the average pool price was $103.51/MWh; in August it was $142/MWh representing the highest annual price of electricity in twenty years.
On March 7, 2022, Premier Kenney announced an electricity rebate of $150. NDP energy critic Kathleen Ganley said that this was not sufficient and called on the UCP government to consider capping electricity rates and implementing a "rebate program or a reverse rate rider". Ganley said the government should amend the 2022 budget to "provide real relief". The UCP Minister of Natural Gas and Electricity responded that rate caps, which had been used previously, did not increase future capacity and only provided short-term relief, and that they were not fiscally responsible because future generations would pay a high cost for their implementation.
When forecasting hourly power pool prices, the AESO considers market fundamentals such as the impacts of carbon pricing, the retirement of electricity generators and conversions of coal generators to gas, the price of natural gas, additions of renewable energy to the supply, and outages in generation units or in electricity transmission. The forecast for 2021 was $98/MWh, and in 2022 the price was expected to decrease by 25% to $74/MWh. In making forecasts, the AESO also considers Alberta Internal Load (AIL), which was projected to be higher in 2021 than in 2020 because of anticipated extreme weather, pandemic recovery, rising oil prices, and economic growth driven by oil sands production.
From September 30 to December 31, 2021, TransAlta, which is one of the utility companies that dominate Alberta's generation sector, reported an increase of $405 million in profits compared to the same period in 2020.
Early history prior to deregulation
Compared to the rest of Canada, Alberta's cities were not large enough to be able to afford electrical systems until the 1880s and 1890s. Calgary became the first city to have an electrical system when the Calgary Electric Lighting Company (ELC) installed lights in 1887.
Entrepreneurs received a permit for the construction of the Edmonton Electric Lighting and Power Company on October 23, 1891, and less than two months later on December 22 sections of Edmonton had electrical light for the first time. The permit was set to expire in 1909.
In 1921, the United Farmers of Alberta (UFA) party, with origins in a small populist movement of farmers calling for publicly owned rural electrification, won a majority government, and remained in power until 1935. The estimated cost of CA$200 million was prohibitive in the 1920s. In the 1930s, the Prairies were the hardest hit by the combination of the Dust Bowl drought and the Great Depression, so any plans for electrification were paused. Although across Canada only one in five farms had electricity by 1945, the situation for rural Albertans was complicated by the fact that the existing private power monopolies had no motivation or interest in rural electrification given the steep cost.
In 1938, the Energy Utilities Board (EUB) succeeded the Petroleum and Natural Gas Conservation Board. The board later became the Energy Resources Conservation Board (ERCB) and then the Alberta Energy Regulator (AER). On June 17, 2013, the AER took over oversight of energy resource development with full-lifecycle regulation.
In both the oil and gas sector and the electricity sector there were advocates of public ownership to promote, and facilitate the sectors' development while protecting them from potential private interests. The province's 1940 Royal Commission on Petroleum recommended government intervention in the embryonic oil and gas industry to promote, speed up, and expand the energy sector's development while preventing "fortune hunters" from causing "chaos" through over-production. Similarly, as in the oil and gas sector, the electricity sector had its advocates of public ownership in order to accelerate and spread electrification across the province.
By 1948, electrification was a highly charged issue in Alberta, as the installation of new electricity lines was slower and more costly in rural areas than in the denser cities. Alberta's governing party, the Social Credit, added an electrification plebiscite to the ballot in the 1948 Alberta general election. The two referendum choices were the existing model, in which municipal power plants and privately owned firms provided electricity, or a publicly owned system that would be under the administration of an Alberta Government Power Commission. This was the fourth plebiscite in Alberta's history. Those supporting the existing model with private companies against government ownership won 50.03% to 49.97%, a "razor-thin" margin. The two major cities in Alberta, Calgary and the capital, Edmonton, disagreed; the majority of voters in Edmonton supported provincial control, while an even larger majority in Calgary supported the existing mix of private and municipal companies. Despite the referendum result, the government sponsored the creation of many Rural Electrification Associations, of which some still exist today.
The municipality of Edmonton was an early convert from coal to natural gas, with its Rossdale plant making the switch in 1955.
In 1970, construction began on the Clover Bar generating station which was owned by the newly created Edmonton Power in a merger of "Edmonton's electrical distribution and power plant departments".
In order to "achieve equalization of electrical rates by averaging the price of generation and transmission across the province", the Electric Energy Marketing Agency was established in 1982 with the Public Utilities Board setting the "price at which utilities sell electric energy to the agency".
Under a provincial-federal agreement, the price of natural gas was deregulated in 1986, which resulted in a drop in the price of natural gas, and Alberta let the Natural Gas Protection Plan expire. In the same year, two new departments (Energy, and Forestry, Lands and Wildlife) were established, replacing the Alberta Department of Energy and Natural Resources.
The first coal-fired steam turbine in Alberta was the Genesee generation unit, Genesee 2, which was built in 1989 with a capacity of 410 megawatts.
Deregulation
Alberta has never owned and operated its own provincial power company, unlike most other Canadian provinces.
In the 1990s, in response to power brownouts, the Alberta government under the premiership of Ralph Klein believed that competition would increase and prices decrease if more companies were producing power in the province. He believed that deregulation would make Alberta more attractive to business. The government created a strategy of power purchase agreements (PPAs) through which the winning bids in an auction would acquire the right to provide a portion of all the power produced in Alberta from 1996 to 2016. The PPAs would make all the decisions and cover costs of constructing power generation plants as well as bearing responsibility for all the financial risks. They would sell the power back to the grid with the "risks and rewards of fluctuating prices."
The restructuring of the electric utility industry began in 1996. Through the restructuring process, Alberta became the first Canadian province to implement a deregulated electricity market. The legislation that provided the new framework to regulate the electric utility industry was the Electric Utilities Act (EUA). Further restructuring took place through amendments to the 1996 Act in the 1997 Electric Utilities Amendment Act. In 2003, provisions under the Act established new agencies that restructured the way the industry operates. These included the Alberta Electric System Operator (AESO), the Alberta Utilities Commission (AUC), and the Market Surveillance Administrator.
In this deregulated market, generators compete to sell energy at competitively determined prices. Private capital builds new generation plants and owners take on the financial risks. This contrasts with the vertically integrated provincial Crown corporations in other Canadian provinces, such as BC Hydro, SaskPower, Manitoba Hydro, Hydro-Québec and, historically, Ontario Hydro, that provide some utility services. In most Canadian provinces there is a conventional cost-of-service regulated power system.
According to Brennan, in 2008 some generation companies owned both generation and transmission in Alberta. According to Keith Provost, a former senior vice-president of Alberta Power Ltd. (now ATCO Power) who worked in the electrical utility business for decades, instead of marketing electricity contracts for future deliveries in a regulated market, AESO had its own system that was open to manipulation and was not a free-market system. Provost said that the deregulated system caused volatility in the price of electricity and kept consumer prices high while maximizing profits for generating companies.
Since 2000, Alberta's electricity market has been an Energy Only Market (EOM), in which the electricity producer only gets paid for generating electricity. In the EOM system, decisions about where facilities will be built, which technologies will be used, and the kind of energy source remain with the producer, who often works with private investors who assume any risks associated with those choices. It is a simple system that can lead to more wholesale electricity price volatility.
With the passage of Bill 18, the Electricity Statutes (Capacity Market Termination) Amendment Act, the United Conservative Party (UCP) terminated plans by the previous government under Rachel Notley to overhaul the electricity system, to move away from the Energy Only Market to a capacity market. In a capacity market there is less price volatility as the electricity producer is not only paid to generate power, but also to maintain a higher level of capacity to be able to respond to demand peaks.
According to the IEA, from 1999 to 2009 most provinces in Canada made changes to the electricity sector's structure towards some market liberalization. The approaches to changing regulation and market design differed from province to province. The report said that competitive wholesale markets were being fostered in the 1990s as part of the liberalization process. Of all the Canadian provinces, only Alberta had an effective open market at the wholesale and retail level. According to the IEA, a few dominant integrated utilities provide the bulk of electricity generation, transmission and distribution services. The report recommended unbundling these services.
Agencies
In 1996, Alberta began to restructure its electricity market away from traditional cost-of-service regulation to a market-based system which included the creation of arms length electricity sector agencies under the 1996 Electric Utilities Act. They were established to oversee the province's electricity system; to create an electricity system that is "reliable", "affordable", and that also reduces pollution that harms Albertans' health and the environment, while ensuring a competitive market for industry investors.
These agencies include the Alberta Electric System Operator (AESO), the Balancing Pool, the Alberta Utilities Commission (AUC), Utilities Consumer Advocate (UCA), and the Market Surveillance Administrator.
Alberta Electric System Operator (AESO)
The AESO has no industry affiliation and does not own market assets. It is an independent system operator that leads the planning and operation of the Alberta Interconnected Electric System (AIES) and the Balancing Pool. AESO facilitates open access to the grid by promoting a competitive electricity market. AESO engages with the electricity industry by consulting with retailers, electricity generators, and transmission facility owners such as AltaLink, ATCO, ENMAX, and EPCOR. AESO is governed by an independent board of directors appointed by the province's energy minister. The AESO collects and evaluates information about the industry. Penalties and fines are recommended by the MSA to be brought before the AUC.
Alberta Utilities Commission (AUC)
The Alberta Utilities Commission (AUC) replaced the Energy Utilities Board (EUB) in fully regulating utility distribution and transmission services provided by investor-owned utilities. The AUC decides penalties, sets rules, and takes in applications related to the electricity market. As part of the restructuring, the Energy Utilities Board no longer regulated wholesale electricity prices and customers could choose their electricity retailer. The EUA stipulated that all electric energy bought and sold in Alberta had to be exchanged through the Power Pool, which "served as an independent, central, open access pool." It functioned as a "spot market intending to match the demand with the lowest cost supply and establish an hourly pool price." Under the Energy Utilities Board's newly implemented restructured tariffs in the electric utility industry, each major utility was required to apply to "separate its generation, transmission and distribution costs".
In southern Alberta, several areas suffered a rotating electricity outage on October 25, 1998, that was investigated by the province's electricity watchdog, EUB. In response to the November 4, 1998 EUB report a new industry-government task force was created and new regulations were introduced.
Regulated Rate Option (RRO) refers to the default regulated, or floating, rate for electricity for small business and residential consumers that did not enter into a contract with one of the thirty retail electricity providers. RRO rates can change monthly. The AUC regulates the five investor- and municipally-owned companies that it approved to provide the Regulated Rate Option (RRO) service to Albertans, including AltaGas Utilities, the City of Lethbridge, Direct Energy Regulated Services (DERS), and ENMAX Power. These RRO providers include EPCOR Distribution and FortisAlberta for wire services, and ENMAX Power and EPCOR Energy for electricity. Based on geographic location in the province, the government has designated only one RRO electricity and natural gas provider for residential and business electricity consumers. Province-wide, there are only five RRO providers. Of these, four provide electricity and three provide natural gas. The City of Lethbridge is the RRO electricity provider for 34,000 customers in that municipality.
Alberta began to record Energy Emergency Alerts (EEAs) for electricity supply shortfalls in 2000. Since then, 42 EEAs have been reported, of which only two reached level 3, at which the AESO had to call for "shedding of electricity load", or reducing service to consumers. The first occurred on July 24, 2006, and the second took place on July 9, 2012.
While the average wholesale pool price since 2000 was approximately CA$70/MWh during on-peak times and CA$31/MWh during off-peak times, the average price on August 12, 2021, was CA$142/MWh, with an average of CA$103.51/MWh for 2021 to date, representing the highest price in 20 years, according to AESO data.
Market Surveillance Administrator
The Market Surveillance Administrator (MSA) is the surveillance agency for the electricity market that monitors for competitive advantage. While the AESO has a role to collect information and recommend areas for evaluation, only the MSA can recommend penalties or fines to the AUC.
Balancing Pool
The Balancing Pool forecasts expenses and revenues, manages payments, and manages some power generation assets. The Electric Utilities Act also sets out the AESO's powers and responsibilities and implements policies; among its provisions was the creation of the Power Pool of Alberta (Power Pool), a wholesale market clearing entity. Through the AESO, a spot market was created.
All electric energy that is bought and sold in the province is exchanged only through the Power Pool of Alberta, the central, not-for-profit, independent, open-access entity that has operated the competitive wholesale electricity market and the dispatch of electricity generation since its establishment in 1996.
Local distribution utilities, either investor- or municipally owned, retained the obligation to supply and the 6 largest utilities were assigned a share of the output of existing generators at a fixed price.
The Power Pool is a not-for-profit entity that operated the "competitive wholesale market including dispatch of generation." The Power Pool matched the lowest-priced supply with demand, functioning as a spot market by establishing a pool price that was revised each hour based on 60 marginal prices, one for each minute. Only those offers accepted generate power and receive the AESO pool price, and all accepted offers receive the same price, the pool price, not the price offered. Following the creation of the Power Pool, the price of electricity rose significantly, from the lowest price in North America to the third highest by 2001.
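A uniform-price pool of this kind can be illustrated with a small merit-order calculation. The sketch below is a simplified, hypothetical illustration of how a single hourly pool price might be set; it is not a description of the AESO's actual dispatch or settlement rules, and all of the offers and the demand figure are invented.

```python
# Simplified, hypothetical sketch of uniform-price ("pool price") market clearing.
# Offers are sorted from cheapest to most expensive and accepted until demand is met;
# every accepted offer is paid the price of the last (marginal) offer accepted, not
# its own offer price.  This is an illustration, not the AESO's actual procedure.
from dataclasses import dataclass

@dataclass
class Offer:
    generator: str
    price_per_mwh: float   # offer price in $/MWh
    quantity_mw: int       # offered quantity in MW

def clear_pool(offers, demand_mw):
    """Return (pool_price, dispatched) where dispatched maps generator -> MW accepted."""
    dispatched = {}
    remaining = demand_mw
    pool_price = 0.0
    for offer in sorted(offers, key=lambda o: o.price_per_mwh):
        if remaining <= 0:
            break
        accepted = min(offer.quantity_mw, remaining)
        dispatched[offer.generator] = accepted
        remaining -= accepted
        pool_price = offer.price_per_mwh   # price of the marginal (last accepted) offer
    return pool_price, dispatched

# Invented offers and demand for one hour.
offers = [
    Offer("hydro", 15.0, 300),
    Offer("wind", 0.0, 400),
    Offer("coal", 45.0, 800),
    Offer("gas_peaker", 140.0, 500),
]
price, dispatch = clear_pool(offers, demand_mw=1200)
print(price)      # 45.0 -- every accepted offer is paid this single pool price
print(dispatch)   # {'wind': 400, 'hydro': 300, 'coal': 500}
```

In this toy example, wind, hydro, and part of the coal offer are dispatched, and every dispatched megawatt is settled at the 45 $/MWh marginal price rather than at each generator's own offer price.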
Utilities Consumer Advocate (UCA)
The Utilities Consumer Advocate (UCA) assists consumers in understanding their bundled energy bills, which include both electricity and natural gas. The UCA provides detailed information on its regularly updated website, including cost-comparison tools to help consumers choose among the thirty retail electricity providers and assistance with understanding highly detailed electricity bills.
Electricity generation mix
Coal-generated electricity was the backbone of Alberta's electrical sector. In 2013 coal accounted for 55% of the total, natural gas represented 35%, and renewable and alternative energy represented 11%. These cleaner sources included "wind, hydro, biomass and co-generation".
Coal
Ninety per cent of Canada's usable coal resources, including different grades of coal ranging from lignite, the lowest grade, to semianthracite, are found in the Western Canadian Sedimentary Basin (WCSB), which underlies the three Western provinces of Alberta, British Columbia, and Saskatchewan. Lignite, which is used mainly for electricity generation, is easy to mine and has been used in Alberta since the 1800s to produce electricity.
Coal-fired power plants burning coal to generate electricity were the "backbone" of Alberta's electricity system.
The IEA reported that Alberta had the second highest GHG emission levels in Canada (190 Mt), representing 27% of Canada's total emissions. Only Ontario was higher, with 234 Mt accounting for 33% of the nation's emissions in 2006. In 2007, a new policy was introduced to reduce GHG emissions by 12%, starting with large emitters like coal-fired power plants, pulp mills and oil sands projects. Alberta was the first jurisdiction in North America to introduce a carbon tax, the Specified Gas Emitters Regulation (SGER), which was considered a success story.
Installed capacity reached 12,834 megawatts in 2009, with coal (5,692 MW) and natural gas (5,189 MW) representing the bulk of the province's generation fleet.
In 2013, Alberta's generation mix continued to be "dominated by coal" at 55%, natural gas at 35%, and renewable and alternative sources at 11%, which included "wind, hydro, biomass and co-generation", according to the 2017 International Energy Agency (IEA) report. The next biggest source of electricity was natural gas, which had increased its share from 29% in 2004 to 35% in 2013. By 2013, renewable and alternative energy represented 11% of the generation mix and included wind farms, hydroelectric, biomass and co-generation.
According to Alberta economists, Andrew Leach and Blake Shaffer, the percentage of Alberta generation mix supplied by coal had dropped from 50% in 2015 to 27% in 2020, without an increase in the price of electricity or a disruption of service during the 5-year transition period.
TransAlta chose to move quickly to shift from coal-fired plants to natural gas, partially financed by Brookfield Renewable Partners' CA$750-million investment. By February 2, 2021, TransAlta had completed the first of three planned conversions, and by the end of December 2021 it had completed full conversion from thermal coal to natural gas at its Keephills Unit 3 facility, located near Keephills, Alberta. TransAlta retired Sundance Power Station Units 1 in 2017, 2 in 2018, 3 in 2020, and 5 in 2021; Sundance 6 was converted to natural gas in 2021. Keephills Generating Station Unit 1 in Duffield was retired in 2021, and Keephills Units 2 and 3 were converted to natural gas in 2021. Sheerness Units 1 and 2 were converted to natural gas in 2021 and 2020. Provincial and federal carbon prices and carbon taxes were among the factors that turned coal into a liability instead of an asset, according to TransAlta. Sundance Power Station units 4 and 5 began operations in 2021.
Milner Power's H. R. Milner Generating Station in Grande Cache, in west central Alberta, was commissioned in 1972 as a coal-fired power station. In 2011, the Alberta Utilities Commission granted Milner interim approval to expand from a 150 megawatt coal-fired facility to a 500 megawatt facility without any public hearing or notice of application. Concerns were raised by Ecojustice and the Pembina Institute as federal greenhouse gas regulations were coming into effect in 2015. By 2011, Alberta, with its eleven coal-fired plants, had the most of all Canadian provinces.
In 2023, coal accounted for just 12% of total electricity generation and was fully phased out June 16, 2024.
Natural gas
Natural gas has been a major contributor to Alberta's electricity generation mix, second only to coal for many decades. Natural gas made up 29% in 2004 and 35% in 2013. By 2023, 69% of Alberta's electricity was generated from natural gas.
Cogeneration
Cogeneration, also known as Combined Heat and Power (CHP), refers to power plants that produce both heat and electricity simultaneously. In Alberta's oil sands, steam generated for oil extraction and upgrading activities is also used to generate power. A typical oil sands cogeneration plant captures exhaust heat from the gas turbine in a boiler or steam generator, sending low-pressure steam to a neighbouring bitumen plant. Cogeneration facilities tend to operate continuously, regardless of the price of power, to support industrial activities. Cogeneration provides a significant share of Alberta's total electricity, more than any other natural gas technology type.
Renewables
The wind sector, particularly in southern Alberta, has seen significant growth, from 1.1% of total generation in 2005 to 12% in 2023.
Hydroelectricity
Alberta built hydropower facilities as early as the 1910s, but the construction of coal-fired and natural gas-fired facilities outpaced hydropower.
In the 1950s, hydroelectric power provided 50% of Alberta's electricity, but by 2010 this had decreased to 7%. In 2018, there were no proposals for hydroelectric projects.
By 2018, Alberta was behind other provinces in developing renewable energy. In the 44 years that the Progressive Conservative party was in power, oil and gas production, not renewable energy was the priority.
In their 2020 "Final Report for Alberta Utilities Commission Update on Alberta's Hydroelectricity Energy Resources", Hatch consultancy evaluated Alberta river basin's potential for development: Athabasca River Basin: 13,050 GWh; Churchill River Basin: No hydroelectric potential; Hay River Basin: 100 GWh; Milk River Basin: No hydroelectric potential; North Saskatchewan River Basin: 8,270 GWh; Peace River Basin: 19,720 GWh; Red Deer River: 340 GWh; Slave River Basin: 7,640 GWh; South Saskatchewan River Basin: 3,930 GWh.
Hydroelectricity has historically been Canada's biggest source of electricity. However, many facilities are aging and in need of expensive repairs. The high cost of construction has often led to overruns, and with many other less expensive renewable options available, future hydroelectric projects should be considered with caution.
Wind
The first commercial wind farm in Canada, the TransAlta's Cowley Ridge wind plant, near Pincher Creek, Alberta was completed in 1993.
By 2006, TransAlta wind farms were constrained at 400 megawatts of wind power because the installation of power lines was not keeping pace with the construction of wind turbines.
Only 40% of wind turbines in Canada were commissioned before 2010. Over time they got bigger and taller, and their capacity and sophistication increased, according to the federal Natural Resources department's senior wind engineer. By 2010 wind capacity had reached 657 MW and hydroelectric capacity produced 900 MW.
By 2020, Alberta had 900 wind turbines. Only two provinces had more; Ontario had 2,663 turbines, which represented approximately 40% of Canada's total, and Quebec had 1,991.
Solar and geothermal
Bill 36, the Geothermal Resource Development Act, was introduced on October 20, 2020, to create clear policies and regulations for the "emerging industry" and to encourage investment in geothermal resource development in Alberta. There is over 388,500 MW of untapped geothermal generation potential in Alberta; by way of comparison, Alberta's total installed generating capacity in 2020 was 16,515.13 MW.
Terrapin Geothermics' CA$90-million Greenview Geothermal Power Plant (Alberta No. 1) in the Municipal District of Greenview No. 16, which is expected to be online by 2023, received CA$25.45 million in funding from Natural Resources Canada (NRCan). The facility will be the first to produce geothermal energy in Alberta.
Hydrogen
Pennsylvania-based Air Products is constructing a CA$1.3-billion "net-zero hydrogen energy complex" near Edmonton which when completed in 2024 will use natural gas to produce the clean-burning hydrogen fuel. Air Products already has three hydrogen facilities in the province. Hydrogen will be used to generate electricity.
Environmental policies
One of the Alberta government's major pieces of legislation concerning the jurisdiction of the Energy Resources Conservation Board (ERCB) was the 1960 Gas Utilities Act.
In 1961, new provincial air-quality standards were introduced limiting hydrogen sulphide and sulphur dioxide emissions.
At the June 1992 United Nations Conference on Environment and Development in Rio de Janeiro, Canada and over 160 other nations agreed to work towards sustainable development by limiting greenhouse gas emissions that impact global climate change.
In 1994, Alberta's Department of Environmental Protection was created with the merger of two departments, the Department of Forestry and the Department of Land and Wildlife. The Department of Energy was divided into 5 newly created divisions.
In 1995, the Alberta Energy and Utilities Board (AEUB) was established through a merger of the Public Utilities Board with the Energy Resources Conservation Board (ERCB) to increase efficiency and streamline the process of regulating energy and utilities. The ERCB was previously the Petroleum and Natural Gas Conservation Board, and became the Alberta Energy Regulator in 2013.
As of 2008, Alberta's electricity sector was the most carbon-intensive of all Canadian provinces and territories, with total emissions of 55.9 million tonnes of CO2 equivalent in 2008, accounting for 47% of all Canadian emissions in the electricity and heat generation sector.
By 2013, shale gas had become a significant part of the gas supply. A 2012 Natural Resources Canada study concluded that environmental impacts from shale gas in terms of GHG emissions were significantly less than those of coal, which corroborated findings in the United States.
Based on the December 2020 tenth edition of the IEA's annual market report on coal, the global shift towards clean energy and away from carbon-intensive fuels such as coal, to reduce GHG emissions, accelerated. The IEA report said the demand for coal had peaked globally in 2013. Factors that contributed to the decrease in global demand included the increase in gas production as part of the United States shale revolution, the accelerating growth of wind and solar energy production, and the increased enactment of public policies related to climate change. In 2017 and 2018 there was a brief rebound in the demand for coal. Although coal's share of global electricity generation only fell from 40% in 2009 to 36.5% in 2019, most coal generators were in India and China.
Market components
Alberta's electricity market consists of six fundamental components and features.
Electricity generation sector
Seventeen firms supply electricity into the grid. Five of those providers—ATCO Power, Enmax, Capital Power Corporation, TransAlta and TransCanada Corp.—supply about 80% of the province's generation capacity.
The generation sector in Alberta is dominated by TransAlta (formerly Calgary Power), ENMAX, and Capital Power Corporation, a spin-off of Edmonton's municipally owned company EPCOR. Utility companies in Alberta also include the wind generating Bullfrog Power, TransAlta Corporation, Alberta Power limited, AltaLink, ATCO Power and FortisAlberta. Although 5,700 megawatts of new generation was added and 1,470 megawatts from old plants were retired between 1998 and 2009, coal still accounted for 73.8% of utility-generated power in 2007, followed by natural gas, with 20.6%.
Calgary-based utility company TransAlta reported an increase of $405 million in the three-month period from September 30 to December 31, 2021, compared to 2020.
Wire
Alberta's transmission grid, owned in sections by companies like TransAlta, AltaLink and ATCO Electric, then carries electricity produced by generating providers to wholesale electricity purchasers or retailers. Connections to BC, Saskatchewan and Montana allow imports and exports of competitive power.
Wholesale purchasers
There are about 160 wholesale electricity purchasers, many of which are also resellers to other end-users like ENMAX, EPCOR, FortisAlberta, and Direct Energy.
Supply
From 1998 to 2008, more than 4,700 megawatts (MW) of new generation were added to the province's power supply.
Demand
In 2017, Alberta was the fourth highest consumer of electricity per capita in Canada, with consumption "28% more than the national average" and an annual electricity consumption per capita of 18.7 megawatt hours (MW.h). Demand for electricity had grown by 22% between 2005 and 2017.
During the COVID-19 pandemic, annual demand for electricity decreased in 2020 and expanded by about 3% by 2021, as the province's economy recovered.
Residential sector
The residential sector includes home heating and cooling systems, household appliances, water heaters, and lighting.
Retail consumers have the option to buy electricity at competitive prices from third-party sellers like Just Energy or at regulated prices through the local utility like ENMAX and EPCOR.
Electricity costs for end-users
According to Statista, in 2021, compared to other Canadian provinces and territories, the electricity cost for end-users in Alberta, at 16.6 cents per kWh, was below the average of 17.9 cents per kWh. The highest rates were in the Northwest Territories and Nunavut, at 38.2 and 37.6 cents per kWh. The lowest cost was in Québec, at 7.3 cents per kWh. Manitoba at 9.9, British Columbia at 12.6, New Brunswick at 12.7, Ontario at 13, and Newfoundland and Labrador at 13.8 were all lower than Alberta. Statista said Québec's electricity was less expensive because of the number of hydroelectric dams throughout the province. The Northwest Territories and Nunavut pay the most because of their remote locations, which often rely on diesel fuel to generate electricity.
A 2013 study compared the unit price of electricity in major cities in Canada and the United States. Calgary's unit price was 14.81 cents per kWh, compared to 6.87 cents per kWh in Montreal, 15.45 in Halifax. In April 2013, Calgary ranked third (with an average monthly payment of $216 based on monthly consumption of 1,000 kWh) and Edmonton fourth ($202 a month) in Canada compared to other cities in terms of high electricity bills. Halifax placed first and worst in Canada at $225 a month. Compared to other cities in North America, Calgary and Edmonton placed seventh and eighth in terms of the highest power costs. Vancouver, BC was among the least expensive ($130 a month). In Alberta, energy spending (without gasoline costs) represents 2.3% of total household spending.
Following the restructuring and deregulation that began in 1996, electricity rates for consumers increased disproportionately to the cost of generating electricity. The cost of generating electricity was approximately 3.5 cents per kilowatt hour in 2000, while the average price for consumers was over 13 cents per kWh. Provost said that electricity generators' revenue increased by roughly CA$2 billion annually because consumers paid more for electricity.
In response to consumers complaints about high prices in 2001, the government implemented a Regulated Rate Option (RRO), as a means to shield consumers from price volatility.
Electricity rates in Alberta dropped to less than 4 cents per kWh in 2015.
An historic low in electricity prices in the province was reached in 2017, when they dropped to 2.88 cents/kWh.
Under the electricity price cap introduced by the NDP government, consumers on the regulated rate option (RRO) paid a maximum of 6.8 cents per kWh.
Since 2018, electricity rates in Alberta have been steadily climbing.
Premier Jason Kenney scrapped the Alberta electricity price cap that had regulated electricity rates in the fall 2019 provincial budget.
In January 2022, electricity rates reached a record high of more than 16 cents/kWh in Edmonton and Calgary. Transmission and distribution fees were added on top of the electricity rate.
Commercial sector
The commercial sector includes commercial heating and cooling systems as well as lighting in commercial buildings and offices.
The commercial sector consumed 17.2 TW.h of electricity in 2017 and the residential sector consumed 10.3 TW.h.
Industrial sectors
The industrial sector includes mining activities, such as oil sands, coal-mining, manufacturing activities, construction and forestry. Industrial consumers account for approximately 28% of electricity consumed in Ontario. This consumption is projected to remain stable.
Cross border wholesale market
Alberta imports and exports according to market conditions with Montana and neighbouring provinces, British Columbia and Saskatchewan. BC and Saskatchewan have agreements with Alberta called "interties" through which the Available Transfer Capability (ATC) is specified.
Despite vast differences in market design, and because of large differences in their mixes of generation assets, the electricity systems of Alberta and British Columbia enjoy a unique symbiotic relationship. B.C. may provide a market for Alberta's off-peak surplus and a peaking supply for Alberta's crunch periods. The investment climate in Alberta has attracted a steady stream of privately funded generation projects since 1996. This is one of the reasons Alberta's electricity system has provided reliable, sustainable power even during periods of rapid economic growth.
Alberta and neighbouring British Columbia are buyers and sellers of each other's power. Historically, commercial parties in Alberta import energy during peak demand periods. Similarly, exports from Alberta frequently occur during off-peak periods (weekends, evenings, or statutory holidays), when demand in Alberta diminishes or when there is an abundance of wind energy. This energy trade confers benefits on both provinces.
The power trade between the two provinces is based in part on geography. Alberta historically has had coal and natural gas, while B.C.'s generation is largely hydro-electric.
Whether for reasons of temporary high demand, short supply or both, commercial parties in Alberta buy electricity from its western neighbour through the Alberta Electric System Operator. By contrast, commercial parties might export electricity from Alberta during off-peak periods. During those periods, B.C. uses that power to reduce its hydroelectric generation, or the energy is wheeled through to the Pacific Northwest wholesale electricity market.
Commercial parties in Alberta buy electricity from B.C. during periods of peak consumption, on unusually cold or hot days or when a larger-than-normal number of generators are down for maintenance. Historically, British Columbia bought electricity from Alberta during off-peak periods. More recently, purchases from Alberta tend to take place when there is an abundance of wind generation during periods of low demand in Alberta. This trade allows both provinces to make use of their generating and storage capacity and to use their assets more efficiently. It also puts competitive pressure on power prices in both provinces.
Electricity imports from Alberta represent just 3% of all imports into B.C. In fact, B.C. exports six times as much as it imports from Alberta, which helps to substantially reduce greenhouse gas emissions there.
See also
Hydro-Québec's electricity transmission system
Electricity policy of Ontario
Coal in Alberta
List of generating stations in Alberta
Notes
External links
Canadian Wind Turbine Database
Citations
References
Energy policy
Economy of Alberta
Politics of Alberta
Electricity Policy
Energy policy of Canada | Electricity policy of Alberta | [
"Environmental_science"
] | 8,018 | [
"Environmental social science",
"Energy policy"
] |
17,440,612 | https://en.wikipedia.org/wiki/Physical%20configuration%20audit | In computer engineering, a physical configuration audit (PCA) is the formal examination of the "as-built" configuration of a configuration item (CI) against its technical documentation to establish or verify the CI's product baseline. The PCA is used to examine the actual configuration of the CI that is representative of the product configuration, in order to verify that the related design documentation matches the design of the deliverable CI. It is also used to validate many of the supporting processes that the contractor uses in the production of the CI. This is also used to verify that any elements of the CI that were redesigned after the completion of the functional configuration audit also meet the requirements of the CI's performance specification. Additional PCAs may be accomplished later during CI production if circumstances such as the following apply:
The original production line is "shut down" for several years and then production is restarted.
The production contract for manufacture of a CI with a fairly complex, or difficult-to-manufacture, design is awarded to a new contractor or vendor.
Re-auditing in these circumstances is advisable regardless of whether the contractor or the government controls the detail production design.
Software
PCA is one of the practices used in software configuration management for software configuration auditing. The purpose of the software PCA is to ensure that the design and reference documentation is consistent with the as-built software product.
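As an informal illustration of the kind of consistency check a software PCA can automate, the sketch below compares the checksums of as-built files against a baseline manifest. The manifest format, file names, and directory layout are hypothetical examples, not part of any PCA standard.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def audit_build(manifest_file: str, build_dir: str) -> list[str]:
    """Compare as-built files against a baseline manifest.

    The manifest is assumed to be a JSON mapping of relative file paths
    to expected SHA-256 digests (a hypothetical format for this sketch).
    Returns a list of discrepancies found during the audit.
    """
    manifest = json.loads(Path(manifest_file).read_text())
    findings = []
    for rel_path, expected in manifest.items():
        built = Path(build_dir) / rel_path
        if not built.exists():
            findings.append(f"MISSING: {rel_path}")
        elif sha256_of(built) != expected:
            findings.append(f"MISMATCH: {rel_path}")
    return findings

if __name__ == "__main__":
    # Hypothetical manifest and build directory names.
    for finding in audit_build("baseline_manifest.json", "build/"):
        print(finding)
```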
References
External links
Defense Acquisition Guidebook on Physical Configuration Audit
Configuration management | Physical configuration audit | [
"Engineering"
] | 294 | [
"Software engineering",
"Systems engineering",
"Configuration management",
"Software engineering stubs"
] |
17,441,166 | https://en.wikipedia.org/wiki/Gateway-to-Gateway%20Protocol | The Gateway-to-Gateway Protocol (GGP) is an obsolete protocol defined for routing datagrams between Internet gateways. It was first outlined in 1982.
The Gateway-to-Gateway Protocol was designed as an Internet Protocol (IP) datagram service similar to the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). However, it is classified as an Internet Layer protocol.
GGP uses a minimum hop algorithm, in which it measures distance in router hops. A router is defined to be zero hops from directly connected networks, one hop from networks that are reachable through one other gateway. The protocol implements a distributed shortest-path methodology, and therefore requires global convergence of the routing tables after any change of link connectivity in the network.
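The minimum-hop computation described above can be illustrated with a small distance-vector style table update. This is only a schematic sketch of the hop-counting idea, not GGP's actual message handling; the data structures and the value 255 used as "infinity" are assumptions for the example.

```python
# Minimal sketch of a minimum-hop distance-vector update, in the spirit of
# GGP's routing computation; names and formats are illustrative only.
INFINITY = 255

def update_routes(routes, neighbor, neighbor_routes):
    """Merge a neighbor's advertised hop counts into the local table.

    routes: dict mapping network -> (hops, next_hop); 0 hops means the
    network is directly connected. Returns True if anything changed.
    """
    changed = False
    for network, hops in neighbor_routes.items():
        candidate = min(hops + 1, INFINITY)  # one extra hop through the neighbor
        best = routes.get(network, (INFINITY, None))[0]
        if candidate < best:
            routes[network] = (candidate, neighbor)
            changed = True
    return changed

# Example: a router directly connected to net A learns about nets B and C via gateway G2.
table = {"A": (0, None)}
update_routes(table, "G2", {"B": 0, "C": 1})
print(table)  # {'A': (0, None), 'B': (1, 'G2'), 'C': (2, 'G2')}
```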
Each GGP message has a field header that identifies the message type and the format of the remaining fields. Because only core routers participated in GGP, and because core routers were controlled by a central authority, other routers could not interfere with the exchange.
See also
Distance-vector routing protocol
Link-state routing protocol
Routing Information Protocol
References
Internet layer protocols | Gateway-to-Gateway Protocol | [
"Technology"
] | 235 | [
"Computing stubs",
"Computer network stubs"
] |
17,442,209 | https://en.wikipedia.org/wiki/Drift%20current | In condensed matter physics and electrochemistry, drift current is the electric current, or movement of charge carriers, which is due to the applied electric field, often stated as the electromotive force over a given distance. When an electric field is applied across a semiconductor material, a current is produced due to the flow of charge carriers.
The drift velocity is the average velocity of the charge carriers in the drift current. The drift velocity, and resulting current, is characterized by the mobility; for details, see electron mobility (for solids) or electrical mobility (for a more general discussion).
See drift–diffusion equation for the way that the drift current, diffusion current, and carrier generation and recombination are combined into a single equation.
Overview
Drift current is the electric current caused by particles getting pulled by an electric field. The term is most commonly used in the context of electrons and holes in semiconductors, although the same concept also applies to metals, electrolytes, and so on.
Drift current is caused by the electric force: Charged particles get pushed by an electric field. Electrons, being negatively charged, get pushed in the opposite direction to the electric field, while holes get pushed in the same direction as the electric field, but the resulting conventional current points in the same direction as the electric field in both cases.
If an electric field is applied to an electron in a vacuum, the electron will accelerate faster and faster, in approximately a straight line. A drift current looks very different than that up close. Typically, electrons are moving randomly in all directions (Brownian motion), frequently changing direction when they collide with grain boundaries or other disturbances. Between collisions, the electric field subtly accelerates them in one direction. So over time, they move at the drift velocity on average, but at any instant the electrons are moving at the (typically much faster) thermal velocity.
The amount of drift current depends on the concentration of charge carriers and their mobility in the material or medium.
Drift current versus diffusion current
Drift current frequently occurs at the same time as diffusion current; the following table compares the two forms of current:
{| class="wikitable"
|-
! scope="col" width="450px" | Drift current
! scope="col" width="450px" | Diffusion current
|-
| Drift current is caused by electric fields.
| Diffusion current is caused by variation in the carrier concentration.
|-
| Direction of the drift current is always in the direction of the electric field.
| Direction of the diffusion current depends on the gradient of the carrier concentration.
|-
| Obeys Ohm's law: J_drift = σE
| Obeys Fick's law: J_diffusion ∝ D (dn/dx)
|}
Drift current in a p-n junction diode
In a p-n junction diode, electrons and holes are the minority charge carriers in the p-region and the n-region, respectively. In an unbiased junction, due to the diffusion of charge carriers, the diffusion current, which flows from the p to n region, is exactly balanced by the equal and opposite drift current.
The drift current in an unbiased junction is caused by the electric field that forms when charge carriers redistribute: electrons and holes near the junction are lost to diffusion, leaving behind ionised donor and acceptor atoms. These fixed ions in the crystal lattice create a charge imbalance, and hence a built-in electric field.
In a biased p-n junction, the drift current is independent of the biasing, as the number of minority carriers is independent of the biasing voltages. But as minority charge carriers can be thermally generated, drift current is temperature dependent.
When an electric field is applied across the semiconductor material, the charge carriers attain a certain drift velocity v_d. The combined movement of the charge carriers constitutes a current known as "drift current". Drift current density due to charge carriers such as free electrons and holes is the current passing through a square centimeter of area perpendicular to the direction of flow.
(i) Drift current density J_n, due to free electrons, is given by: J_n = q n μ_n E
(ii) Drift current density J_p, due to holes, is given by: J_p = q p μ_p E
Where:
n – number of free electrons per cubic centimeter
p – number of holes per cubic centimeter
μ_n – mobility of electrons in cm²/(V·s)
μ_p – mobility of holes in cm²/(V·s)
E – applied electric field intensity in V/cm
q – charge of an electron = 1.6 × 10−19 coulomb
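As a quick numerical illustration of the two expressions above, the short script below evaluates J_n and J_p for assumed carrier concentrations, mobilities and field strength; the values are only illustrative, roughly typical of doped silicon, and are not taken from any reference.

```python
# Numerical evaluation of J_n = q*n*mu_n*E and J_p = q*p*mu_p*E; the
# concentrations and mobilities below are illustrative assumptions only.
q = 1.6e-19  # electron charge, coulomb

def drift_current_density(carriers_per_cm3, mobility_cm2_per_Vs, field_V_per_cm):
    """Return drift current density in A/cm^2."""
    return q * carriers_per_cm3 * mobility_cm2_per_Vs * field_V_per_cm

n, p = 1e16, 1e4            # free electrons and holes per cm^3 (assumed n-type sample)
mu_n, mu_p = 1350.0, 480.0  # mobilities in cm^2/(V*s)
E = 100.0                   # applied field in V/cm

J_n = drift_current_density(n, mu_n, E)
J_p = drift_current_density(p, mu_p, E)
print(f"J_n = {J_n:.3e} A/cm^2, J_p = {J_p:.3e} A/cm^2")
```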
References
Charge carriers
Electric current | Drift current | [
"Physics",
"Materials_science"
] | 901 | [
"Physical phenomena",
"Physical quantities",
"Charge carriers",
"Electrical phenomena",
"Condensed matter physics",
"Electric current",
"Wikipedia categories named after physical quantities"
] |
17,443,233 | https://en.wikipedia.org/wiki/PHOSIDA | The PHOsphorylation SIte DAtabase PHOSIDA integrates thousands of high-confidence in vivo phosphosites identified in various species on the basis of mass spectrometry technology. For each phosphosite, PHOSIDA lists matching kinase motifs, predicted secondary structures, conservation patterns, and its dynamic regulation upon stimulus or other treatments such as kinase inhibition, for example. It includes phosphoproteomes of various organisms ranging from eukaryotes such as human and yeast to bacteria such as Escherichia coli and Lactococcus lactis. Even the phosphoproteome of an archaean organism, namely Halobacterium salinarium, is available. The integration of phosphoproteomes identified in organisms, which cover the phylogenetic tree representatively, enables to examine phosphorylation events from a global point of view including conservation and evolutionary preservation in time.
Moreover, PHOSIDA also predicts phosphosites on the basis of support vector machines.
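As a rough sketch of how a support-vector-machine phosphosite predictor can be set up, the example below one-hot encodes short sequence windows and trains a classifier on toy labels. The windows, labels and feature encoding are purely illustrative assumptions and do not reflect PHOSIDA's actual training data, features or model parameters.

```python
# Minimal sketch of phosphosite prediction with a support vector machine;
# the toy sequence windows and labels are illustrative, not PHOSIDA data.
from sklearn.svm import SVC
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode_window(window: str) -> np.ndarray:
    """One-hot encode a fixed-length sequence window around a candidate site."""
    vec = np.zeros(len(window) * len(AMINO_ACIDS))
    for i, residue in enumerate(window):
        if residue in AMINO_ACIDS:
            vec[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(residue)] = 1.0
    return vec

# Hypothetical 7-residue windows labelled phosphorylated (1) or not (0).
windows = ["RRASVAG", "LKRSYSD", "GGAAVAG", "PLVSLLG"]
labels = [1, 1, 0, 0]

X = np.array([encode_window(w) for w in windows])
model = SVC(kernel="rbf").fit(X, labels)
print(model.predict(encode_window("RKPSYSA").reshape(1, -1)))
```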
References
External links
Official website
Phosphorus
Chemical databases
Biological databases
Post-translational modification | PHOSIDA | [
"Chemistry",
"Biology"
] | 231 | [
"Gene expression",
"Chemical databases",
"Biochemical reactions",
"Bioinformatics",
"Post-translational modification",
"Biological databases"
] |
17,443,489 | https://en.wikipedia.org/wiki/Tact%20%28psychology%29 | Tact is a term that B.F. Skinner used to describe a verbal operant which is controlled by a nonverbal stimulus (such as an object, event, or property of an object) and is maintained by nonspecific social reinforcement (praise).
Less technically, a tact is a label. For example, a child may see their pet dog and say "dog"; the nonverbal stimulus (dog) evoked the response "dog" which is maintained by praise (or generalized conditioned reinforcement) "you're right, that is a dog!"
Chapter five of Skinner's Verbal Behavior discusses the tact in depth. A tact is said to "make contact with" the world, and refers to behavior that is under the control of generalized reinforcement. The controlling antecedent stimulus is nonverbal, and constitutes some portion of "the whole of the physical environment."
The tact described by Skinner includes three important and related events, known as the 3-term-contingency: a stimulus, a response, and a consequence, in this case reinforcement. A verbal response is occasioned by the presence of a stimulus, such as when you say "ball" in the presence of a ball. In this scenario, "ball" is more likely to be reinforced by the listener than saying "cat", showing the importance of the third event, reinforcement, in relation to the stimulus (ball) and response ("ball"). Although the stimulus controls the response, it is the verbal community which establishes the stimulus' control over the verbal response of the speaker. For example, a child may say "ball" in the presence of a ball (stimulus), the child's parent may respond "yes, that is a ball", (reinforcement) thereby increasing the probability that the child will say ball in the presence of a ball in the future. On the other hand, if the parent never responds to the child saying "ball" in the presence of a ball then the probability of that response will decrease in the future.
A tact may be pure or impure. For example, if the environmental stimulus evokes the response, the tact would be considered pure. If the tact is evoked by a verbal stimulus the resulting tact would be considered impure. For example, if a child is shown a picture of a dog, and emits the response "dog" this would be an example of a pure tact. If a child is shown a picture of a dog, and is given the verbal instruction "what is this?" then the response "dog" would be considered an impure tact.
The tact can be extended, as in generic, metaphorical, metonymical, solecistic, nomination, and "guessing" tact. It can also be involved in abstraction. Lowe, Horne, Harris & Randle (2002) would be one example of recent work in tacts.
Extensions
The tact is said to be capable of generic extension. Generic extension is essentially an example of stimulus generalization. The novel stimulus contains all of the relevant features of the original stimulus. For example, we may see a red car and say "car" as well as see a white car and say "car". Different makes and models of cars will all evoke the same response "car".
Tacts can be extended metaphorically; in this case the novel stimulus has only some of the defining features of the original stimulus. For example, when we describe something as "exploding with taste" by drawing the common property of an explosion with the response to our having eaten something (perhaps a strong response, or a sudden one).
Tacts can undergo metonymical extension when some irrelevant but related feature of the original stimulus controls a response. In metonymical extension, one word often replaces another; we may replace a part for a whole. For example, saying "refrigerator" when shown a picture of a kitchen, or saying "White house" in place of "President."
When controlling variables unrelated to standard or immediate reinforcement take over control of the tact, it is said to be solecistically extended. Malapropisms, solecism and catachresis are examples of this.
Skinner notes things like serial order, or conspicuous features of an object, may come to play as nominative tacts. A proper name may arise as a result of the tact. For example, a house that is haunted becomes "The Haunted House" as a nominative extension to the tact of its being haunted.
A guess may seemingly be the emission of a response in the absence of controlling stimuli. Skinner notes that this may simply be a tact under more subtle or hidden controlling variables, although this is not always the case in something like guessing the landing side of a coin toss, where the possible alternatives are fixed and there is no subtle or hidden stimuli to control responses.
Special conditions affecting stimulus control
Skinner deals with factors that interfere with, or change, generalized reinforcement. It is these conditions which, in turn, affect verbal behavior which may depend largely or entirely on generalized reinforcement. In children with developmental disabilities, tacts may need intensive training procedures to develop. Factors such as deprivation, emotional conditions and personal history may interfere with or change verbal behavior. Skinner mentions alertness, irrelevant emotional variables, "special circumstances" surrounding particular listeners or speakers, etc. (He refers to the conditions which are said to produce objective and subjective responses for example). We would now look at these as motivating operations/establishing conditions.
Under emersion conditions tacts will frequently emerge. However, in children with disabilities, more intensive training procedures are often needed.
Distortion
Distorted stimulus control may be minor as when a description (tact) is a slight exaggeration. Under stronger conditions of distortion, it may appear when the original stimulus is absent, as in the case of the response called a lie. Skinner notes that troubadours and fiction writers are perhaps both motivated by similar forms of tact distortion. Initially, they may recount real events, but as differential reinforcement affects the account we may see distortion and then total fabrication.
Tact training
Often, individuals with autism, developmental disabilities, or language delays have difficulty acquiring novel tacts. Many researchers in the field of verbal behavior and developmental disabilities have examined more intensive training procedures in order to teach tacts to these individuals. Specific types of prompts can be used in order to make a tact response more likely. For example, asking the student the question "what is this?" (this would be an example of an impure tact) has been used to prompt a correct tact response (this prompt can be faded until the learner can emit a pure tact). Echoic prompts (teacher repeats the correct answer which the learner must echo) have also been used to train tact responses. Kodak and Clements (2009) found that conducting echoic training sessions before tact training was more effective at increasing independent tact responses.
Skinner (1957) suggested that verbal operants were functionally independent, meaning that after teaching one verbal operant the individual may not be able to emit the topographically same response under different stimulus conditions. For example, a child may be able to request water, but may not be able to tact water. Researchers are currently examining procedures that may facilitate the generalization across verbal operants. Some studies have indicated, for example, that after teaching a child to mand for items, they could then tact them as well without direct instruction. Multiple studies have found support for the emergence of tact responses without direct instruction. These teaching procedures are especially important for individuals with autism and developmental disabilities because the learner can gain additional skills without direct instruction time.
See also
Mand (psychology)
References
Behavioral concepts
Behaviorism | Tact (psychology) | [
"Biology"
] | 1,608 | [
"Behavior",
"Behavioral concepts",
"Behaviorism"
] |
17,444,003 | https://en.wikipedia.org/wiki/Diffuse%20Infrared%20Background%20Experiment | Diffuse Infrared Background Experiment (DIRBE) was an experiment on NASA's COBE mission, to survey the diffuse infrared sky. Measurements were made with a reflecting telescope with 19 cm diameter aperture. The goal was to obtain brightness maps of the universe at ten frequency bands ranging from the near to far infrared (1.25 to 240 micrometer). Also, linear polarization was measured at 1.25, 2.2, and 3.5 micrometers. During the mission, the instrument could sample half the celestial sphere each day.
Mission details
The Cosmic Background Explorer (COBE) mission was launched in November 1989. The spacecraft contained liquid helium that cooled the DIRBE instrument to below 2K to allow it to image in the infrared wavelengths. Primary observation started December 11, 1989 and ran until September 21, 1990, when the liquid helium ran out. After that date only observations in the 1.25 to 4.9 micrometer bands could be carried out, at about 20% of original sensitivity.
The DIRBE instrument was an absolute radiometer with an off-axis folded-Gregorian reflecting telescope, with 19 cm diameter aperture.
See also
COBE DIRBE
Infrared astronomy
Cosmic infrared background
List of largest infrared telescopes
References
Other infrared surveys
IRAS 1983
WISE 2010
Nancy Grace Roman Space Telescope 2027 (planned)
External links
NASA: DIRBE Overview
DIRBE data from here
Infrared imaging
Infrared telescopes
Astronomical surveys | Diffuse Infrared Background Experiment | [
"Astronomy"
] | 284 | [
"Astronomical surveys",
"Astronomical objects",
"Works about astronomy"
] |
17,444,043 | https://en.wikipedia.org/wiki/Portable%20oxygen%20concentrator | A portable oxygen concentrator (POC) is a device used to provide oxygen therapy to people that require greater oxygen concentrations than the levels of ambient air. It is similar to a home oxygen concentrator (OC), but is smaller in size and more mobile. They are small enough to carry and many are now FAA-approved for use on airplanes.
Development
Medical oxygen concentrators were developed in the late 1970s. Early manufacturers included Union Carbide and Bendix Corporation. They were initially conceived of as a method of providing a continuous source of home oxygen without the use of heavy tanks and frequent deliveries.
Beginning in the 2000s, manufacturers developed portable versions. Since their initial development, reliability has been improved, and POCs have between one and six settings which are not the same as liters per minute (LPM).
The latest models of intermittent-flow-only products have weighed from 2.8 to 9.9 pounds (1.3 to 4.5 kg), and continuous flow (CF) units have weighed between 10 and 20 pounds (4.5 to 9.0 kg).
Operation
POCs operate on the same principle as a home concentrator, pressure swing adsorption. The basic setup of a POC is a miniaturized air compressor, a cylinder containing the sieve material, a pressure-equalizing reservoir, and valves and tubes.
During the first half of the first cycle, the internal compressor forces air through a system of chemical filters known as a molecular sieve. The filter surfaces are zeolite (a microporous, crystalline aluminosilicate) that attracts (via adsorption) nitrogen molecules more strongly than oxygen molecules; this takes the nitrogen out of the air, leaving more concentrated oxygen behind. When the desired purity is reached and the first cylinder reaches roughly 20 psi, the oxygen and small amounts of other gases are released into the pressure-equalizing reservoir. As the pressure in the first cylinder drops, the nitrogen is desorbed, the valve is closed, and the gas is vented into the ambient air. Most of the oxygen produced is delivered to the patient; part is fed back into the sieves (at greatly reduced pressure) to flush away leftover nitrogen and prepare the zeolite for the next cycle. The atmosphere contains around 21% oxygen and 78% nitrogen; the 1% remainder is a mixture of other gases which pass through this process. A POC system is functionally a nitrogen scrubber capable of consistently producing medical-grade oxygen of up to 90% purity.
The most important consideration for a POC is its ability to supply adequate supplementary oxygen to relieve hypoxia (oxygen deficiency) during normal activities and based on the patient's breathing cycles. Other variables include maximum oxygen purity, the number and increment of settings for adjusting oxygen flow, and battery capacity (or number of add-on batteries) and power cord options for recharging.
Pulse dose
Pulse dose (also called intermittent-flow or on-demand) POCs are the smallest units, often weighing as little as 5 pounds (2.2 kg). Their small size enables the patient to not waste energy gained from the treatment on carrying them. Here the unit intermittently administers a volume (or bolus) of oxygen in milliliters per breath (mL/breath). Their ability to conserve oxygen is key to keeping the units so compact without sacrificing the duration of oxygen supply. Most of the current POC systems provide oxygen on a pulse (on-demand) delivery and are used with a nasal cannula to deliver the oxygen to the patient.
Continuous flow
With continuous flow units, oxygen delivery is measured in LPM (liters per minute). Providing continuous flow requires a larger molecular sieve and pump/motor assembly, and additional electronics. This increases the device’s size and weight (approximately 18–20 lbs).
With on-demand or pulse flow, delivery is measured by the size (in milliliters) of the "bolus" of oxygen per breath.
Some Portable Oxygen Concentrator units offer both continuous flow as well as pulse flow oxygen.
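A small arithmetic sketch can make the difference between the two delivery modes concrete: continuous flow delivers a fixed volume per minute regardless of breathing, while pulse dose delivers a bolus only on inhalation. The bolus size and breathing rate used below are illustrative assumptions, not specifications of any particular device.

```python
# Rough comparison of oxygen delivered per minute by a continuous-flow
# setting versus a pulse-dose setting; all numbers are illustrative.
def continuous_flow_ml_per_min(lpm: float) -> float:
    """Continuous flow delivers the set liters per minute regardless of breathing."""
    return lpm * 1000.0

def pulse_dose_ml_per_min(bolus_ml: float, breaths_per_min: float) -> float:
    """Pulse dose delivers a fixed bolus only when a breath is detected."""
    return bolus_ml * breaths_per_min

print(continuous_flow_ml_per_min(2.0))    # 2000.0 mL/min at a 2 LPM setting
print(pulse_dose_ml_per_min(16.0, 20.0))  # 320.0 mL/min at a 16 mL bolus, 20 breaths/min
```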
Some uses
Medical:
Allows patients to utilize oxygen therapy 24/7; mortality with round-the-clock use has been reported to be as much as 1.94 times lower than with overnight-only use.
Helps improve exercise tolerance, by allowing the user to exercise longer.
Helps increase stamina throughout day-to-day activities.
A POC is a safer option than carrying around an oxygen tank since it makes the purer gas on demand.
POC units are consistently smaller and lighter than tank-based systems and can provide a longer supply of oxygen.
Commercial:
Glass blowing industry
Skin care
Non-pressurized aircraft
Nightclub oxygen bars although doctors and the FDA have expressed some concern with this.
FAA approval
On 13 May 2009, the United States Department of Transportation (DOT) ruled that air carriers conducting passenger flights of greater capacity than 19 seats must allow travelers with a disability to use an FAA-approved POC. The DOT rules have been adopted by many international airlines. A list of POCs approved for air travel is on the FAA website.
Nighttime use
On-demand units are not advised for patients who experience oxygen desaturation due to sleep apnea, and a CPAP mask is generally advised for them.
For patients whose desaturation is due to shallow breathing, the nighttime use of POCs is a useful therapy.
Nighttime use has been facilitated by the advent of alarms and technology that detects a patient's slower breathing during sleep and adjusts the flow or bolus size accordingly.
See also
References
Medical equipment
Oxygen therapy | Portable oxygen concentrator | [
"Biology"
] | 1,140 | [
"Medical equipment",
"Medical technology"
] |
17,446,165 | https://en.wikipedia.org/wiki/T%C3%BCrksat%203A | Türksat 3A is a Turkish communications satellite, operated by Türksat. It was constructed by Thales Alenia Space, based on the Spacebus 4000B2 satellite bus, and was launched by Arianespace atop an Ariane 5ECA launch vehicle, along with the British Skynet 5C satellite, in a dual-payload launch on 12 June 2008 at 22:05:02 GMT, from ELA-3 at the Guiana Space Centre in Kourou, French Guiana.
It is part of the Turksat series of satellites, and is placed in geosynchronous orbit at 42°E to provide communications services to Turkey, Europe and the Middle East.
The contract for the construction and in-orbit delivery of Türksat 3A was announced in February 2006.
Positioned at 42°E, Türksat 3A replaced the aging Türksat 1C, which entered service in 1996. It consists of 24 Ku band transponders, nine with 36 MHz, 12 with 36 MHz and 12 with 72 MHz bandwidth. Turksat 3A was originally intended to start services at the beginning of 2008.
See also
Turksat (satellite)
References
External links
Turksat 3A Frequencies
Turksat 4A Channel List
Turksat Cable TV Service
Spacecraft launched in 2008
Communications satellites of Turkey
Communications satellites in geostationary orbit
Turksat 3A | Türksat 3A | [
"Astronomy"
] | 266 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
17,446,349 | https://en.wikipedia.org/wiki/Eberlein%E2%80%93%C5%A0mulian%20theorem | In the mathematical field of functional analysis, the Eberlein–Šmulian theorem (named after William Frederick Eberlein and Witold Lwowitsch Schmulian) is a result that relates three different kinds of weak compactness in a Banach space.
Statement
Eberlein–Šmulian theorem: If X is a Banach space and A is a subset of X, then the following statements are equivalent:
each sequence of elements of A has a subsequence that is weakly convergent in X
each sequence of elements of A has a weak cluster point in X
the weak closure of A is weakly compact.
A set A (in any topological space) can be compact in three different ways:
Sequential compactness: Every sequence from A has a convergent subsequence whose limit is in A.
Limit point compactness: Every infinite subset of A has a limit point in A.
Compactness (or Heine-Borel compactness): Every open cover of A admits a finite subcover.
The Eberlein–Šmulian theorem states that these three notions are equivalent for the weak topology of a Banach space.
While this equivalence is true in general for a metric space, the weak topology is not metrizable in infinite dimensional vector spaces, and so the Eberlein–Šmulian theorem is needed.
Applications
The Eberlein–Šmulian theorem is important in the theory of PDEs, and particularly in Sobolev spaces. Many Sobolev spaces are reflexive Banach spaces and therefore bounded subsets are weakly precompact by Alaoglu's theorem. Thus the theorem implies that bounded subsets are weakly sequentially precompact, and therefore from every bounded sequence of elements of that space it is possible to extract a subsequence which is weakly converging in the space. Since many PDEs only have solutions in the weak sense, this theorem is an important step in deciding which spaces of weak solutions to use in solving a PDE.
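Stated in symbols, the consequence used above is the weak sequential compactness of bounded sets in reflexive spaces; the display below is a standard restatement, with conventional notation (the harpoon for weak convergence) that is not taken from the sources cited here.

```latex
% Weak sequential compactness of bounded sets in a reflexive Banach space;
% assumes the amsmath and amssymb packages.
\[
  X \text{ reflexive Banach space}, \quad
  (x_n)_{n \in \mathbb{N}} \subset X \text{ bounded}
  \;\Longrightarrow\;
  \exists\, (x_{n_k})_{k},\; x \in X \ \text{ with } \ x_{n_k} \rightharpoonup x .
\]
```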
See also
Banach–Alaoglu theorem
Bishop–Phelps theorem
Mazur's lemma
James' theorem
Goldstine theorem
References
Banach spaces
Compactness theorems
Theorems in functional analysis | Eberlein–Šmulian theorem | [
"Mathematics"
] | 459 | [
"Compactness theorems",
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical analysis stubs",
"Theorems in topology",
"Theorems in functional analysis"
] |
17,446,447 | https://en.wikipedia.org/wiki/Main%20river | Main rivers () are a statutory type of watercourse in England and Wales, usually larger streams and rivers, but also some smaller watercourses. A main river is designated by being marked as such on a main river map, and can include any structure or appliance for controlling or regulating the flow of water in, into or out of a main river. Every other open watercourse in England and Wales is determined by statute as an 'ordinary watercourse'.
England
The Environment Agency carries out maintenance, improvement or construction work on main rivers to manage flood risk as part of its duties and powers as defined by the Flood and Water Management Act 2010. The Environment Agency's powers to carry out flood defence works apply to main rivers and the sea only; they do not apply to ordinary watercourses. The Environment Agency does not have to maintain or construct new works on main rivers or the sea and it is unlikely to maintain a watercourse to improve its amenity or to stop erosion that does not increase flood risk. Main rivers in England are designated by the Environment Agency; Defra statutory guidance, issued under section 193E of the Water Resources Act 1991, advises that the Environment Agency should classify a watercourse as a main river if it is likely to have flood consequences for significant numbers of people or properties, or if it is likely to contribute to significant flooding across the catchment.
Wales
Natural Resources Wales decides which rivers are classified as main rivers, and carries out works to manage flood risk in the main river catchments.
Map
The main river maps for England and Wales from the Environment Agency are freely available to the public for viewing.
References
Bodies of water
Rivers
Flood control in the United Kingdom | Main river | [
"Environmental_science"
] | 347 | [
"Hydrology",
"Hydrology stubs"
] |
17,446,569 | https://en.wikipedia.org/wiki/Godement%20resolution | The Godement resolution of a sheaf is a construction in homological algebra that allows one to view global, cohomological information about the sheaf in terms of local information coming from its stalks. It is useful for computing sheaf cohomology. It was discovered by Roger Godement.
Godement construction
Given a topological space X (more generally, a topos X with enough points), and a sheaf F on X, the Godement construction for F gives a sheaf Gode(F) constructed as follows. For each point x ∈ X, let F_x denote the stalk of F at x. Given an open set U ⊆ X, define Gode(F)(U) = ∏_{x ∈ U} F_x, the product of the stalks over the points of U.
An inclusion of open subsets V ⊆ U clearly induces a restriction map Gode(F)(U) → Gode(F)(V) (projection onto the factors indexed by V), so Gode(F) is a presheaf. One checks the sheaf axiom easily. One also proves easily that Gode(F) is flabby, meaning each restriction map is surjective. Gode can be turned into a functor because a map between two sheaves induces maps between their stalks. Finally, there is a canonical map of sheaves F → Gode(F) that sends each section to the 'product' of its germs. This canonical map is a natural transformation between the identity functor and Gode.
Another way to view Gode is as follows. Let X_disc be the set X with the discrete topology. Let f : X_disc → X be the continuous map induced by the identity. It induces adjoint direct and inverse image functors f_* and f^*. Then Gode = f_* f^*, and the unit of this adjunction is the natural transformation described above.
Because of this adjunction, there is an associated monad on the category of sheaves on X. Using this monad there is a way to turn a sheaf F into a coaugmented cosimplicial sheaf. This coaugmented cosimplicial sheaf gives rise to an augmented cochain complex that is defined to be the Godement resolution of F.
In more down-to-earth terms, let C^0(F) = Gode(F), and let d^{-1} : F → C^0(F) denote the canonical map. For each i ≥ 0, let C^{i+1}(F) denote Gode(coker(d^{i-1})), and let d^i : C^i(F) → C^{i+1}(F) denote the canonical map. The resulting resolution is a flabby resolution of F, and its cohomology is the sheaf cohomology of F.
References
External links
Sheaf theory
Algebraic topology
Homological algebra | Godement resolution | [
"Mathematics"
] | 440 | [
"Mathematical structures",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Sheaf theory",
"Homological algebra"
] |
17,446,872 | https://en.wikipedia.org/wiki/Talbot%20effect | The Talbot effect is a diffraction effect first observed in 1836 by Henry Fox Talbot. When a plane wave is incident upon a periodic diffraction grating, the image of the grating is repeated at regular distances away from the grating plane. The regular distance is called the Talbot length, and the repeated images are called self images or Talbot images. Furthermore, at half the Talbot length, a self-image also occurs, but phase-shifted by half a period (the physical meaning of this is that it is laterally shifted by half the width of the grating period). At smaller regular fractions of the Talbot length, sub-images can also be observed. At one quarter of the Talbot length, the self-image is halved in size, and appears with half the period of the grating (thus twice as many images are seen). At one eighth of the Talbot length, the period and size of the images is halved again, and so forth creating a fractal pattern of sub images with ever-decreasing size, often referred to as a Talbot carpet. Talbot cavities are used for coherent beam combination of laser sets.
Calculation of the Talbot length
Lord Rayleigh showed that the Talbot effect was a natural consequence of Fresnel diffraction and that the Talbot length can be found by the following formula:
z_T = 2a²/λ
where a is the period of the diffraction grating and λ is the wavelength of the light incident on the grating. However, if the wavelength λ is comparable to the grating period a, this expression may lead to errors of up to 100%. In this case the exact expression derived by Lord Rayleigh should be used:
z_T = λ / (1 − √(1 − λ²/a²))
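As a quick numerical check of the two expressions above, the short script below evaluates both the paraxial and the exact Talbot lengths for an assumed grating period and wavelength; the numbers are arbitrary examples, not values from the literature.

```python
# Paraxial (2a^2/lambda) versus exact (Rayleigh) Talbot length for an
# illustrative grating; the period and wavelength are arbitrary examples.
import math

def talbot_length_paraxial(a, wavelength):
    return 2 * a**2 / wavelength

def talbot_length_exact(a, wavelength):
    return wavelength / (1 - math.sqrt(1 - (wavelength / a) ** 2))

a = 10e-6    # grating period: 10 micrometres
wl = 0.5e-6  # wavelength: 500 nm
print(talbot_length_paraxial(a, wl))  # 4.0e-4 m
print(talbot_length_exact(a, wl))     # about 3.9975e-4 m (slightly smaller)
```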
Fresnel number of the finite size Talbot grating
The number of Fresnel zones that form the first Talbot self-image of a grating of finite transverse size is given by an exact formula. This result is obtained via exact evaluation of the Fresnel–Kirchhoff integral in the near field at the Talbot distance z_T.
The atomic Talbot effect
Due to the quantum mechanical wave nature of particles, diffraction effects have also been observed with atoms, with effects similar to those in the case of light. Chapman et al. carried out an experiment in which a collimated beam of sodium atoms was passed through two diffraction gratings (the second used as a mask) to observe the Talbot effect and measure the Talbot length. The beam had a mean velocity of corresponding to a de Broglie wavelength of = . Their experiment was performed with 200 and gratings which yielded Talbot lengths of 4.7 and respectively. This showed that for an atomic beam of constant velocity, using the de Broglie wavelength in place of the optical wavelength, the atomic Talbot length can be found in the same manner.
Nonlinear Talbot effect
The nonlinear Talbot effect results from self-imaging of the generated periodic intensity pattern at the output surface of the periodically poled LiTaO3 crystal. Both integer and fractional nonlinear Talbot effects were investigated.
In the cubic nonlinear Schrödinger equation, a nonlinear Talbot effect of rogue waves has been observed numerically.
The nonlinear Talbot effect was also realized in linear, nonlinear and highly nonlinear surface gravity water waves. In those experiments, the group observed that higher-frequency periodic patterns at the fractional Talbot distance disappear. Further increases in wave steepness led to deviations from the established nonlinear theory: unlike the periodic revival that occurs in the linear and nonlinear regimes, in highly nonlinear regimes the wave crests exhibit self-acceleration, followed by self-deceleration at half the Talbot distance, thus completing a smooth transition of the periodic pulse train by half a period.
Applications of the optical Talbot effect
The optical Talbot effect can be used in imaging applications to overcome the diffraction limit (e.g. in structured illumination fluorescence microscopy).
Moreover, its capacity to generate very fine patterns is also a powerful tool in Talbot lithography.
The Talbot cavity is used for the phase-locking of the laser sets.
In experimental fluid dynamics, the Talbot effect has been implemented in Talbot interferometry to measure displacements and temperature, and deployed with laser-induced fluorescence to reconstruct free surfaces in 3D, and measure velocity.
See also
Angle-sensitive pixel
References
External links
Talbot's 1836 paper via Google Books
Rayleigh's 1881 paper via Google Books
Undergraduate thesis by Rob Wild (PDF)
Talbot effect observed over space-time for the first time
Diffraction | Talbot effect | [
"Physics",
"Chemistry",
"Materials_science"
] | 886 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
17,446,992 | https://en.wikipedia.org/wiki/Elevated%20plus%20maze | The elevated plus maze (EPM) is a test measuring anxiety in laboratory animals that usually uses rodents as a screening test for putative anxiolytic or anxiogenic compounds and as a general research tool in neurobiological anxiety research such as PTSD and TBI. The model is based on the test animal's aversion to open spaces and tendency to be thigmotaxic. In the EPM, this anxiety is expressed by the animal spending more time in the enclosed arms. The validity of the model has been criticized as non-classical clinical anxiolytics produce mixed results in the EPM test. Despite this, the model is still commonly used for screening putative anxiolytics and for general research into the brain mechanisms of anxiety.
Method
The test uses an elevated, plus-shaped (+) apparatus with two open and two enclosed arms. The behavioral model is based on the general aversion of rodents to open spaces. This aversion leads to thigmotaxis: a preference for remaining in enclosed spaces or close to the edges of a bounded space. In the EPM, this translates into the animals limiting their movement to the enclosed arms.
Anxiety reduction is indicated in the plus-maze by an increase in the proportion of time spent in the open arms (time in open arms/total time in open or closed arms) and an increase in the proportion of entries into the open arms (entries into open arms/total entries into open or closed arms). The total number of arm entries and number of closed-arm entries are sometimes used as measures of general activity. The relationship between the EPM and other tests of exploratory activity (open-field and emergence) has been analyzed in two mouse strains.
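As a minimal illustration of how these proportions are computed from scored session data, the sketch below assumes that per-arm times and entry counts have already been tallied; the function and variable names, and the example numbers, are illustrative rather than taken from any scoring software.

```python
# Minimal sketch of computing standard EPM anxiety measures from tallied
# per-session data; the example numbers are illustrative only.
def epm_measures(time_open_s, time_closed_s, entries_open, entries_closed):
    """Return open-arm time ratio, open-arm entry ratio and total arm entries."""
    total_time = time_open_s + time_closed_s
    total_entries = entries_open + entries_closed
    open_time_ratio = time_open_s / total_time if total_time else 0.0
    open_entry_ratio = entries_open / total_entries if total_entries else 0.0
    return open_time_ratio, open_entry_ratio, total_entries

# Example: 60 s in open arms, 240 s in closed arms, 4 open-arm and 12 closed-arm entries.
print(epm_measures(60, 240, 4, 12))  # (0.2, 0.25, 16)
```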
Criticism
While EPM is the most commonly employed animal behavioral model of anxiety, there are several issues concerning the validity of the model. Classical clinical anxiolytics, such as benzodiazepines (e.g., Valium), do reduce measures of anxiety in EPM. However, more novel compounds, such as 5-HT1A agonists (e.g., Buspar) give mixed results. Selective serotonin reuptake inhibitors and tricyclic antidepressants, which are commonly employed in clinical settings to treat anxiety disorders, also do not lead to a stable anxiolytic effect on EPM. This raises the possibility that EPM is a suitable model for testing GABA-related compounds, such as benzodiazepines or direct GABAA agonists, but not for other drugs. Despite this, the model is commonly employed for screening putative anxiolytics and for general research into the brain mechanisms of anxiety, likely due to the ease of employment and the vast number of studies already in the literature.
Variations
Elevated zero maze
The elevated zero maze (EZM) is an elevated circular runway with alternating open, light areas and enclosed, dark areas. The continuous nature of this apparatus eliminates the problem in the EPM of how to account for the animal's presence in the center area. In the EPM test, animals may spend up to 30% of their time in the ambiguous central start area or return to it often, making it difficult to evaluate the biological significance of anxiety-related behaviour. Animals will return to the central area because they are habituated to that area and associate it with being "safe".
Untreated rodents show a higher exploration of open areas in the EZM than in the EPM. This could indicate that the EPM inhibits exploration, but the fact that rodents spend time in the central zone of the EPM needs to be taken into account. The EZM is more sensitive to changes than the EPM because the baseline level in the EZM is lower than in the EPM.
Elevated T maze
The elevated T maze (ETM) has three arms in the shape of the letter "T". One arm is closed and perpendicular to the other two arms, which are open. This test is designed to observe anxiety and how it affects learning. The rodent is placed in the enclosed arm and allowed to explore. The trial ends when the rodent sets all four paws in one of the open arms. Rodents are given multiple trials until they learn to remain in the enclosed arm for 300 seconds, a measurement of inhibitory avoidance. Depending on what the rodents were treated with during the training sessions, they learn at different rates, giving information on how the brain stores memories. This test can be used to assess long-term memory: when a rodent has been sufficiently trained, researchers test it again after a week to observe whether it still remembers to stay in the enclosed arm.
Plus-maze discriminative avoidance test
Like the standard EPM, the apparatus used in the plus-maze discriminative avoidance test (PMDAT) has four arms. This test has been used to investigate interactions between aversive memory and anxiety responses in rodents. The apparatus has two open arms opposite to two enclosed arms. In this test, one of the enclosed arms is paired with aversive stimuli (e.g. bright light, loud white noise). During training, animals are placed in the apparatus facing the intersection between the open arms. Each time the animal enters the aversive enclosed arm, the aversive stimulus is presented until the animal leaves the arm. Upon a second exposure to the maze (e.g. 24 h later) the aversive stimulus is no longer presented. Retention of the aversive memory is assessed based on the relative time spent in the non-aversive arm compared to the previously aversive arm, and anxiety behavior is calculated based on the time spent in the open arms during the training session.
Multivariate Concentric Square Field test (MCSF-test)
The MCSF-test is a behaviour model used to study risk assessment, risk taking, anxiety and security-seeking behaviour. It has a completely different design compared to the T maze, but instead of requiring a battery of different behaviour models, this single test can be used to measure a variety of dependent and independent variables. In this context "multivariate" means that the subject has a free choice of different environments contained within the same apparatus and session.
The MCSF consists of different areas associated with risk-taking and shelter-seeking. The subject can therefore choose between locations with different qualities regarding open areas, illuminations, shelter and exploratory challenges. The arena consists of a dark room enclosed by walls and ceiling, dimly illuminated corridors, open area with moderate illumination, a hole-board area which requires a certain physical effort to reach and an elevated bridge with high illumination.
Elevated plus maze for humans using virtual reality
Using a combination of virtual reality and real-world elements, the EPM has been adapted for use with humans. Participants in the study were placed on a 3.5 by 3.5 meter, 0.3 m high wooden cross while wearing a virtual reality headset. In the virtual environment, the real cross was matched in exact dimension and orientation. The open arms were above a 50-meter drop over open water, while the closed arms were surrounded by a stable and firm surface around the platform. Similar to the observed rodent behavior on an EPM, participants reported higher anxiety on open arms while also avoiding them more. Participants with high anxiety also exhibited higher avoidance of open arms.
See also
Hole-board test
Morris water navigation task
Vogel conflict test
References
External links
Elevated Plus Maze Academic Blog
Video of the elevated plus maze
Animal cognition
Animal testing mazes
Ethology | Elevated plus maze | [
"Biology"
] | 1,576 | [
"Behavior",
"Animals",
"Behavioural sciences",
"Animal cognition",
"Ethology"
] |
17,447,820 | https://en.wikipedia.org/wiki/Eye%20beam | In the physics inherited from Plato (although rejected by Aristotle), an eye beam generated in the eye was thought to be responsible for the sense of sight. The eye beam darted by the imagined basilisk, for instance, was the agent of its lethal power, given the technical term extramission.
The exaggerated eyes of fourth-century Roman emperors like Constantine the Great reflect this character (L. Safran, "What Constantine saw: reflections on the Capitoline Colossus, visuality and early Christian studies", Millennium 3 (2006): 43-73; noted in Paul Stephenson, Constantine, Roman Emperor, Christian Victor, 2010: notes 333). The concept found expression in poetry into the 17th century, most famously in John Donne's poem "The Extasie":
Our eye-beams twisted, and did thred
Our eyes, upon one double string;
So to'entergraft our hands, as yet
Was all the meanes to make us one,
And pictures in our eyes to get
Was all our propagation.
In the same period John Milton wrote, of having gone blind, "When I consider how my light is spent", meaning that he had lost the capacity to generate eye beams.
Later in the century, Newtonian optics and increased understanding of the structure of the eye rendered the old concept invalid, but it was revived as an aspect of monstrous superhuman capabilities in popular culture of the 20th century.
The emission theory of sight seemed to be corroborated by geometry and was reinforced by Robert Grosseteste.
In Algernon Swinburne's "Atalanta in Calydon" the conception is revived for poetic purposes, enriching the poem's pagan context in the Huntsman's invocation of Artemis:
Hear now and help, and lift no violent hand,
But favourable and fair as thine eye's beam,
Hidden and shown in heaven".
In T. S. Eliot's rose garden episode that introduces "Burnt Norton" eyebeams persist in the fusion of possible pasts and presents like unheard music:
The unheard music hidden in the shrubbery
And the unseen eyebeam crossed, for the roses
Had the look of flowers that are looked at.
The New Zealand poet Edward Tregear instanced "the lurid eye-beam of the angry Bull"— Taurus of the zodiac— among the familiar stars above the alien wilderness of New Zealand.
In computer graphics, the concept of eye beams is resurrected in ray casting in which the bouncing of rays of light cast from a viewpoint around a scene is simulated computationally.
See also
Emission theory (vision)
Notes
Obsolete theories in physics
Vision | Eye beam | [
"Physics"
] | 556 | [
"Theoretical physics",
"Obsolete theories in physics"
] |
17,448,425 | https://en.wikipedia.org/wiki/Mand%20%28psychology%29 | Mand is a term that B.F. Skinner used to describe a verbal operant in which the response is reinforced by a characteristic consequence and is therefore under the functional control of relevant conditions of deprivation or aversive stimulation. One cannot determine, based on form alone, whether a response is a mand; it is necessary to know the kinds of variables controlling a response in order to identify a verbal operant. A mand is sometimes said to "specify its reinforcement" although this is not always the case. Skinner introduced the mand as one of six primary verbal operants in his 1957 work, Verbal Behavior.
Chapter three of Skinner's work, Verbal Behavior, discusses a functional relationship called the mand. A mand is a form of verbal behavior that is controlled by deprivation, satiation, or what is now called motivating operations (MO), as well as a controlling history. An example of this would be asking for water when one is water deprived ("thirsty"). It is tempting to say that a mand describes its reinforcer, which it sometimes does. But many mands have no correspondence to the reinforcer. For example, a loud knock may be a mand "open the door" and a servant may be called by a hand clap as much as a child might "ask for milk."
Mands differ from other verbal operants in that they primarily benefit the speaker, whereas other verbal operants function primarily for the benefit of the listener. This is not to say that mands function exclusively in favor of the speaker, however; Skinner gives the example of the advice, "Go west!" as having the potential to yield consequences which will be reinforcing to both speaker and listener. When warnings such as "Look out!" are heeded, the listener may avoid aversive stimulation.
The Lamarre & Holland (1985) study on mands would be one example of a research study in this area.
Dynamic properties
The mand form, being under the control of deprivation and stimulation, will vary in energy level. Dynamic qualities are to be understood as variations that arise as a function of multiple causes. Dynamic, in this case, as opposed to how someone reading from a text might sound if they do not simulate the normal dynamic qualities of verbal behavior. Mands tend to be permanent when they are acquired.
Extended mands
Emitting mands to objects or animals that cannot possibly supply an appropriate response is one example of the extended mand. Shouting "stop!" at someone out of earshot (perhaps a character in a film) who is about to hurt themselves is another. Extended mands occur due to extended stimulus control. In the case of an extended mand, the listener is unable to deliver consequences that would reinforce the mand, but they have enough in common with listeners that have previously reinforced the mand that stimulus control can be inferred.
Superstitious mands
Mands directed to inanimate objects may be said to be superstitious mands. Mands to an unreliable car to "come on and start" for example may be due to a history of intermittent reinforcement.
Magical mands
A magical mand is a mand form where the consequences specified in the mand have never occurred. The form "Give me a million dollars" has never before produced a million dollars and so would be classified as a magical mand. Skinner posits that many literary mands are of the magical form. Prayer might also be analyzed as belonging in one of the above three categories, depending upon one's opinion of the likelihood and mechanism of its answer.
Clinical application
Failure to mand adequately appears to be correlated with destructive behavior. This seems to be especially true for those with developmental disabilities.
See also
Imperative mood
Tact (psychology)
References
Behaviorism | Mand (psychology) | [
"Biology"
] | 783 | [
"Behavior",
"Behaviorism"
] |
17,448,807 | https://en.wikipedia.org/wiki/Display%20Control%20Channel | Display Control Channel (DCC) is an advanced method of implementing on-screen display (OSD) technology on KVM switches. It was developed and patented by ConnectPRO, a company that has been providing KVM and networking solutions since 1992.
Overview
Traditional OSD technology used on KVM switches displays control and selection functions, such as port selection and computer connection status, by overlaying this information on top of the selected channel's existing display. This method can result in inconsistent and unreliable image quality, as the size and positioning of the OSD depend on the video resolution of each connected system.
In contrast, KVM switches utilizing DCC technology display the configuration selection on a dedicated independent video channel. This approach offers several advantages:
More secure and reliable quality of the controlling menu
Consistent display regardless of connected systems' video resolutions
Greater functionality and more programming possibilities
Applications
DCC technology is specifically designed for use with KVM (Keyboard, Video, Mouse) switches, which allow users to control multiple computers from a single set of input devices. It is particularly useful in scenarios where high-quality display and control are crucial, such as in data centers, control rooms, or multi-computer workstations.
Technology ownership
Display Control Channel is patented by ConnectPRO, a company that has been providing KVM and networking solutions since 1992.
See also
KVM switch
On-screen display
Display Data Channel
References
External links
Computer peripherals
Out-of-band management | Display Control Channel | [
"Technology"
] | 293 | [
"Computer peripherals",
"Computing stubs",
"Components"
] |
17,449,730 | https://en.wikipedia.org/wiki/3-Mercaptopyruvic%20acid | 3-Mercaptopyruvic acid is an intermediate in cysteine metabolism. It has been studied as a potential treatment for cyanide poisoning, but its half-life is too short for it to be clinically effective. Instead, prodrugs, such as sulfanegen, are being evaluated to compensate for the short half-life of 3-mercaptopyruvic acid.
See also
3-mercaptopyruvate sulfurtransferase
References
Carboxylic acids
Thiols
Alpha-keto acids | 3-Mercaptopyruvic acid | [
"Chemistry"
] | 115 | [
"Carboxylic acids",
"Thiols",
"Functional groups",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
17,450,310 | https://en.wikipedia.org/wiki/Cysteic%20acid | Cysteic acid also known as 3-sulfo--alanine is the organic compound with the formula HO3SCH2CH(NH2)CO2H. It is often referred to as cysteate, which near neutral pH takes the form −O3SCH2CH(NH3+)CO2−.
It is an amino acid generated by oxidation of cysteine, whereby a thiol group is fully oxidized to a sulfonic acid/sulfonate group. It is further metabolized via 3-sulfolactate, which converts to pyruvate and sulfite/bisulfite. The enzyme L-cysteate sulfo-lyase catalyzes this conversion. Cysteate is a biosynthetic precursor to taurine in microalgae. By contrast, most taurine in animals is made from cysteine sulfinate.
References
Alpha-Amino acids
Sulfur amino acids
Sulfonic acids | Cysteic acid | [
"Chemistry"
] | 203 | [
"Functional groups",
"Organic compounds",
"Sulfonic acids",
"Organic compound stubs",
"Organic chemistry stubs"
] |
17,450,312 | https://en.wikipedia.org/wiki/O-823 | O-823 is a drug which is a cannabinoid derivative that is used in scientific research. It is described as a mixed agonist/antagonist at the cannabinoid receptor CB1, meaning that it acts as an antagonist when co-administered alongside a more potent CB1 agonist, but exhibits weak partial agonist effects when administered by itself.
References
Cannabinoids
Benzochromenes
Hydroxyarenes
Alkyne derivatives
Nitriles | O-823 | [
"Chemistry"
] | 96 | [
"Nitriles",
"Functional groups"
] |
17,451,531 | https://en.wikipedia.org/wiki/Critical%20Assessment%20of%20Prediction%20of%20Interactions | Critical Assessment of PRediction of Interactions (CAPRI) is a community-wide experiment in modelling the molecular structure of protein complexes, otherwise known as protein–protein docking.
CAPRI is an ongoing series of events in which researchers throughout the community attempt to dock the same proteins, as provided by the assessors. Rounds take place about every six months. Each round contains between one and six target protein–protein complexes whose structures have been recently determined experimentally. The coordinates are held privately by the assessors, with the co-operation of the structural biologists who determined them. The CAPRI experiment is double-blind, in the sense that the submitters do not know the solved structure, and the assessors do not know the correspondence between a submission and the identity of its creator.
See also
Critical Assessment of Techniques for Protein Structure Prediction (CASP) — a similar exercise in the field of protein structure prediction
Critical Assessment of Functional Annotation (CAFA)
References
External links
CAPRI or: What is the State of Protein-Protein Docking?
List of prediction servers participating in CAPRI
ClusPro
GRAMM-X
FireDock
HADDOCK — High-Ambiguity-Driven protein–protein DOCKing
pyDockWEB — Structural prediction of protein–protein interactions
PatchDock
SmoothDock
3D-Garden — Global and Restrained Docking Exploration Nexus
SwarmDock
DOCK/PIE
SPIDER - Scoring Protein-Protein Interactions
Molecular modelling | Critical Assessment of Prediction of Interactions | [
"Chemistry"
] | 283 | [
"Molecular physics",
"Molecular and cellular biology stubs",
"Biochemistry stubs",
"Theoretical chemistry",
"Molecular modelling",
"Molecular physics stubs"
] |
17,452,289 | https://en.wikipedia.org/wiki/Baird-Parker%20agar | Baird-Parker agar is a type of agar used for the selective isolation of gram-positive Staphylococci species. It contains lithium chloride and tellurite to inhibit the growth of alternative microbial flora, while the included pyruvate and glycine promote the growth of Staphylococci. Staphylococcus colonies show up black in colour with clear zones produced around them.
History
Baird-Parker first published an academic article about this agar medium, for the purposes of improved diagnostics and the isolation of coagulase-positive Staphylococci, in 1962. He developed the medium from the tellurite-glycine formulation of Zebovitz et al. and improved its reliability in isolating coagulase-positive staphylococci from foods. Baird-Parker added egg yolk emulsion as a diagnostic agent and sodium pyruvate to protect damaged cells and aid their recovery. It is now widely recommended by national and international bodies for the isolation of coagulase-positive staphylococci. Baird-Parker agar is commonly used as a method for the enumeration of coagulase-positive staphylococci (Staphylococcus aureus and other species) in food and animal feedstuffs.
References
Microbiological media | Baird-Parker agar | [
"Biology"
] | 279 | [
"Microbiological media",
"Microbiology equipment"
] |
17,452,567 | https://en.wikipedia.org/wiki/Bolton%20Hall%20%28California%29 | Bolton Hall is a historic American Craftsman-era stone building in Tujunga, Los Angeles, California. Built in 1913, Bolton Hall was originally used as a community center for the utopian community of Los Terrenitos. From 1920 until 1957, it was used as an American Legion hall, the San Fernando Valley's second public library, Tujunga City Hall, and a jail. In 1957, the building was closed. For more than 20 years, Bolton Hall remained vacant and was the subject of debates over demolition and restoration. Since 1980, the building has been operated by the Little Landers Historical Society as a local history museum.
Los Terrenitos
In the early 1900s, the area now known as Tujunga was undeveloped land, the former Rancho Tujunga. In 1913, William Ellsworth Smythe, working alongside M.V. Hartranft (they had purchased the land together), formed a utopian community called Los Terrenitos — Spanish for The Little Lands. Smythe was the leader of the Utopian Little Landers movement and had already established colonies in Idaho and San Ysidro, California. He advocated the principle that families settling on an acre or two of land could support themselves and create a flourishing community.
Town lots in Tujunga were sold to settlers at $800 for an acre of land and a life of independence.
Bolton Hall was built in 1913 by George Harris, a self-described "nature builder", rock mason, and stone sculptor. He named it "Bolton Hall", after Bolton Hall (1854–1938), a New York City progressive activist and proponent of the back-to-the-land movement. Harris urged that the hall be built solely of native materials and selected a design that he said borrowed nothing from European architecture. Harris and the Terrenitos community built the hall using granite chunks and stones from nearby fields and hillsides and from the Tujunga Wash. Stones were placed in position in the structure based on the positions in which they settled after falling from a cliff.
The spacious main room has shiny hardwood floors and a massive rock fireplace in the center. The fireplace is spanned by a mantel fashioned from a single eucalyptus tree. Beneath the mantel, Harris inscribed the words "To the Spiritual Life of Soil". The structure was built at a cost of $6,480.
When Bolton Hall opened in August 1913, the Los Angeles Times reported that it marked the "awakening of the Vale of Monte Vista" (the former name of Sunland):
First settled nearly thirty years ago, the valley has shown more life in the past six months than in all its previous history. Los Terrenitos, the settlement of "'little-landers," has made wonderful progress since its inception, five months ago, about 200 families having purchased land, not all of whom are yet on the ground. But enough are here to make it a beehive of industry. The dedication of "Bolton Hall" last Saturday aroused much enthusiasm among the "little-landers."
The Times also reported that Bolton Hall was "built to stand for ages", and it has survived the 1971 Sylmar and 1994 Northridge earthquakes without a scratch.
During the hall's early years, it hosted community meetings patterned after those held in old New England town halls. Over the next decade, it was used for church services, musical performances, lectures, motion picture shows, the Women's Club, dances, and pot-luck suppers. It also was the site of the San Fernando Valley's second public library.
City Hall
After World War I, the hall was used as an American Legion hall for several years. In 1925, Tujunga incorporated as a city, and Bolton Hall became Tujunga City Hall. In 1932, Tujunga was consolidated with the City of Los Angeles, and the building was used for the next 25 years for a variety of municipal services, including the San Fernando Valley's second public library and a jail. However, it remained known as Tujunga City Hall until its closure in 1957.
Vacancy and preservation efforts
With its closure in 1957, Bolton Hall was put up for sale. However, the city estimated it would take more than $42,000 to bring the building up to code, and two attempts to auction the site failed to draw any bids. When the city announced in 1959 that it intended to tear down the old building and convert the property into a neighborhood park, the Little Landers Historical Society of Sunland and Tujunga was formed to fight for the preservation of Bolton Hall. The group watered the trees during the summer and collected more than 400 signatures on petitions seeking to preserve the structure. The group brought in a representative of the National Trust, and the building was found to have historical value. However, the group had difficulty coming up with funding to maintain the structure and bring it up to current safety standards. Bolton Hall's future remained uncertain, and the property vacant, for more than twenty years as the "Save Bolton Hall" movement continued to have difficulty raising funds.
Historical museum
In 1976, an agreement was reached to have the City of Los Angeles renovate the building's exterior with the Little Landers Historical Society committing to renovate the interior for operation as a historical museum. At the time, the building had been kept "tightly sealed" since 1957. However, by 1979, funding was still lacking, and the Times reported that "the vacant old building houses only cobwebs and dust."
In late 1979, Los Angeles City Councilman Bob Ronka secured $169,000 in federal funds to augment $23,800 raised by the Little Landers Historical Society. With nearly $200,000 in public and private funds, the building was finally restored after it had sat vacant for more than 20 years.
The Bolton Hall Historical Museum opened in 1980. The artifacts displayed at the museum include the gavel used by the presiding officer during early town meetings, building tools used in the construction of Bolton Hall, old photographs, and the old clock from the first Tujunga Post Office. The museum also has a congratulatory letter from Bolton Hall, the New York lawyer and writer for whom the building was named. Near the front entrance of the museum, there is a tobacco-stained stone that juts out from the wall; it was used by early colonists to clean out their pipes when the building was used as a church.
The museum is open to the public every Sunday and Tuesday from 1:00 to 4:00 p.m. Admission is free.
See also
List of Registered Historic Places in Los Angeles
List of Los Angeles Historic-Cultural Monuments in the San Fernando Valley
Little Landers
Stonehurst Historic Preservation Overlay Zone
References
External links
Bolton Hall Historical Museum
City of Los Angeles Department of Parks and Recreation: Bolton Hall Historical Museum
Little Landers Historical Society.org: Bolton Hall Museum, in Tujunga, California
Buildings and structures in the San Fernando Valley
Community centers in California
History museums in California
Museums in Los Angeles
Event venues established in 1913
Los Angeles Historic-Cultural Monuments
National Register of Historic Places in the San Fernando Valley
Buildings and structures on the National Register of Historic Places in Los Angeles
American Craftsman architecture in California
Mission Revival architecture in California
1913 establishments in California
Buildings and structures on the National Register of Historic Places in Los Angeles County, California
Event venues on the National Register of Historic Places in California
Intentional communities in California
Architecture related to utopias
Stone buildings in the United States | Bolton Hall (California) | [
"Engineering"
] | 1,521 | [
"Architecture related to utopias",
"Architecture"
] |
16,194,231 | https://en.wikipedia.org/wiki/NGC%202060 | NGC 2060 is a star cluster within the Tarantula Nebula in the Large Magellanic Cloud, very close to the larger NGC 2070 cluster containing R136. It was discovered by John Herschel in 1836. It is a loose cluster approximately 10 million years old, within one of the Tarantula Nebula's superbubbles formed by the combined stellar winds of the cluster or by old supernovae.
NGC 2060 is often used synonymously for the supernova remnant N157B (30 Doradus B), which is a larger area of faint nebulosity and strong radio emission. The supernova occurred approximately 5,000 years ago from our point of view. In 1998 a pulsar (named PSR J0537-6910) was discovered with a very fast rotation period of 16 milliseconds and the same approximate age as the supernova remnant. VFTS 102, a runaway O-type main-sequence star found within NGC 2060, is proposed to be a former companion of the pulsar, ejected at the time of the supernova explosion.
NGC 2060 has been identified as one of the few locations for OVz stars, stars with unusually strong HeII 468.6 nm absorption indicative of weak stellar winds and relatively low luminosity for the class. These stars are found in extremely young clusters and are thought to be a very early stage in the evolution of the most massive stars. They are also found in the much more massive NGC 2070 cluster nearby in the Tarantula Nebula.
References
External links
Supernova remnants
Dorado
Large Magellanic Cloud
2060
Tarantula Nebula | NGC 2060 | [
"Astronomy"
] | 336 | [
"Dorado",
"Constellations"
] |
16,194,390 | https://en.wikipedia.org/wiki/Mapa%20Quinatzin | The Mapa Quinatzin is a 16th-century Nahua pictorial document, consisting of three sheets of amatl paper that depict the history of Acolhuacan.
See also
Aztec codices
Codex Xolotl
References
External links
High Definition scans of the codex at the French National Library
A
16th century in the Aztec civilization
16th century in Mexico
16th century in New Spain
Pictograms | Mapa Quinatzin | [
"Mathematics"
] | 79 | [
"Symbols",
"Pictograms"
] |
16,194,485 | https://en.wikipedia.org/wiki/Supersize%20vs%20Superskinny | Supersize vs Superskinny is a British television programme on Channel 4 that featured information about dieting and extreme eating lifestyles. One of the main show features was a weekly comparison between an overweight person, and an underweight person. The two were taken to a feeding clinic, and lived together for five days (later on two days), swapping diets while supervised by Dr Christian Jessen.
Overview
The overweight person swapped diets with the underweight person. While the underweight person was suddenly given more food at one meal than they would usually eat in several days, the overweight person was usually given tea, coffee, a small snack, or sometimes nothing at all. Most of the underweight people were unable to finish their meal, though occasionally the overweight people also refused or struggled to eat their meals, usually after having been in the feeding clinic for a few days. Occasionally both were allowed to leave the feeding clinic for a meal swap, if it was part of both participants' diets.
In earlier series, the show featured a food tube for each person. The tube contained what each person ate and drank in the span of one week.
Usually Dr Jessen used shock tactics to demonstrate how poor someone's diet was. Both participants were occasionally shown the extent of their poor diet - for example, through bags of sugar. The "superskinny" would usually be shown pictures of their body and be told about the drastic long-term health effects. In the second series, the "supersizer" was sent to meet a woman named Lisa, whose obesity had meant that she could no longer care for herself and was receiving an operation because of her weight. In later series, the "supersizer" was sent to the United States to visit someone that was heavier than they were. It was used as a shock tactic to show the "supersizer" what they could become if they did not stop their unhealthy lifestyles.
The show also featured Anna Richardson in the first, second, and third series. In the first series she examined new methods to lose weight by trying diets she found on the Internet, some of which had shocking side effects. For example, Anna attempted laser lipolysis, which went drastically wrong and resulted in severe bruising. She also discovered diabulimia and spoke to Isabelle Caro, a French actress renowned for her underweight figure and anorexia campaign. In the second series, Anna recruited a group of "flab-fighters", women who wanted to lose weight and whose weight was tracked weekly, and she visited Los Angeles to discover ways A-listers would lose weight. The same series also saw a group of four anorexic women attempt to overcome their eating disorder by eating and preparing foods they would usually avoid, with the help of a leading eating disorder specialist. In later series, formerly anorexic journalist Emma Woolf interviewed a number of people who had experienced the effects of eating disorders.
The second, third and fourth series also introduced a section in which a group of people recovering from eating disorders (the second and third series featured people exclusively suffering from anorexia nervosa, while the fourth included a mixture of eating disorders) were overseen by a specialist psychiatrist and by dietitian Ursula Philpot, who co-presented Supersize vs Superskinny and worked to challenge their issues with food.
During the first series in 2008, one feature involved Gillian McKeith, who tried to find a way to "ban big bums" in the UK. She tested out different exercises to tone the buttocks of different groups of ladies, and made a leader board for the most effective.
Criticism
The programme has been the subject of criticism and debate, surrounding its portrayal of eating disorders to a potentially vulnerable audience, and its influence towards public attitudes on eating disorders. Then-chief executive of Norfolk based eating disorder charity Beat, Susan Ringwood, stated in a 2012 Daily Mail article that Supersize vs Superskinny and similar programmes are "triggering", and that they are "deadly [and] not entertainment" for eating disorder sufferers, adding that "[Supersize and similar programmes] don't educate or inform anyone, and they certainly don’t make life any better for someone who has an eating disorder." Natasha Devon, director of positive body image campaign Body Gossip, expressed in 2011 blog post her belief in the "reductive" manner in which eating disorders are portrayed in the programme.
Clare Stephens, in an opinion piece written for MamaMia in 2022, expressed concern over the body shaming behaviours exhibited by the "superskinny" subjects of the programme towards their "supersize" counterparts, along with the behaviours of Christian Jessen, concluding that "at best, the show did a lot to confuse audiences about the relationships between eating and exercise and health and weight. At worst, it put its audience at risk of the vast manifestations of disordered eating that can emerge from distorted beliefs around eating, shape and weight. That impact, unfortunately, is impossible to measure".
Niamh Langton of The Oxford Blue, speaking of the "subliminal damage" caused by Supersize vs Superskinny and similar programming, stated of the show in 2021: "in its new, easily consumed form, we see the unrelenting shame which makes the show so popular. Shame, humiliation and self-loathing are the active ingredients. Shows like Supersize vs Superskinny presume that weight is something we can totally control, but weight is a characteristic, and not a behaviour. It is too easy to be drawn into such an over-simplification of weight-loss; a narrative that is neither helpful nor accurate. Regardless of what its creators may argue in its defence, Supersize vs Superskinny, as it exists on YouTube, holds no positive, constructive message. In watching each short clip or full episode, we expose ourselves to harmful content. Such viewing habits become a guilty pleasure. You tell yourself – 'I'm not like her, I could never eat that, I could never let myself get that way.' Really, you’re indoctrinating yourself with a toxic teaching: 'don't eat that or you’ll look like her' – 'you'll be 'Super' too'."
Transmissions
In 2011, a 4-episode children's version titled Supersize vs Superskinny Kids was produced and aired between 21 and 25 March 2011.
References
External links
Review, Leicester Mercury
2008 British television series debuts
2014 British television series endings
Channel 4 original programming
Eating behaviors of humans
Obesity in the United Kingdom
Television series by Banijay
British English-language television shows | Supersize vs Superskinny | [
"Biology"
] | 1,377 | [
"Eating behaviors",
"Behavior",
"Eating behaviors of humans",
"Human behavior"
] |
16,194,604 | https://en.wikipedia.org/wiki/Ilan%20Ramon%20Youth%20Physics%20Center | The Ilan Ramon Youth Physics Center was established in honor of Ilan Ramon, Israel's first astronaut. The center was established in 2007 by the Rashi foundation in order to allow high school students that are interested in physics access to high grade laboratory and astronomy equipment. The center is located at the Ben Gurion University of the Negev and hosts students of all ages and of a wide cultural variety.
The objectives of the Center are to advance the study of physics and astronomy in high school; to increase the number of pupils who take physics at matriculation level and improve their matriculation results; to establish and operate physics centers in schools; and to increase the number of physics and engineering students in academic institutions.
Astronomical equipment
MEADE LX200R 16" robotic telescope, used for research.
GOTO E5 planetarium, used for showing the night sky
MEADE LightSwitch 8"
Meade LightBridge 10"
MEADE ETX-125 PE
Coronado PST solar telescope, used for daytime observations
Laboratory equipment
The laboratories are equipped with a variety of physics experiment kits, for both modern and classical physics.
See also
List of astronomical observatories
References
External links
Official site
About the winner of the first place in the international "First Step to Nobel Prize in Physics" research project competition
Astronomy education
Astronomical observatories in Israel
Ben-Gurion University of the Negev
2007 establishments in Israel
Space program of Israel | Ilan Ramon Youth Physics Center | [
"Astronomy"
] | 283 | [
"Astronomy education"
] |
16,196,429 | https://en.wikipedia.org/wiki/Intel%20P45 | The P45 Express (codenamed Eaglelake) is a mainstream desktop computer chipset from Intel released in Q2 2008. The first mainboards featuring the P45 chipset were shown at CeBIT 2008.
The P45 Express chipset supports Intel's LGA 775 socket and Core 2 Duo and Quad processors. It is a 65 nm chipset, compared to the earlier generation chipsets (P35, X38, X48) which were 90 nm.
Features
1333/1066/800 MT/s front-side bus (FSB); most motherboard manufacturers claim support for up to 1600 MT/s.
PCI Express 2.0, 1 ×16 or 2 ×8 in CrossFire configuration.
Dual-channel DDR2 memory
up to 16 GiB addressable memory; officially up to 800 MHz, though most motherboard manufacturers claim support up to 1200 MHz
Dual-channel DDR3 memory
up to 8 GiB addressable memory; officially up to 1066 MHz, though most motherboard manufacturers claim support up to 1333 MHz
ICH10 / ICH10R southbridge
Supports 45 nm processors
See also
List of Intel chipsets
References
External links
P45 Express Chipset
82P45 Memory Controller Hub
Intel P45 Express Chipset Overview
P45 | Intel P45 | [
"Technology"
] | 257 | [
"Computing stubs",
"Computer hardware stubs"
] |
16,196,899 | https://en.wikipedia.org/wiki/Behavior%20analysis%20of%20child%20development | The behavioral analysis of child development originates from John B. Watson's behaviorism.
History
In 1948, Sidney Bijou took a position as associate professor of psychology at the University of Washington and served as director of the university's Institute of Child Development. Under his leadership, the Institute added a child development clinic and nursery school classrooms, where research was conducted that would later accumulate into the area now called "Behavior Analysis of Child Development". Skinner's behavioral approach and Kantor's interbehavioral approach were adopted in Bijou and Baer's model. They created a three-stage model of development (i.e., basic, foundational, and societal). Bijou and Baer looked at these socially determined stages, as opposed to organizing behavior into change points or cusps (behavioral cusp). In the behavioral model, development is considered a behavioral change. It is dependent on the kind of stimulus and the person's behavioral and learning function. Behavior analysis in child development takes a mechanistic, contextual, and pragmatic approach.
From its inception, the behavioral model has focused on prediction and control of the developmental process. The model focuses on the analysis of a behavior and then synthesizes the action to support the original behavior. The model was changed after Richard J. Herrnstein studied the matching law of choice behavior, which he developed by studying reinforcement in the natural environment. More recently, the model has focused on behavior over time and on the way that behavioral responses are selected and form into stable, repetitive patterns of responding. A detailed history of this model was written by Pelaez. In 1995, Henry D. Schlinger, Jr. provided the first behavior-analytic text since Bijou and Baer's to comprehensively show how behavior analysis—a natural science approach to human behavior—could be used to understand existing research in child development. In addition, the quantitative behavioral developmental model by Commons and Miller is the first behavioral theory and research to address notions similar to stage.
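For reference, the matching law that Herrnstein described, and its generalized form, can be stated compactly. The notation below is the conventional one used in the quantitative-analysis literature rather than a quotation from the sources discussed above.

```latex
% Strict matching law: the relative rate of responding on alternative 1
% matches the relative rate of reinforcement obtained from it.
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}

% Generalized matching law, with sensitivity s and bias b,
% fitted to choice data as a straight line in log-log coordinates.
\log\!\left(\frac{B_1}{B_2}\right) = s \,\log\!\left(\frac{R_1}{R_2}\right) + \log b
```

Here B1 and B2 denote rates of behavior allocated to two alternatives and R1 and R2 the rates of reinforcement they produce; departures of s from 1 and of b from 1 capture undermatching or overmatching and bias, respectively.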
Research methods
The methods used to analyze behavior in child development are based on several types of measurements. Single-subject research with a longitudinal study follow-up is a commonly-used approach. Current research is focused on integrating single-subject designs through meta-analysis to determine the effect sizes of behavioral factors in development. Lag sequential analysis has become popular for tracking the stream of behavior during observations. Group designs are increasingly being used. Model construction research involves latent growth modeling to determine developmental trajectories and structural equation modeling. Rasch analysis is now widely used to show sequentiality within a developmental trajectory.
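To illustrate one of the steps mentioned above, the sketch below pools effect sizes from several studies with a fixed-effect, inverse-variance weighting, which is one common way such estimates are combined. The effect sizes and variances are invented for the example; applied work on single-subject designs often uses specialized effect-size metrics and random-effects models instead.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of study-level effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, standard_error

# Hypothetical standardized effect sizes from three studies of a behavioral factor
effects = [0.42, 0.61, 0.35]
variances = [0.04, 0.06, 0.05]
estimate, se = pooled_effect(effects, variances)
print(f"pooled effect = {estimate:.3f}, SE = {se:.3f}")
```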
A recent methodological change in behavior analytic theory is the use of observational methods combined with lag sequential analysis to determine reinforcement in the natural setting.
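As a rough illustration of what lag sequential analysis computes, the sketch below tallies how often one coded behavior follows another at a lag of one event and converts the counts into conditional probabilities. The behavior codes and observation stream are invented; real analyses add tests of whether each transition occurs more often than chance (for example, adjusted residuals or Yule's Q).

```python
from collections import Counter, defaultdict

def lag1_transition_probs(stream):
    """Tally lag-1 transitions in a coded behavior stream and
    normalize each row into conditional probabilities P(next | given)."""
    counts = defaultdict(Counter)
    for given, nxt in zip(stream, stream[1:]):
        counts[given][nxt] += 1
    return {
        given: {nxt: n / sum(row.values()) for nxt, n in row.items()}
        for given, row in counts.items()
    }

# Hypothetical stream of coded parent-child events:
# R = child request, A = parent attention, W = child whining, I = parent ignores
observed = ["R", "A", "R", "A", "W", "I", "W", "A", "R", "A", "W", "I"]
for given, row in lag1_transition_probs(observed).items():
    print(given, "->", row)
```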
Quantitative behavioral development
The model of hierarchical complexity is a quantitative analytic theory of development. This model offers an explanation for why certain tasks are acquired earlier than others through developmental sequences and gives an explanation of the biological, cultural, organizational, and individual principles of performance. It quantifies the order of hierarchical complexity of a task based on explicit and mathematical measurements of behavior.
Research
Contingencies, uncertainty, and attachment
The behavioral model of attachment recognizes the role of uncertainty in an infant and the child's limited communication abilities. Contingent relationships are instrumental in the behavior analytic theory, because much emphasis is put on those actions that produce parents' responses.
The importance of contingency appears to be highlighted in other developmental theories, but the behavioral model recognizes that contingency must be determined by two factors: the efficiency of the action and that efficiency compared to other tasks that the infant might perform at that point. Both infants and adults function in their environments by understanding these contingent relationships. Research has shown that contingent relationships lead to emotionally satisfying relationships.
Since 1961, behavioral research has shown that there is a relationship between parents' responses to separation from the infant and the outcomes of a "stranger situation". In a study done in 2000, six infants participated in a classic reversal-design study (see single-subject research) that assessed the rate at which infants approached a stranger. If attention was contingent on stranger avoidance, the infant avoided the stranger. If attention was contingent on infant approach, the infant approached the stranger.
Recent meta-analytic studies of this contingency-based model of attachment found a moderate effect of contingency on attachment, which increased to a large effect size when the quality of reinforcement was considered. Other research on contingency highlights its effect on the development of both pro-social and anti-social behavior. These effects can also be furthered by training parents to become more sensitive to children's behaviors. Meta-analytic research supports the notion that attachment is operant-based learning.
An infant's sensitivity to contingencies can be affected by biological factors and environmental changes. Studies show that being placed in erratic environments with few contingencies may cause a child to have conduct problems and may lead to depression (see Developmental depression below). Research continues to look at the effects of learning-based attachment on moral development. Some studies have shown that erratic use of contingencies by parents early in life can produce devastating long-term effects for the child.
Motor development
Since Watson developed the theory of behaviorism, behavior analysts have held that motor development represents a conditioning process. This view holds that crawling, climbing, and walking displayed by infants represent the conditioning of biologically innate reflexes. In this case, the stepping reflex is the respondent behavior, and these reflexes are environmentally conditioned through experience and practice. This position was criticized by maturation theorists, who believed that the stepping reflex of infants actually disappeared over time and was not "continuous". By working with a slightly different theoretical model, while still using operant conditioning, Esther Thelen was able to show that children's stepping reflex disappears as a function of increased physical weight. However, when infants were placed in water, the same stepping reflex returned. This offered behavior analysts a model for the continuity of the stepping reflex and for progressive stimulation.
Infants deprived of physical stimulation or the opportunity to respond were found to have delayed motor development. Under conditions of extra stimulation, the motor behavior of these children rapidly improved. Some research has shown that the use of a treadmill can be beneficial to children with motor delays including Down syndrome and cerebral palsy. Research on opportunity to respond and the building of motor development continues today.
The behavioral development model of motor activity has produced a number of techniques, including operant-based biofeedback to facilitate development with success. Some of the stimulation methods such as operant-based biofeedback have been applied as treatment to children with cerebral palsy and even spinal injury successfully. Brucker's group demonstrated that specific operant conditioning-based biofeedback procedures can be effective in establishing more efficient use of remaining and surviving central nervous system cells after injury or after birth complications (like cerebral palsy). While such methods are not a cure and gains tend to be in the moderate range, they do show ability to enhance functioning.
Imitation and verbal behavior
Behaviorists have studied verbal behavior since the 1920s. E.A. Esper (1920) studied associative models of language, which have evolved into the current language interventions of matrix training and recombinative generalization. Skinner (1957) created a comprehensive taxonomy of language for speakers, and Zettle and Hayes (1989), along with Don Baer, provided a developmental analysis of rule-governed behavior for the listener. According to Skinner, language learning depends on environmental variables, which can be mastered by a child through imitation, practice, and selective reinforcement, including automatic reinforcement.
B.F. Skinner was one of the first psychologists to take the role of imitation in verbal behavior as a serious mechanism for acquisition. He identified echoic behavior as one of his basic verbal operants, postulating that verbal behavior was learned by an infant from a verbal community. Skinner's account takes verbal behavior beyond an intra-individual process to an inter-individual process. He defined verbal behavior as "behavior reinforced through the mediation of others". Noam Chomsky refuted Skinner's assumptions.
In the behavioral model, the child is prepared to contact the contingencies to "join" the listener and speaker. At the very core, verbal episodes involve the rotation of the roles as speaker and listener. These kinds of exchanges are called conversational units and have been the focus of research at Columbia's communication disorders department.
Conversational units are a measure of socialization because they consist of verbal interactions in which the exchange is reinforced by both the speaker and the listener. H.C. Chu (1998) demonstrated contextual conditions for inducing and expanding conversational units between children with autism and non-disabled siblings in two separate experiments. The acquisition of conversational units and the expansion of verbal behavior decreased incidences of physical "aggression" in the Chu study, and several other reviews suggest similar effects. The joining of the listener and speaker progresses from listener-speaker rotations with others as a likely precedent for the three major components of speaker-as-own-listener: say-do correspondence, self-talk conversational units, and naming.
Development of self
Robert Kohlenberg and Mavis Tsai (1991) created a behavior analytic model accounting for the development of one's "self". Their model proposes that verbal processes can be used to form a stable sense of who we are through behavioral processes such as stimulus control. Kohlenberg and Tsai developed functional analytic psychotherapy to treat psychopathological disorders arising from the frequent invalidation of a child's statements, such that an "I" does not emerge.
Other behavior analytic models for personality disorders exist. They trace out the complex biological–environmental interaction for the development of avoidant and borderline personality disorders. They focus on Reinforcement sensitivity theory, which states that some individuals are more or less sensitive to reinforcement than others. Nelson-Grey views problematic response classes as being maintained by reinforcing consequences or through rule governance.
Socialization
Over the last few decades, studies have supported the idea that contingent use of reinforcement and punishment over extended periods of time leads to the development of both pro-social and anti-social behaviors. However, research has shown that reinforcement is more effective than punishment when teaching behavior to a child. It has also been shown that modeling is more effective than "preaching" in developing pro-social behavior in children. Rewards have also been closely studied in relation to the development of social behaviors in children: they have been implicated as a successful tactic in building self-control, empathy, and cooperation, while sharing has been strongly linked with reinforcement.
The development of social skills in children is largely affected in the classroom setting by both teachers and peers. Reinforcement and punishment play major roles here as well. Peers frequently reinforce each other's behavior. One of the major areas that teachers and peers influence is sex-typed behavior, while peers also largely influence modes of initiating interaction and aggression. Peers are more likely to punish cross-gender play while at the same time reinforcing play specific to gender. Some studies found that teachers were more likely to reinforce dependent behavior in females.
Behavioral principles have also been researched in emerging peer groups, focusing on status. Research shows that it takes different social skills to enter groups than it does to maintain or build one's status in groups. Research also suggests that neglected children are the least interactive and aversive, yet remain relatively unknown in groups.
Children with social problems do see an improvement in social skills after behavior therapy and behavior modification (see applied behavior analysis). Modeling has been successfully used to increase participation by shy and withdrawn children. Shaping of socially desirable behavior through positive reinforcement seems to have some of the most positive effects in children experiencing social problems.
Anti-social behavior
In the development of anti-social behavior, etiological models for anti-social behavior show considerable correlation with negative reinforcement and response matching (see matching law). Escape conditioning, through the use of coercive behavior, has a powerful effect on the development and use of future anti-social tactics. The use of anti-social tactics during conflicts can be negatively reinforced and eventually seen as functional for the child in moment to moment interactions.
Anti-social behaviors will also develop in children when imitation is reinforced by social approval. If approval is not given by teachers or parents, it can often be given by peers. An example of this is swearing. Imitating a parent, brother, peer, or a character on TV, a child may engage in the anti-social behavior of swearing. Upon saying it, they may be reinforced by those around them, which leads to an increase in the anti-social behavior.
The role of stimulus control has also been extensively explored in the development of anti-social behavior.
Recent behavioral focus in the study of anti-social behavior has been on rule-governed behavior. While correspondence between saying and doing has long been an interest for behavior analysts in normal development and typical socialization, recent conceptualizations have been built around families that actively train children in anti-social rules, as well as children who fail to develop rule control.
Developmental depression with origins in childhood
The behavioral theory of depression was outlined by Charles Ferster. A later revision was provided by Peter Lewinsohn and Hyman Hops. Hops continued the work on the role of negative reinforcement in maintaining depression with Anthony Biglan. Additional factors such as the role of loss of contingent relations through extinction and punishment were taken from the early work of Martin Seligman. The most recent summary and conceptual revisions of the behavioral model were provided by Jonathan Kanter. The standard model is that depression has multiple paths to develop. It can be generated by five basic processes, including: lack or loss of positive reinforcement, direct positive or negative reinforcement for depressive behavior, lack of rule-governed behavior or too much rule-governed behavior, and/or too much environmental punishment. For children, some of these variables could set the pattern for lifelong problems. For example, a child whose depressive behavior functions for negative reinforcement by stopping fighting between parents could develop a lifelong pattern of depressive behavior in the case of conflicts. Two paths that are particularly important are (1) lack or loss of reinforcement because of missing necessary skills at a developmental cusp point or (2) the failure to develop adequate rule-governed behavior. For the latter, the child could develop a pattern of always choosing the short-term small immediate reward (i.e., escaping studying for a test) at the expense of the long-term larger reward (passing courses in middle school). The treatment approach that emerged from this research is called behavioral activation.
In addition, use of positive reinforcement has been shown to improve symptoms of depression in children. Reinforcement has also been shown to improve the self-concept in children with depression comorbid with learning difficulties. Rawson and Tabb (1993) used reinforcement with 99 students (90 males and 9 females) aged from 8 to 12 with behavior disorders in a residential treatment program and showed significant reduction in depression symptoms compared to the control group.
Cognitive behavior
As children get older, direct control of contingencies is modified by the presence of rule-governed behavior. Rules serve as an establishing operation and set a motivational stage as well as a discriminative stage for behavior. While the size of the effects on intellectual development is less clear, it appears that stimulation does have a facilitative effect on intellectual ability. However, it is important not to confuse the enhancing effect with the initial causal effect. Some data exist to show that children with developmental delays take more learning trials to acquire material.
Learned units and developmental retardation
Behavior analysts have spent considerable time measuring learning in both the classroom and at home. In these settings, the role of a lack of stimulation has often been evidenced in the development of mild and moderate intellectual disability. Recent work has focused on a model of "developmental retardation", an area that emphasizes cumulative environmental effects and their role in developmental delays. To measure these developmental delays, subjects are given the opportunity to respond, defined as the instructional antecedent, and success is signified by the appropriate response and/or fluency in responses. Consequently, the learned unit is identified by the opportunity to respond in addition to the reinforcement given.
One study employed this model by comparing students' time of instruction in affluent schools to time of instruction in lower-income schools. Results showed that lower-income schools provided approximately 15 minutes less instruction than more affluent schools due to disruptions in classroom management and behavior management. Altogether, these disruptions culminated in two years' worth of lost instructional time by grade 10. The goal of behavior analytic research is to provide methods for reducing the overall number of children who fall into the retardation range of development by behavioral engineering.
Hart and Risley (1995, 1999) have completed extensive research on this topic as well. These researchers measured the rates of parent communication with children of the ages of 2–4 years and correlated this information with the IQ scores of the children at age 9. Their analyses revealed that higher parental communication with younger children was positively correlated with higher IQ in older children, even after controlling for race, class, and socio-economic status. Additionally, they concluded that a significant change in IQ scores required intervention with at-risk children for approximately 40 hours per week.
Class formation
The formation of class-like behavior has also been a significant aspect in the behavioral analysis of development. This research has provided multiple explanations of the development and formation of class-like behavior, including primary stimulus generalization, an analysis of abstraction, relational frame theory, stimulus class analysis (sometimes referred to as recombinative generalization), stimulus equivalence, and response class analysis. Multiple processes for class-like formation provide behavior analysts with relatively pragmatic explanations for common issues of novelty and generalization.
Responses are organized based upon the particular form needed to fit the current environmental challenges as well as the functional consequences. An example of large response classes lies in contingency adduction, which is an area that needs much further research, especially with a focus on how large classes of concepts shift. For example, as Piaget observed, individuals at the pre-operational stage tend to have limits in their ability to conserve quantity (Piaget & Szeminska, 1952). While children's training in the development of conservation skills has been generally successful, complications have been noted. Behavior analysts argue that this is largely due to the number of tool skills that need to be developed and integrated. Contingency adduction offers a process by which such skills can be synthesized, which is why it deserves further attention, particularly from early childhood interventionists.
Autism
Ferster (1961) was the first researcher to posit a behavior analytic theory for autism. Ferster's model saw autism as a by-product of social interactions between parent and child. Ferster presented an analysis of how a variety of contingencies of reinforcement between parent and child during early childhood might establish and strengthen a repertoire of behaviors typically seen in children diagnosed with autism. A similar model was proposed by Drash and Tudor (1993), who developed the contingency-shaped or behavioral incompatibility theory of autism. They identified at least six reinforcement paradigms that may contribute to significant deficiencies in verbal behavior typically characteristic of children diagnosed as autistic. They proposed that each of these paradigms may also create a repertoire of avoidance responses that could contribute to the establishment of a repertoire of behavior that would be incompatible with the acquisition of age-appropriate verbal behavior.
More recent models attribute autism to neurological and sensory models that are overly worked and subsequently produce the autistic repertoire. Lovaas and Smith (1989) proposed that children with autism have a mismatch between their nervous systems and the environment, while Bijou and Ghezzi (1999) proposed a behavioral interference theory. However, both the environmental mismatch model and the inference model were recently reviewed, and new evidence shows support for the notion that the development of autistic behaviors are due to escape and avoidance of certain types of sensory stimuli. However, most behavioral models of autism remain largely speculative due to limited research efforts.
Role in education
One of the largest impacts of behavior analysis of child development is its role in the field of education. In 1968, Siegfried Engelmann used operant conditioning techniques in combination with rule learning to produce the direct instruction curriculum. In addition, Fred S. Keller used similar techniques to develop programmed instruction. B.F. Skinner developed a programmed instruction curriculum for teaching handwriting. One of Skinner's students, Ogden Lindsley, developed a standardized semilogarithmic chart, the "Standard Behavior Chart," now the "Standard Celeration Chart," used to record frequencies of behavior and to allow direct visual comparisons of both frequencies and changes in those frequencies (termed "celeration"). The use of this charting tool for analysis of instructional effects or other environmental variables through the direct measurement of learner performance has become known as precision teaching.
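Because the chart just described is semilogarithmic, a constant proportional change in response frequency plots as a straight line. The sketch below reproduces that idea with matplotlib using invented daily timing data; it only approximates the layout of the published Standard Celeration Chart, which fixes specific axis ranges and proportions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical one-minute timings: corrects accelerating about x1.1 per day,
# errors decelerating about x0.9 per day (geometric change -> straight lines on a log axis)
days = np.arange(1, 29)
corrects = 5.0 * 1.10 ** (days - 1)
errors = 4.0 * 0.90 ** (days - 1)

plt.semilogy(days, corrects, "o-", label="corrects per minute")
plt.semilogy(days, errors, "x--", label="errors per minute")
plt.xlabel("Successive calendar days")
plt.ylabel("Count per minute (log scale)")
plt.title("Celeration-style chart (invented data)")
plt.legend()
plt.show()
```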
Behavior analysts with a focus on behavioral development form the basis of a movement called positive behavior support (PBS). PBS has focused on building safe schools.
In education, there are many different kinds of learning that are implemented to improve skills needed for interactions later in life. Examples of this differential learning include social and language skills. According to the NWREL (Northwest Regional Educational Laboratory), too much interaction with technology will hinder a child's social interactions with others due to its potential to become an addiction and subsequently lead to anti-social behavior. In terms of language development, children will start to learn and know about 5–20 different words by 18 months old.
Critiques of behavioral approach and new developments
Behavior analytic theories have been criticized for their focus on explaining the acquisition of relatively simple behavior (i.e., the behavior of nonhuman species, of infants, and of individuals who are intellectually disabled or autistic) rather than of complex behavior (see Commons & Miller). Michael Commons continued behavior analysis's rejection of mentalism and its substitution of a task analysis of the particular skills to be learned. Commons created a behavior analytic account of more complex behavior, in line with more contemporary quantitative behavior analytic models, called the model of hierarchical complexity. He constructed the model of hierarchical complexity of tasks and their corresponding stages of performance using just three main axioms.
In the study of development, recent work has been generated on combining behavior analytic views with dynamical systems theory. The added benefit of this approach is its portrayal of how small changes in behavior, described in terms of principles and mechanisms operating over time, can produce substantial changes in development.
Current research in behavior analysis attempts to extend the patterns learned in childhood and to determine their impact on adult development.
Professional organizations
The Association for Behavior Analysis International has a special interest group for the behavior analysis of child development.
Doctoral level behavior analysts who are psychologists belong to American Psychological Association's division 25: behavior analysis.
The World Association for Behavior Analysis has a certification in behavior therapy. The exam draws questions on behavioral theories of child development as well as behavioral theories of child psychopathology.
Development Stages
The early years of a child's life are most critical to their development.
In understanding child behavior, there are many different ways to analyze it, and development can be broken down into a multiple-stage behavior model. During the earliest years it is important for parents to keep track of each milestone to monitor the baby's health. For each month there are milestones that should be achieved in the social/emotional, language/communication, cognitive, and movement/physical areas. Although all babies are different and might not hit each milestone at exactly the same time, if a child's development strongly differs from the expected milestones, a doctor should be seen to ensure the child's health. A child's environment is crucial to their development; any trauma that a child experiences could potentially have long-term effects on their adult life. Children learn and develop best in strong, nurturing environments in which they are cared for and safe. Development does not end after the infant and toddler stages; as children start to enter school, education plays an important role in social and intellectual development. School allows students to build character and broadens the horizons of developing children. It is important for parents to be aware of each step of their child's development to ensure health and safety.
See also
Applied behavior analysis
Attachment in children
Behaviorism
Behavioral cusp
Child development
Child development stages
Child psychology
Critical period
Early childhood education
Family Process (journal)
Feral child
Functional analysis (psychology)
Pedagogy
Play (activity)
Professional practice of behavior analysis
References
External links
Behavioral Development Bulletin
Journal of Early and Intensive Behavior Intervention
Child Behavioral Development Info
Behaviorism
Anti-social behaviour
Child development | Behavior analysis of child development | [
"Biology"
] | 5,088 | [
"Anti-social behaviour",
"Behavior",
"Human behavior",
"Behaviorism"
] |
16,199,733 | https://en.wikipedia.org/wiki/Liouville%27s%20formula | In mathematics, Liouville's formula, also known as the Abel–Jacobi–Liouville identity, is an equation that expresses the determinant of a square-matrix solution of a first-order system of homogeneous linear differential equations in terms of the sum of the diagonal coefficients of the system. The formula is named after the French mathematician Joseph Liouville. Jacobi's formula provides another representation of the same mathematical relationship.
Liouville's formula is a generalization of Abel's identity and can be used to prove it. Since Liouville's formula relates the different linearly independent solutions of the system of differential equations, it can help to find one solution from the other(s), see the example application below.
Statement of Liouville's formula
Consider the $n$-dimensional first-order homogeneous linear differential equation

$$y' = A(x)\,y$$

on an interval $I$ of the real line, where $A(x)$ for $x \in I$ denotes a square matrix of dimension $n$ with real or complex entries. Let $\Phi$ denote a matrix-valued solution on $I$, meaning that $\Phi$ is the so-called fundamental matrix, a square matrix of dimension $n$ with real or complex entries whose derivative satisfies

$$\Phi'(x) = A(x)\,\Phi(x), \qquad x \in I.$$

Let

$$\operatorname{tr} A(\xi) = \sum_{i=1}^{n} a_{i,i}(\xi)$$

denote the trace of $A(\xi)$, the sum of its diagonal entries. If the trace of $A$ is a continuous function, then the determinant of $\Phi$ satisfies

$$\det \Phi(x) = \det \Phi(x_0)\, \exp\!\left( \int_{x_0}^{x} \operatorname{tr} A(\xi)\, \mathrm{d}\xi \right)$$

for all $x$ and $x_0$ in $I$.
Example application
This example illustrates how Liouville's formula can help to find the general solution of a first-order system of homogeneous linear differential equations. Consider
on the open interval . Assume that the easy solution
is already found. Let
denote another solution, then
is a square-matrix-valued solution of the above differential equation. Since the trace of is zero for all , Liouville's formula implies that the determinant
is actually a constant independent of . Writing down the first component of the differential equation for , we obtain using () that
Therefore, by integration, we see that
involving the natural logarithm and the constant of integration . Solving equation () for and substituting for gives
which is the general solution for . With the special choice and we recover the easy solution we started with, the choice and yields a linearly independent solution. Therefore,
is a so-called fundamental solution of the system.
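Liouville's formula also lends itself to a direct numerical check. The sketch below integrates the fundamental matrix of a small 2×2 system chosen purely for illustration (it is not the system of the example above) and compares det Φ(x) with det Φ(x₀) times the exponential of the integrated trace.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

def A(x):
    # Arbitrary 2x2 coefficient matrix, chosen only to illustrate the formula
    return np.array([[np.sin(x), 1.0],
                     [0.5, np.cos(x)]])

def rhs(x, phi_flat):
    # Matrix ODE Phi' = A(x) Phi, flattened so solve_ivp can handle it
    Phi = phi_flat.reshape(2, 2)
    return (A(x) @ Phi).ravel()

x0, x1 = 0.0, 2.0
Phi0 = np.eye(2)
sol = solve_ivp(rhs, (x0, x1), Phi0.ravel(), rtol=1e-10, atol=1e-12)
Phi1 = sol.y[:, -1].reshape(2, 2)

trace_integral, _ = quad(lambda x: np.trace(A(x)), x0, x1)
print(np.linalg.det(Phi1))                           # left-hand side of Liouville's formula
print(np.linalg.det(Phi0) * np.exp(trace_integral))  # right-hand side; the two should agree closely
```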
Proof of Liouville's formula
We omit the argument for brevity. By the Leibniz formula for determinants, the derivative of the determinant of can be calculated by differentiating one row at a time and taking the sum, i.e.
Since the matrix-valued solution satisfies the equation , we have for every entry of the matrix
or for the entire row
When we subtract from the -th row the linear combination
of all the other rows, then the value of the determinant remains unchanged, hence
for every $i \in \{1, \ldots, n\}$ by the linearity of the determinant with respect to every row. Hence
by () and the definition of the trace. It remains to show that this representation of the derivative implies Liouville's formula.
Fix . Since the trace of is assumed to be a continuous function on , it is bounded on every closed and bounded subinterval of and therefore integrable, hence
is a well defined function. Differentiating both sides, using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, we obtain
due to the derivative in (). Therefore, has to be constant on , because otherwise we would obtain a contradiction to the mean value theorem (applied separately to the real and imaginary part in the complex-valued case). Since , Liouville's formula follows by solving the definition of for .
References
Mathematical identities
Ordinary differential equations
Articles containing proofs | Liouville's formula | [
"Mathematics"
] | 754 | [
"Mathematical problems",
"Articles containing proofs",
"Mathematical identities",
"Mathematical theorems",
"Algebra"
] |
16,200,303 | https://en.wikipedia.org/wiki/Tetrahydroharman | Tetrahydroharman(e), also known as 1-methyl-1,2,3,4-tetrahydro-β-carboline, is a general name for one of two isomers:
(1S)-1-methyl-2,3,4,9-tetrahydro-1H-pyrido[3,4-b]indole
Calligonine ((1R)-1-methyl-2,3,4,9-tetrahydro-1H-pyrido[3,4-b]indole)
Calligonine is a major alkaloid constituent of the roots of Calligonum minimum and the bark of Elaeagnus angustifolia. When taken internally, it has the effect of substantially lowering blood pressure for an extended period of time, similar to reserpine.
References
Landolt-Börnstein
See also
Peganum harmala
Harmala alkaloid
Tryptamine alkaloids
beta-Carbolines | Tetrahydroharman | [
"Chemistry"
] | 212 | [
"Tryptamine alkaloids",
"Alkaloids by chemical classification"
] |
16,200,376 | https://en.wikipedia.org/wiki/Anchor%20plate | An anchor plate, floor plate or wall washer is a large plate or washer connected to a tie rod or bolt. Anchor plates are used on exterior walls of masonry buildings, for structural reinforcement against lateral bowing. Anchor plates are made of cast iron, sometimes wrought iron or steel, and are often made in a decorative style.
They are commonly found in many older cities, towns and villages in Europe; in more recent cities with substantial 18th- and 19th-century brick construction, such as New York, Philadelphia, St. Louis, Cincinnati, and Charleston, South Carolina; and in older earthquake-prone cities such as San Francisco.
One popular style is the star anchor, an anchor plate cast or wrought in the shape of a five-pointed star. Other names and styles of anchor plate include earthquake washer, triangular washer, S-iron, and T-head. In the United Kingdom, pattress plate is the term for circular restraints, tie bar being an alternative term for rectangular restraints.
Definition
According to the Oxford Dictionary of Construction, Surveying and Civil Engineering, an anchor plate "is a plate attached to a component that enables other components to be connected to it."
Although there are many types of anchors or anchorages, according to the Dictionary of Architecture and Construction, an anchor plate specifically is a "wrought-iron clamp, of Flemish origin, on the exterior side of a brick building wall that is connected to the opposite wall by a steel tie-rod to prevent the two walls from spreading apart; these clamps were often in the shape of numerals indicating the year of construction, or letters representing the owner's initials, or were simply fanciful designs."
While most types of anchors are made of only steel, anchor plates might also contain malleable or cast iron. The exterior wall washer is most often made of a cast-iron star or a flat steel plate.
History of use and studies
In Roman technology, wooden tie-beams (or tie rods) were used between arches to negate the outward horizontal forces between them. Iron tie rods would later be used as a device to reinforce arches, vaults, and cupolas constructed across Medieval Europe.
In the modern era, tie-rods are made of iron or steel, and serve to reinforce vaults, arches, and masonry structures in general. Reinforced masonry walls are strengthened through a tie-rod that connects parallel walls at floor level, which creates a horizontal compression state, thereby increasing the wall's shear strength. While the literature on the subject is sparse, some studies have analyzed anchor plates and tie-rods; for example, one study dealt with concrete panels, which, although a thin veneer, may also need anchor plates to help stabilize the wall.
The pressure that an anchor plate provides is constantly stiff. A study found that, as widths exceed , the advantage of having a wider plate decreased, indicating a width threshold for optimal support.
Gallery
See also
Barnstar, a purely decorative device
Tie (cavity wall), used internally within cavity walls
References
External links
Architectural elements
Historic preservation
Structural system
Visual motifs | Anchor plate | [
"Mathematics",
"Technology",
"Engineering"
] | 639 | [
"Structural engineering",
"Visual motifs",
"Building engineering",
"Symbols",
"Structural system",
"Architectural elements",
"Components",
"Architecture"
] |
16,203,229 | https://en.wikipedia.org/wiki/Nesting%20%28computing%29 | In computing science and informatics, nesting is where information is organized in layers, or where objects contain other similar objects. It almost always refers to self-similar or recursive structures in some sense.
Terminology
Nesting can mean:
nested calls:
using several levels of subroutines
recursive calls
nested levels of parentheses in arithmetic expressions
nested blocks of imperative source code such as nested if-clauses, while-clauses, repeat-until clauses etc.
information hiding:
nested function definitions with lexical scope
nested data structures such as records, objects, classes, etc.
nested virtualization, also called recursive virtualization: running a virtual machine inside another virtual machine
In spreadsheets
In a spreadsheet, functions can be nested one into another, making complex formulas. The function wizard of the OpenOffice.org Calc application allows the user to navigate through multiple levels of nesting and to edit (and possibly correct) each one of them separately.
For example:
=IF(SUM(C8:G8)=0,"Y","N")
In this Microsoft Excel formula, the SUM function is nested inside the IF function. First, the formula calculates the sum of the numbers in the cells from C8 to G8. It then decides whether the sum is 0, and it displays the letter Y if the sum is 0, and the letter N if it is not.
Naturally, to allow the mathematical resolution of these chained (or, better, nested) formulas, the inner expressions must be evaluated first; this inside-out order is essential because the results returned by the internal functions are temporarily used as input data for the external ones.
Due to the potential accumulation of parentheses in a single code line, editing and error detection (or debugging) can become somewhat awkward. That is why modern programming environments, as well as spreadsheet programs, highlight in bold type the parenthesis pair corresponding to the current editing position. The (automatic) balancing control of opening and closing parentheses is known as brace match checking.
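As a minimal sketch of such brace match checking, the following Python function (its name and the example string are illustrative, not taken from any particular editor or spreadsheet program) uses a stack to verify that every closing bracket matches the most recently opened one:

def braces_balanced(text):
    """Return True if (), [] and {} are properly nested in text."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in text:
        if ch in '([{':
            stack.append(ch)          # remember every opening bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False          # closing bracket without a matching opener
    return not stack                  # balanced only if nothing is left open

# The nested spreadsheet formula shown above is balanced:
print(braces_balanced('=IF(SUM(C8:G8)=0,"Y","N")'))  # prints True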
In programming
Control structures
In structured programming languages, nesting is related to the enclosing of control structures one into another, usually indicated through different indentation levels within the source code, as it is shown in this simple BASIC function:
function LookupCode(sCode as string) as integer
  dim iReturnValue as integer
  dim sLine, sPath as string
  sPath="C:\Test.dsv"
  if FileExists(sPath) then
    open sPath for input as #1
    do while not EOF(1)
      line input #1, sLine
      if sCode=left(sLine, 3) then
        'Action(s) to be carried out
      End if
    loop
    close #1
  End if
  LookupCode=iReturnValue
end function
In this small and simple example, the conditional block “if... then... end if” is nested inside the “do while... loop” one.
Some languages such as Pascal and Ada have no restrictions on declarations depending on the nesting level, allowing precisely nested subprograms or even nested packages (Ada). Here is an example of both (simplified from a real case):
-- Getting rid of the global variables issue (cannot be used in parallel)
-- from a set of old sources, without the need to change that code's
-- logic or structure.
--
procedure Nesting_example_1 is
  type Buffer_type is array(Integer range <>) of Integer;
  procedure Decompress(
    compressed  : in  Buffer_type;
    decompressed: out Buffer_type
  )
  is
    -- Here are the legacy sources, translated:
    package X_Globals is
      index_in, index_out: Integer;
      -- *** ^ These variables are local to Decompress.
      -- *** Now Decompress is task-safe.
    end X_Globals;
    -- Methods 1,2,3,... (specifications)
    package X_Method_1 is
      procedure Decompress_1;
    end X_Method_1;
    -- Methods 1,2,3,... (code)
    package body X_Method_1 is
      use X_Globals;
      procedure Decompress_1 is
      begin
        index_in:= compressed'First;
        -- Here, the decompression code, method 1
      end Decompress_1;
    end X_Method_1;
    -- End of the legacy sources
  begin
    X_Method_1.Decompress_1;
  end Decompress;
  test_in, test_out: Buffer_type(1..10_000);
begin
  Decompress(test_in, test_out);
end Nesting_example_1;
Data structures
Nested data structures are also commonly encountered in programming.
Lisp
In functional programming languages such as Lisp, a list data structure exists, as does a simpler atom data structure.
Simple lists hold only atoms.
( A T O M S )
The atoms in the list are A, T, O, M, and S.
Nested lists hold both atoms and other lists.
( ( ( N E S T E D ) L I S T S ) ( C A N ) ( B E ) U N N E C E S S A R I L Y ( C O M P L E X ) )
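For comparison, a brief sketch of nested data structures in Python (the field names and values are purely illustrative): dictionaries and lists contain further dictionaries and lists, and reaching a deeply nested element requires one key or index per nesting level.

book = {
    "title": "Example",
    "authors": [
        {"name": "A. Writer", "affiliations": ["Example University"]},
        {"name": "B. Coder", "affiliations": []},
    ],
    "chapters": [
        {"number": 1, "sections": ["Intro", "Background"]},
        {"number": 2, "sections": ["Methods"]},
    ],
}

# One key or index per nesting level:
print(book["authors"][0]["affiliations"][0])  # prints: Example University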
See also
Flow control
For loop
Pseudocode
Structured programming
References
Computer data
Computer programming | Nesting (computing) | [
"Technology",
"Engineering"
] | 1,133 | [
"Computer programming",
"Computer data",
"Software engineering",
"Data",
"Computers"
] |
16,203,913 | https://en.wikipedia.org/wiki/Petroleum%20trap | In petroleum geology, a trap is a geological structure affecting the reservoir rock and caprock of a petroleum system allowing the accumulation of hydrocarbons in a reservoir. Traps can be of two types: stratigraphic or structural. Structural traps are the most important type of trap as they represent the majority of the world's discovered petroleum resources.
Structural traps
A structural trap is a type of geological trap that forms as a result of changes in the structure of the subsurface, due to tectonic, diapiric, gravitational, and compactional processes.
Anticlinal trap
An anticline is an area of the subsurface where the strata have been pushed into forming a domed shape. If there is a layer of impermeable rock present in this dome shape, then water-insoluble hydrocarbons can accumulate at the crest until the anticline is filled to the spill point (the highest point where hydrocarbons can escape the anticline). This type of trap is by far the most significant to the hydrocarbon industry. Anticline traps are usually long oval domes of land that can often be seen by looking at a geological map or by flying over the land.
Fault trap
A fault trap is formed by the movement of permeable and impermeable layers of rock along a fault plane. The permeable reservoir rock faults such that it is adjacent to an impermeable rock, preventing hydrocarbons from further migration. In some cases, there can be an impermeable substance along the fault surface (such as clay) that also acts to prevent migration. This is known as clay smear.
Stratigraphic trap
In a stratigraphic trap, the geometry allowing the accumulation of hydrocarbons is of sedimentary origin and has not undergone any tectonic deformation. Such traps can be found in clinoforms, in a pinching-out sedimentary structure, under an unconformity or in a structure created by the creep of an evaporite.
Salt dome trap
In a salt dome trap, masses of salt are pushed up through clastic rocks due to their greater buoyancy, eventually breaking through and rising towards the surface. This salt mass resembles an impermeable dome, and when it crosses a layer of permeable rock, in which hydrocarbons are migrating, it blocks the pathway in much the same manner as a fault trap. This is one of the reasons why there is significant focus on subsurface salt imaging, despite the many technical challenges that accompany it.
Pinch-out trap
Pinch-out traps are stratigraphic traps in which the petroleum reservoir thins around impermeable rock strata and eventually 'pinches out' with impermeable rock strata on either side, creating a trap.
Hybrid trap
Hybrid traps are the combination of two types of traps. In the case of tilted blocks, the initial reservoir geometry is the one of a fault-controlled structural trap, but the caprock is generally made by the draping sedimentation of mudstones during the oceanisation process.
See also
Petroleum reservoir
Structural geology
References
Petroleum geology | Petroleum trap | [
"Chemistry"
] | 639 | [
"Petroleum",
"Petroleum geology"
] |
5,601,086 | https://en.wikipedia.org/wiki/Glycophorin | A glycophorin is a sialoglycoprotein of the membrane of a red blood cell. It is a membrane-spanning protein and carries sugar molecules. It is heavily glycosylated (60%). Glycophorins are rich in sialic acid, which gives the red blood cells a very hydrophilic-charged coat. This enables them to circulate without adhering to other cells or vessel walls.
A particular mutation in Glycophorins is thought to produce a 40% reduction in risk of severe malaria.
Identification
After separation of red cell membranes by SDS-polyacrylamide gel electrophoresis and staining with periodic acid-Schiff staining (PAS), four glycophorins have been identified. These have been named glycophorin A, B, C, and D in order of the quantity present in the membrane, glycophorin A being the most and glycophorin D the least common. A fifth (glycophorin E) has been identified within the human genome but cannot easily be detected on routine gel staining. In total, the glycophorins constitute ~2% of the total erythrocyte membrane protein mass. These proteins are also known under different nomenclatures but they are probably best known as the glycophorins.
Family members
The following four human genes encode glycophorin proteins:
Glycophorin A
Glycophorin B
Glycophorin C
Glycophorin E
Glycophorin D is now known to be a variant of Glycophorin C.
References
External links
Glycoproteins
Single-pass transmembrane proteins | Glycophorin | [
"Chemistry"
] | 369 | [
"Glycoproteins",
"Glycobiology"
] |
5,601,258 | https://en.wikipedia.org/wiki/Chelsea%20filter | In gemmology, a Chelsea filter is a dichromatic optical filter used for identifying coloured stones.
History
The Chelsea filter was originally devised in 1934 by Anderson and Payne of the Gem Testing Laboratory of the London Chamber of Commerce & Industry. The filter was devised with the collaboration of gemmology students of the Chelsea College of Science and Technology, where Basil Anderson was an instructor for the Gemmological Association of Great Britain. Since this filter allows transmission of both deep-red wavelengths around 690 nanometres and yellow-green wavelengths around 570 nanometres, which match emerald's emission and absorption characteristics, it was initially recommended to assist the discrimination between natural emerald and its simulants such as green glass, tourmaline, peridot, etc. This discrimination is possible because chromium-containing, iron- and vanadium-free emeralds emit a red fluorescence when illuminated by white light that also has a content of ultraviolet wavelengths.
Synthetic emeralds were commercially introduced around 1940. These produce the same pink-red response as some emeralds through the Chelsea filter. Although the filter is therefore unable to predictably discriminate between natural and synthetic emerald, it has subsequently been found capable of distinguishing aquamarine and blue topaz from their blue synthetic spinel simulants, because, unlike the natural gemstones, blue cobalt-containing synthetic spinels emit a red fluorescence under white light.
Chelsea Colour Filter is a UK Trademark held by The Gemmological Association of Great Britain (UK Trademark Registration No. 1473951).
Use
Hold the filter an inch or two from the eye. Light the stone with a strong incandescent light bulb or torch, not an LED. The stone may appear to change colour. The filter must be held near the eye, but there is no need to hold it close to the stone; even items in showcases can be examined, provided they are lit by strong lights.
Chelsea Filters were also used to help separate aquamarine and natural zircon from synthetic flame-fusion spinel (used extensively in "birthstone" jewelry), as both of the former absorb the red portion of the spectrum and the synthetic spinel did not.
See also
Infrared filter
Optical filter
References
External links
Chelsea Filter - Gemstone Buzz uses, procedure & precautions.
Instructions for Chelsea Filter
MineralLab & other type of filters
Gemology
Optical filters | Chelsea filter | [
"Chemistry"
] | 481 | [
"Optical filters",
"Filters"
] |
5,601,322 | https://en.wikipedia.org/wiki/High-performance%20technical%20computing | High-performance technical computing (HPTC) is the application of high performance computing (HPC) to technical, as opposed to business or scientific, problems (although the lines between the various disciplines are necessarily vague). HPTC often refers to the application of HPC to engineering problems and includes computational fluid dynamics, simulation, modeling, and seismic tomography (particularly in the petrochemical industry).
See also
Supercomputer
External links
Top 500 supercomputers
Parallel computing | High-performance technical computing | [
"Technology"
] | 99 | [
"Computing stubs"
] |
5,602,379 | https://en.wikipedia.org/wiki/Intrusion%20tolerance | Intrusion tolerance is a fault-tolerant design approach to defending information systems against malicious attacks. In that sense, it is also a computer security approach. Abandoning the conventional aim of preventing all intrusions, intrusion tolerance instead calls for triggering mechanisms that prevent intrusions from leading to a system security failure.
Distributed computing
In distributed computing there are two major variants of intrusion tolerance mechanisms: mechanisms based on redundancy, such as Byzantine fault tolerance, and mechanisms based on intrusion detection (as implemented in intrusion detection systems) and intrusion reaction.
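As a toy illustration of the redundancy-based approach (a sketch only; actual Byzantine fault tolerance protocols such as pBFT are far more involved, requiring 3f+1 replicas to tolerate f faulty ones and handling replicas that give different answers to different clients), a simple majority vote over replicated servers can mask the answers of a minority of compromised replicas:

from collections import Counter

def majority_result(replica_answers):
    # Return the answer reported by a strict majority of replicas, or None.
    if not replica_answers:
        return None
    answer, votes = Counter(replica_answers).most_common(1)[0]
    return answer if votes > len(replica_answers) // 2 else None

# Two honest replicas outvote one compromised replica:
print(majority_result(["balance=100", "balance=100", "balance=999999"]))  # balance=100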
Intrusion-tolerant server architectures
Intrusion-tolerance has started to influence the design of server architectures in academic institutions, and industry. Examples of such server architectures include KARMA, Splunk IT Service Intelligence (ITSI), project ITUA, and the practical Byzantine Fault Tolerance (pBFT) model.
See also
Intrusion detection system evasion techniques
References
Fault tolerance
Cybersecurity engineering | Intrusion tolerance | [
"Technology",
"Engineering"
] | 187 | [
"Cybersecurity engineering",
"Computer engineering",
"Reliability engineering",
"Computer networks engineering",
"Fault tolerance"
] |
5,602,902 | https://en.wikipedia.org/wiki/Catenin%20beta-1 | Catenin beta-1, also known as β-catenin (beta-catenin), is a protein that in humans is encoded by the CTNNB1 gene.
β-Catenin is a dual function protein, involved in regulation and coordination of cell–cell adhesion and gene transcription. In humans, the CTNNB1 protein is encoded by the CTNNB1 gene. In Drosophila, the homologous protein is called armadillo. β-catenin is a subunit of the cadherin protein complex and acts as an intracellular signal transducer in the Wnt signaling pathway. It is a member of the catenin protein family and homologous to γ-catenin, also known as plakoglobin. β-Catenin is widely expressed in many tissues. In cardiac muscle, β-catenin localizes to adherens junctions in intercalated disc structures, which are critical for electrical and mechanical coupling between adjacent cardiomyocytes.
Mutations and overexpression of β-catenin are associated with many cancers, including hepatocellular carcinoma, colorectal carcinoma, lung cancer, malignant breast tumors, ovarian and endometrial cancer. Alterations in the localization and expression levels of β-catenin have been associated with various forms of heart disease, including dilated cardiomyopathy. β-Catenin is regulated and destroyed by the beta-catenin destruction complex, and in particular by the adenomatous polyposis coli (APC) protein, encoded by the tumour-suppressing APC gene. Therefore, genetic mutation of the APC gene is also strongly linked to cancers, and in particular colorectal cancer resulting from familial adenomatous polyposis (FAP).
Discovery
β-Catenin was initially discovered in the early 1990s as a component of a mammalian cell adhesion complex: a protein responsible for cytoplasmatic anchoring of cadherins. But very soon, it was realized that the Drosophila protein armadillo – implicated in mediating the morphogenic effects of Wingless/Wnt – is homologous to the mammalian β-catenin, not just in structure but also in function. Thus, β-catenin became one of the first examples of moonlighting: a protein performing more than one radically different cellular function.
Structure
Protein structure
The core of β-catenin consists of several very characteristic repeats, each approximately 40 amino acids long. Termed armadillo repeats, all these elements fold together into a single, rigid protein domain with an elongated shape – called armadillo (ARM) domain. An average armadillo repeat is composed of three alpha helices. The first repeat of β-catenin (near the N-terminus) is slightly different from the others – as it has an elongated helix with a kink, formed by the fusion of helices 1 and 2. Due to the complex shape of individual repeats, the whole ARM domain is not a straight rod: it possesses a slight curvature, so that an outer (convex) and an inner (concave) surface is formed. This inner surface serves as a ligand-binding site for the various interaction partners of the ARM domains.
The segments N-terminal and far C-terminal to the ARM domain do not adopt any structure in solution by themselves. Yet these intrinsically disordered regions play a crucial role in β-catenin function. The N-terminal disordered region contains a conserved short linear motif responsible for binding of TrCP1 (also known as β-TrCP) E3 ubiquitin ligase – but only when it is phosphorylated. Degradation of β-catenin is thus mediated by this N-terminal segment. The C-terminal region, on the other hand, is a strong transactivator when recruited onto DNA. This segment is not fully disordered: part of the C-terminal extension forms a stable helix that packs against the ARM domain, but may also engage separate binding partners. This small structural element (HelixC) caps the C-terminal end of the ARM domain, shielding its hydrophobic residues. HelixC is not necessary for β-catenin to function in cell–cell adhesion. On the other hand, it is required for Wnt signaling: possibly to recruit various coactivators, such as 14-3-3zeta. Yet its exact partners among the general transcription complexes are still incompletely understood, and they likely involve tissue-specific players. Notably, the C-terminal segment of β-catenin can mimic the effects of the entire Wnt pathway if artificially fused to the DNA binding domain of LEF1 transcription factor.
Plakoglobin (also called γ-catenin) has a strikingly similar architecture to that of β-catenin. Not only their ARM domains resemble each other in both architecture and ligand binding capacity, but the N-terminal β-TrCP-binding motif is also conserved in plakoglobin, implying common ancestry and shared regulation with β-catenin. However, plakoglobin is a very weak transactivator when bound to DNA – this is probably caused by the divergence of their C-terminal sequences (plakoglobin appears to lack the transactivator motifs, and thus inhibits the Wnt pathway target genes instead of activating them).
Partners binding to the armadillo domain
As sketched above, the ARM domain of β-catenin acts as a platform to which specific linear motifs may bind. Located in structurally diverse partners, the β-catenin binding motifs are typically disordered on their own, and typically adopt a rigid structure upon ARM domain engagement – as seen for short linear motifs. However, β-catenin interacting motifs also have a number of peculiar characteristics. First, they might reach or even surpass the length of 30 amino acids in length, and contact the ARM domain on an excessively large surface area. Another unusual feature of these motifs is their frequently high degree of phosphorylation. Such Ser/Thr phosphorylation events greatly enhance the binding of many β-catenin associating motifs to the ARM domain.
The structure of β-catenin in complex with the catenin binding domain of the transcriptional transactivation partner TCF provided the initial structural roadmap of how many binding partners of β-catenin may form interactions. This structure demonstrated how the otherwise disordered N-terminus of TCF adapted what appeared to be a rigid conformation, with the binding motif spanning many beta-catenin repeats. Relatively strong charged interaction "hot spots" were defined (predicted, and later verified, to be conserved for the β-catenin/E-cadherin interaction), as well as hydrophobic regions deemed important in the overall mode of binding and as potential therapeutic small molecule inhibitor targets against certain cancer forms. Furthermore, following studies demonstrated another peculiar characteristic, plasticity in the binding of the TCF N-terminus to beta-catenin.
Similarly, we find the familiar E-cadherin, whose cytoplasmatic tail contacts the ARM domain in the same canonical fashion. The scaffold protein axin (two closely related paralogs, axin 1 and axin 2) contains a similar interaction motif on its long, disordered middle segment. Although one molecule of axin only contains a single β-catenin recruitment motif, its partner the adenomatous polyposis coli (APC) protein contains 11 such motifs in tandem arrangement per protomer, thus capable to interact with several β-catenin molecules at once. Since the surface of the ARM domain can typically accommodate only one peptide motif at any given time, all these proteins compete for the same cellular pool of β-catenin molecules. This competition is the key to understand how the Wnt signaling pathway works.
However, this "main" binding site on the ARM domain β-catenin is by no means the only one. The first helices of the ARM domain form an additional, special protein-protein interaction pocket: This can accommodate a helix-forming linear motif found in the coactivator BCL9 (or the closely related BCL9L) – an important protein involved in Wnt signaling. Although the precise details are much less clear, it appears that the same site is used by alpha-catenin when β-catenin is localized to the adherens junctions. Because this pocket is distinct from the ARM domain's "main" binding site, there is no competition between alpha-catenin and E-cadherin or between TCF1 and BCL9, respectively. On the other hand, BCL9 and BCL9L must compete with α-catenin to access β-catenin molecules.
Function
Regulation of degradation through phosphorylation
The cellular level of β-catenin is mostly controlled by its ubiquitination and proteosomal degradation. The E3 ubiquitin ligase TrCP1 (also known as β-TrCP) can recognize β-catenin as its substrate through a short linear motif on the disordered N-terminus. However, this motif (Asp-Ser-Gly-Ile-His-Ser) of β-catenin needs to be phosphorylated on the two serines in order to be capable to bind β-TrCP. Phosphorylation of the motif is performed by Glycogen Synthase Kinase 3 alpha and beta (GSK3α and GSK3β). GSK3s are constitutively active enzymes implicated in several important regulatory processes. There is one requirement, though: substrates of GSK3 need to be pre-phosphorylated four amino acids downstream (C-terminally) of the actual target site. Thus it also requires a "priming kinase" for its activities. In the case of β-catenin, the most important priming kinase is Casein Kinase I (CKI). Once a serine-threonine rich substrate has been "primed", GSK3 can "walk" across it from C-terminal to N-terminal direction, phosphorylating every 4th serine or threonine residues in a row. This process will result in dual phosphorylation of the aforementioned β-TrCP recognition motif as well.
The beta-catenin destruction complex
For GSK3 to be a highly effective kinase on a substrate, pre-phosphorylation is not enough. There is one additional requirement: Similar to the mitogen-activated protein kinases (MAPKs), substrates need to associate with this enzyme through high-affinity docking motifs. β-Catenin contains no such motifs, but a special protein does: axin. What is more, its GSK3 docking motif is directly adjacent to a β-catenin binding motif. This way, axin acts as a true scaffold protein, bringing an enzyme (GSK3) together with its substrate (β-catenin) into close physical proximity.
But even axin does not act alone. Through its N-terminal regulator of G-protein signaling (RGS) domain, it recruits the adenomatous polyposis coli (APC) protein. APC is like a huge "Christmas tree": with a multitude of β-catenin binding motifs (one APC molecule alone possesses 11 such motifs ), it may collect as many β-catenin molecules as possible. APC can interact with multiple axin molecules at the same time as it has three SAMP motifs (Ser-Ala-Met-Pro) to bind the RGS domains found in axin. In addition, axin also has the potential to oligomerize through its C-terminal DIX domain. The result is a huge, multimeric protein assembly dedicated to β-catenin phosphorylation. This complex is usually called the beta-catenin destruction complex, although it is distinct from the proteosome machinery actually responsible for β-catenin degradation. It only marks β-catenin molecules for subsequent destruction.
Wnt signaling and the regulation of destruction
In resting cells, axin molecules oligomerize with each other through their C-terminal DIX domains, which have two binding interfaces. Thus they can build linear oligomers or even polymers inside the cytoplasm of cells. DIX domains are unique: the only other proteins known to have a DIX domain are Dishevelled and DIXDC1. (The single Dsh protein of Drosophila corresponds to three paralogous genes, Dvl1, Dvl2 and Dvl3 in mammals.) Dsh associates with the cytoplasmic regions of Frizzled receptors with its PDZ and DEP domains. When a Wnt molecule binds to Frizzled, it induces a poorly known cascade of events that results in the exposure of dishevelled's DIX domain and the creation of a perfect binding site for axin. Axin is then titrated away from its oligomeric assemblies – the β-catenin destruction complex – by Dsh. Once bound to the receptor complex, axin will be rendered incompetent for β-catenin binding and GSK3 activity. Importantly, the cytoplasmic segments of the Frizzled-associated LRP5 and LRP6 proteins contain GSK3 pseudo-substrate sequences (Pro-Pro-Pro-Ser-Pro-x-Ser), appropriately "primed" (pre-phosphorylated) by CKI, as if they were true substrates of GSK3. These false target sites greatly inhibit GSK3 activity in a competitive manner. In this way, receptor-bound axin ceases to mediate the phosphorylation of β-catenin. Since β-catenin is no longer marked for destruction, but continues to be produced, its concentration will increase. Once β-catenin levels rise high enough to saturate all binding sites in the cytoplasm, it will also translocate into the nucleus. Upon engaging the transcription factors LEF1, TCF1, TCF2 or TCF3, β-catenin forces them to disengage their previous partners: Groucho proteins. Unlike the Groucho proteins, which recruit transcriptional repressors (e.g. histone-lysine methyltransferases), β-catenin will bind transcriptional activators, switching on target genes.
Role in cell–cell adhesion
Cell–cell adhesion complexes are essential for the formation of complex animal tissues. β-catenin is part of a protein complex that form adherens junctions. These cell–cell adhesion complexes are necessary for the creation and maintenance of epithelial cell layers and barriers. As a component of the complex, β-catenin can regulate cell growth and adhesion between cells. It may also be responsible for transmitting the contact inhibition signal that causes cells to stop dividing once the epithelial sheet is complete. The E-cadherin – β-catenin – α-catenin complex is weakly associated to actin filaments. Adherens junctions require significant protein dynamics in order to link to the actin cytoskeleton,
thereby enabling mechanotransduction.
An important component of the adherens junctions are the cadherin proteins. Cadherins form the cell–cell junctional structures known as adherens junctions as well as the desmosomes. Cadherins are capable of homophilic interactions through their extracellular cadherin repeat domains, in a Ca2+-dependent manner; this can hold adjacent epithelial cells together. While in the adherens junction, cadherins recruit β-catenin molecules onto their intracellular regions. β-catenin, in turn, associates with another highly dynamic protein, α-catenin, which directly binds to the actin filaments. This is possible because α-catenin and cadherins bind at distinct sites to β-catenin. The β-catenin – α-catenin complex can thus physically form a bridge between cadherins and the actin cytoskeleton. Organization of the cadherin–catenin complex is additionally regulated through phosphorylation and endocytosis of its components.
Roles in development
β-Catenin has a central role in directing several developmental processes, as it can directly bind transcription factors and be regulated by a diffusible extracellular substance: Wnt. It acts upon early embryos to induce entire body regions, as well as individual cells in later stages of development. It also regulates physiological regeneration processes.
Early embryonic patterning
Wnt signaling and β-catenin dependent gene expression plays a critical role during the formation of different body regions in the early embryo. Experimentally modified embryos that do not express this protein will fail to develop mesoderm and initiate gastrulation.
Endomesoderm specification in early embryos also involves the activation of β-catenin-dependent transcriptional activity by the first morphogenetic movements of embryogenesis, through mechanotransduction processes. Because this feature is shared by vertebrate and arthropod bilateria, and by cnidaria, it has been proposed to have been evolutionarily inherited from its possible involvement in the endomesoderm specification of the first metazoa.
During the blastula and gastrula stages, Wnt as well as BMP and FGF pathways will induce the antero-posterior axis formation, regulate the precise placement of the primitive streak (gastrulation and mesoderm formation) as well as the process of neurulation (central nervous system development).
In Xenopus oocytes, β-catenin is initially equally localized to all regions of the egg, but it is targeted for ubiquitination and degradation by the β-catenin destruction complex. Fertilization of the egg causes a rotation of the outer cortical layers, moving clusters of the Frizzled and Dsh proteins closer to the equatorial region. β-catenin will be enriched locally under the influence of Wnt signaling pathway in the cells that inherit this portion of the cytoplasm. It will eventually translocate to the nucleus to bind TCF3 in order to activate several genes that induce dorsal cell characteristics. This signaling results in a region of cells known as the grey crescent, which is a classical organizer of embryonic development. If this region is surgically removed from the embryo, gastrulation does not occur at all. β-Catenin also plays a crucial role in the induction of the blastopore lip, which in turn initiates gastrulation. Inhibition of GSK-3 translation by injection of antisense mRNA may cause a second blastopore and a superfluous body axis to form. A similar effect can result from the overexpression of β-catenin.
Asymmetric cell division
β-catenin has also been implicated in regulation of cell fates through asymmetric cell division in the model organism C. elegans. Similarly to the Xenopus oocytes, this is essentially the result of non-equal distribution of Dsh, Frizzled, axin and APC in the cytoplasm of the mother cell.
Stem cell renewal
One of the most important results of Wnt signaling and the elevated level of β-catenin in certain cell types is the maintenance of pluripotency. The renewal of stem cells in the colon, for instance, is ensured by such accumulation of β-catenin, which can be stimulated by the Wnt pathway. High-frequency peristaltic mechanical strains of the colon are also involved in the β-catenin-dependent maintenance of homeostatic levels of colonic stem cells through processes of mechanotransduction. This feature is pathologically enhanced towards tumorigenic hyperproliferation in healthy cells compressed by the pressure of genetically altered, hyperproliferative tumorous cells.
In other cell types and developmental stages, β-catenin may promote differentiation, especially towards mesodermal cell lineages.
Epithelial-to-mesenchymal transition
β-Catenin also acts as a morphogen in later stages of embryonic development. Together with TGF-β, an important role of β-catenin is to induce a morphogenic change in epithelial cells. It induces them to abandon their tight adhesion and assume a more mobile and loosely associated mesenchymal phenotype. During this process, epithelial cells lose expression of proteins like E-cadherin, Zonula occludens 1 (ZO1), and cytokeratin. At the same time they turn on the expression of vimentin, alpha smooth muscle actin (ACTA2), and fibroblast-specific protein 1 (FSP1). They also produce extracellular matrix components, such as type I collagen and fibronectin. Aberrant activation of the Wnt pathway has been implicated in pathological processes such as fibrosis and cancer. In cardiac muscle development, β-catenin performs a biphasic role. Initially, the activation of Wnt/β-catenin is essential for committing mesenchymal cells to a cardiac lineage; however, in later stages of development, the downregulation of β-catenin is required.
Involvement in cardiac physiology
In cardiac muscle, β-catenin forms a complex with N-cadherin at adherens junctions within intercalated disc structures, which are responsible for electrical and mechanical coupling of adjacent cardiac cells. Studies in a model of adult rat ventricular cardiomyocytes have shown that the appearance and distribution of β-catenin is spatio-temporally regulated during the redifferentiation of these cells in culture. Specifically, β-catenin is part of a distinct complex with N-cadherin and alpha-catenin, which is abundant at adherens junctions in early stages following cardiomyocyte isolation for the reformation of cell–cell contacts. It has been shown that β-catenin forms a complex with emerin in cardiomyocytes at adherens junctions within intercalated discs; and this interaction is dependent on the presence of GSK 3-beta phosphorylation sites on β-catenin. Knocking out emerin significantly altered β-catenin localization and the overall intercalated disc architecture, which resembled a dilated cardiomyopathy phenotype.
In animal models of cardiac disease, functions of β-catenin have been unveiled. In a guinea pig model of aortic stenosis and left ventricular hypertrophy, β-catenin was shown to change subcellular localization from intercalated discs to the cytosol, despite no change in the overall cellular abundance of β-catenin. vinculin showed a similar profile of change. N-cadherin showed no change, and there was no compensatory upregulation of plakoglobin at intercalated discs in the absence of β-catenin. In a hamster model of cardiomyopathy and heart failure, cell–cell adhesions were irregular and disorganized, and expression levels of adherens junction/intercalated disc and nuclear pools of β-catenin were decreased. These data suggest that a loss of β-catenin may play a role in the diseased intercalated discs that have been associated with cardiac muscle hypertrophy and heart failure. In a rat model of myocardial infarction, adenoviral gene transfer of nonphosphorylatable, constitutively-active β-catenin decreased MI size, activated the cell cycle, and reduced the amount of apoptosis in cardiomyocytes and cardiac myofibroblasts. This finding was coordinate with enhanced expression of pro-survival proteins, survivin and Bcl-2, and vascular endothelial growth factor while promoting the differentiation of cardiac fibroblasts into myofibroblasts. These findings suggest that β-catenin can promote the regeneration and healing process following myocardial infarction. In a spontaneously-hypertensive heart failure rat model, investigators detected a shuttling of β-catenin from the intercalated disc/sarcolemma to the nucleus, evidenced by a reduction of β-catenin expression in the membrane protein fraction and an increase in the nuclear fraction. Additionally, they found a weakening in the association between glycogen synthase kinase-3β and β-catenin, which may indicate altered protein stability. Overall, results suggest that an enhanced nuclear localization of β-catenin may be important in the progression of cardiac hypertrophy.
Regarding the mechanistic role of β-catenin in cardiac hypertrophy, transgenic mouse studies have shown somewhat conflicting results regarding whether upregulation of β-catenin is beneficial or detrimental. A recent study using a conditional knockout mouse that either lacked β-catenin altogether or expressed a non-degradable form of β-catenin in cardiomyocytes reconciled a potential reason for these discrepancies. There appears to be strict control over the subcellular localization of β-catenin in cardiac muscle. Mice lacking β-catenin had no overt phenotype in the left ventricular myocardium; however, mice harboring a stabilized form of β-catenin developed dilated cardiomyopathy, suggesting that the temporal regulation of β-catenin by protein degradation mechanisms is critical for normal functioning of β-catenin in cardiac cells. In a mouse model harboring knockout of a desmosomal protein, plakoglobin, implicated in arrhythmogenic right ventricular cardiomyopathy, the stabilization of β-catenin was also enhanced, presumably to compensate for the loss of its plakoglobin homolog. These changes were coordinate with Akt activation and glycogen synthase kinase 3β inhibition, suggesting once again that the abnormal stabilization of β-catenin may be involved in the development of cardiomyopathy. Further studies employing a double knockout of plakoglobin and β-catenin showed that the double knockout developed cardiomyopathy, fibrosis and arrhythmias resulting in sudden cardiac death. Intercalated disc architecture was severely impaired and connexin 43-resident gap junctions were markedly reduced. Electrocardiogram measurements captured spontaneous lethal ventricular arrhythmias in the double transgenic animals, suggesting that the two catenins—β-catenin and plakoglobin—are critical and indispensable for mechanoelectrical coupling in cardiomyocytes.
Clinical significance
Role in depression
Whether or not a given individual's brain can deal effectively with stress, and thus their susceptibility to depression, depends on the β-catenin in each person's brain, according to a study conducted at the Icahn School of Medicine at Mount Sinai and published November 12, 2014, in the journal Nature. Higher β-catenin signaling increases behavioral flexibility, whereas defective β-catenin signaling leads to depression and reduced stress management.
Role in cardiac disease
Altered expression profiles of β-catenin have been associated with dilated cardiomyopathy in humans. Upregulation of β-catenin expression has generally been observed in patients with dilated cardiomyopathy. In one study, patients with end-stage dilated cardiomyopathy showed almost doubled estrogen receptor alpha (ER-alpha) mRNA and protein levels, and the ER-alpha/beta-catenin interaction, present at intercalated discs of control, non-diseased human hearts, was lost, suggesting that the loss of this interaction at the intercalated disc may play a role in the progression of heart failure. Together with BCL9 and PYGO proteins, β-catenin coordinates different aspects of heart development, and mutations in Bcl9 or Pygo in model organisms such as the mouse and zebrafish cause phenotypes that are very similar to human congenital heart disorders.
Involvement in cancer
β-Catenin is a proto-oncogene. Mutations of this gene are commonly found in a variety of cancers: in primary hepatocellular carcinoma, colorectal cancer, ovarian carcinoma, breast cancer, lung cancer and glioblastoma. It has been estimated that approximately 10% of all tissue samples sequenced from all cancers display mutations in the CTNNB1 gene. Most of these mutations cluster on a tiny area of the N-terminal segment of β-catenin: the β-TrCP binding motif. Loss-of-function mutations of this motif essentially make ubiquitinylation and degradation of β-catenin impossible. This causes β-catenin to translocate to the nucleus without any external stimulus and to drive transcription of its target genes continuously. Increased nuclear β-catenin levels have also been noted in basal cell carcinoma (BCC), head and neck squamous cell carcinoma (HNSCC), prostate cancer (CaP), pilomatrixoma (PTR) and medulloblastoma (MDB). These observations may or may not implicate a mutation in the β-catenin gene: other Wnt pathway components can also be faulty.
Similar mutations are also frequently seen in the β-catenin recruiting motifs of APC. Hereditary loss-of-function mutations of APC cause a condition known as familial adenomatous polyposis. Affected individuals develop hundreds of polyps in their large intestine. Most of these polyps are benign in nature, but they have the potential to transform into deadly cancer as time progresses. Somatic mutations of APC in colorectal cancer are also not uncommon. β-Catenin and APC are among the key genes (together with others, like K-Ras and SMAD4) involved in colorectal cancer development. The potential of β-catenin to change the previously epithelial phenotype of affected cells into an invasive, mesenchyme-like type contributes greatly to metastasis formation.
As a therapeutic target
Due to its involvement in cancer development, inhibition of β-catenin continues to receive significant attention. But the targeting of the binding site on its armadillo domain is not the simplest task, due to its extensive and relatively flat surface. However, for an efficient inhibition, binding to smaller "hotspots" of this surface is sufficient. This way, a "stapled" helical peptide derived from the natural β-catenin binding motif found in LEF1 was sufficient for the complete inhibition of β-catenin dependent transcription. Recently, several small-molecule compounds have also been developed to target the same, highly positively charged area of the ARM domain (CGP049090, PKF118-310, PKF115-584 and ZTM000990). In addition, β-catenin levels can also be influenced by targeting upstream components of the Wnt pathway as well as the β-catenin destruction complex. The additional N-terminal binding pocket is also important for Wnt target gene activation (required for BCL9 recruitment). This site of the ARM domain can be pharmacologically targeted by carnosic acid, for example. That "auxiliary" site is another attractive target for drug development. Despite intensive preclinical research, no β-catenin inhibitors are available as therapeutic agents yet. However, its function can be further examined by siRNA knockdown based on an independent validation. Another therapeutic approach for reducing β-catenin nuclear accumulation is via the inhibition of galectin-3. The galectin-3 inhibitor GR-MD-02 is currently undergoing clinical trials in combination with the FDA-approved dose of ipilimumab in patients who have advanced melanoma. The proteins BCL9 and BCL9L have been proposed as therapeutic targets for colorectal cancers which present hyper-activated Wnt signaling, because their deletion does not perturb normal homeostasis but strongly affects metastases behaviour.
Role in fetal alcohol syndrome
β-catenin destabilization by ethanol is one of two known pathways whereby alcohol exposure induces fetal alcohol syndrome (the other is ethanol-induced folate deficiency). Ethanol leads to β-catenin destabilization via a G-protein-dependent pathway, wherein activated Phospholipase Cβ hydrolyzes phosphatidylinositol-(4,5)-bisphosphate to diacylglycerol and inositol-(1,4,5)-trisphosphate. Soluble inositol-(1,4,5)-trisphosphate triggers calcium to be released from the endoplasmic reticulum. This sudden increase in cytoplasmic calcium activates Ca2+/calmodulin-dependent protein kinase (CaMKII). Activated CaMKII destabilizes β-catenin via a poorly characterized mechanism, but which likely involves β-catenin phosphorylation by CaMKII. The β-catenin transcriptional program (which is required for normal neural crest cell development) is thereby suppressed, resulting in premature neural crest cell apoptosis (cell death).
Interactions
β-Catenin has been shown to interact with:
APC,
AXIN1,
Androgen receptor,
CBY1,
CDH1,
CDH2,
CDH3,
CDK5R1,
CHUK,
CTNND1,
CTNNA1,
EGFR,
Emerin
ESR1
FHL2,
GSK3B,
HER2/neu,
HNF4A,
IKK2,
LEF1 including transgenically,
MAGI1,
MUC1,
NR5A1,
PCAF,
PHF17,
Plakoglobin,
PTPN14,
PTPRF,
PTPRK (PTPkappa),
PTPRT (PTPrho),
PTPRU (PCP-2),
PSEN1,
PTK7
RuvB-like 1,
SMAD7,
SMARCA4
SLC9A3R1,
USP9X, and
VE-cadherin.
XIRP1
See also
Catenin
References
Further reading
External links
"A diverse set of proteins modulate the canonical Wnt/β-catenin signaling pathway." at cancer.gov
"The role of β-catenin in signal transduction, cell fate determination and trans-differentiation" at nih.gov
"Researchers Offer First Direct Proof of How Arthritis Destroys Cartilage" at rochester.edu
Signal transduction
Catenins
Oncogenes
Armadillo-repeat-containing proteins | Catenin beta-1 | [
"Chemistry",
"Biology"
] | 7,446 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
5,603,073 | https://en.wikipedia.org/wiki/Pristichampsus | Pristichampsus (from , 'saw' and , 'crocodile') is a non-diagnostic and potentially dubious extinct genus of crocodylian from France and possibly also Kazakhstan that is part of the monotypic Pristichampsidae family. As the type species, Pristichampsus rollinatii, was based on insufficient material when described in 1831 and 1853, the taxonomic status of the genus is in doubt, and other species have been referred to other genera, primarily Boverisuchus.
History
Pristichampsus was first described and named as a species of Crocodylus, C. rollinatii, by John Edward Gray in 1831 on the basis of remains from the Lutetian Sables du Castrais Formation of France. Paul Gervais (1853) assigned this species to its own genus, creating the new combination Pristichampsus rollinatii.
Other species have been referred to this genus. The genera Boverisuchus and Weigeltisuchus from the Lutetian of Germany as well as Limnosaurus from North America were synonymized with Pristichampsus and their type species were reassigned to it. Langston (1975) found Limnosaurus to be based on non-diagnostic remains, and therefore considered it to be in its own genus, as a nomen dubium. He also reassigned Crocodylus vorax from the Lutetian of Wyoming and West Texas to Pristichampsus.
Efimov (1988) named two additional species of Pristichampsus, P. birjukovi and P. kuznetzovi from the Middle Eocene of Eastern Kazakhstan.
Following a revision of the genus Pristichampsus by Brochu (2013), P. rollinatii was found to be based on insufficiently diagnostic material and therefore is a nomen dubium. Boverisuchus was reinstated as a valid genus, and the species Weigeltisuchus geiseltalensis was considered to be synonymous with B. magnifrons. Brochu (2013) also reassigned P. vorax as the second species of Boverisuchus. According to Brochu (2013), material from the middle Eocene of Italy and Texas may represent other species of Boverisuchus.
References
Crocodilians
Terrestrial crocodylomorphs
Paleocene crocodylomorphs
Eocene crocodylomorphs
Eocene reptiles of North America
Eocene reptiles of Asia
Eocene reptiles of Europe
Fossil taxa described in 1853
Nomina dubia
Prehistoric pseudosuchian genera | Pristichampsus | [
"Biology"
] | 544 | [
"Biological hypotheses",
"Nomina dubia",
"Controversial taxa"
] |
5,603,128 | https://en.wikipedia.org/wiki/Silyl%20enol%20ether | In organosilicon chemistry, silyl enol ethers are a class of organic compounds that share the common functional group , composed of an enolate () bonded to a silane () through its oxygen end and an ethene group () as its carbon end. They are important intermediates in organic synthesis.
Synthesis
Silyl enol ethers are generally prepared by reacting an enolizable carbonyl compound with a silyl electrophile and a base, or just reacting an enolate with a silyl electrophile. Since silyl electrophiles are hard and silicon-oxygen bonds are very strong, the oxygen (of the carbonyl compound or enolate) acts as the nucleophile to form a Si-O single bond.
The most commonly used silyl electrophile is trimethylsilyl chloride. To increase the rate of reaction, trimethylsilyl triflate may also be used in the place of trimethylsilyl chloride as a more electrophilic substrate.
When using an unsymmetrical enolizable carbonyl compound as a substrate, the choice of reaction conditions can help control whether the kinetic or thermodynamic silyl enol ether is preferentially formed. For instance, when using lithium diisopropylamide (LDA), a strong and sterically hindered base, at low temperature (e.g., -78°C), the kinetic silyl enol ether (with a less substituted double bond) preferentially forms due to sterics. When using triethylamine, a weak base, the thermodynamic silyl enol ether (with a more substituted double bond) is preferred.
Alternatively, a rather exotic way of generating silyl enol ethers is via the Brook rearrangement of appropriate substrates.
Reactions
General reaction profile
Silyl enol ethers are neutral, mild nucleophiles (milder than enamines) that react with good electrophiles such as aldehydes (with Lewis acid catalysis) and carbocations. Silyl enol ethers are stable enough to be isolated, but are usually used immediately after synthesis.
Generation of lithium enolate
Lithium enolates, one of the precursors to silyl enol ethers, can also be generated from silyl enol ethers using methyllithium. The reaction occurs via nucleophilic substitution at the silicon of the silyl enol ether, producing the lithium enolate and tetramethylsilane.
C–C bond formation
Silyl enol ethers are used in many reactions resulting in alkylation, e.g., Mukaiyama aldol addition, Michael reactions, and Lewis-acid-catalyzed reactions with SN1-reactive electrophiles (e.g., tertiary, allylic, or benzylic alkyl halides). Alkylation of silyl enol ethers is especially efficient with tertiary alkyl halides, which form stable carbocations in the presence of Lewis acids.
Halogenation and oxidations
Halogenation of silyl enol ethers gives haloketones.
Acyloins form upon organic oxidation with an electrophilic source of oxygen such as an oxaziridine or mCPBA.
In the Saegusa–Ito oxidation, certain silyl enol ethers are oxidized to enones with palladium(II) acetate.
Sulfenylation
Reacting a silyl enol ether with PhSCl, a good and soft electrophile, provides a carbonyl compound sulfenylated at an alpha carbon. In this reaction, the trimethylsilyl group of the silyl enol ether is removed by the chloride ion released from the PhSCl upon attack of its electrophilic sulfur atom.
Hydrolysis
Hydrolysis of a silyl enol ether results in the formation of a carbonyl compound and a disiloxane. In this reaction, water acts as an oxygen nucleophile and attacks the silicon of the silyl enol ether. This leads to the formation of the carbonyl compound and a trimethylsilanol intermediate that undergoes nucleophilic substitution at silicon (by another trimethylsilanol) to give the disiloxane.
Ring contraction
Cyclic silyl enol ethers undergo regiocontrolled one-carbon ring contractions. These reactions employ electron-deficient sulfonyl azides, which undergo chemoselective, uncatalyzed [3+2] cycloaddition to the silyl enol ether, followed by loss of dinitrogen, and alkyl migration to give ring-contracted products in good yield. These reactions may be directed by substrate stereochemistry, giving rise to stereoselective ring-contracted product formation.
Silyl ketene acetals
Silyl enol ethers of esters (RCO2R′) or carboxylic acids (RCO2H) are called silyl ketene acetals and have the general structure R2C=C(OSiR3)(OR′). These compounds are more nucleophilic than the silyl enol ethers of ketones.
References
Functional groups
Organosilicon compounds
Ethers
Alkene derivatives
Silanes | Silyl enol ether | [
"Chemistry"
] | 1,128 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
5,603,283 | https://en.wikipedia.org/wiki/Cryptogenic%20species | A cryptogenic species ("cryptogenic" being derived from Greek "κρυπτός", meaning hidden, and "γένεσις", meaning origin) is a species whose origins are unknown. A cryptogenic species can be an animal or a plant, or a member of another kingdom or domain, such as a fungus, alga, bacterium, or even a virus.
In ecology, a cryptogenic species is one which may be either a native species or an introduced species, clear evidence for either origin being absent. An example is the Northern Pacific seastar (Asterias amurensis) in Alaska and Canada.
In palaeontology, a cryptogenic species is one which appears in the fossil record without clear affinities to an earlier species.
See also
Cosmopolitan distribution
Cryptozoology
References
Further reading
Ecology terminology | Cryptogenic species | [
"Biology"
] | 169 | [
"Ecology terminology"
] |
5,604,205 | https://en.wikipedia.org/wiki/Smart%20contract | A smart contract is a computer program or a transaction protocol that is intended to automatically execute, control or document events and actions according to the terms of a contract or an agreement. The objectives of smart contracts are the reduction of need for trusted intermediators, arbitration costs, and fraud losses, as well as the reduction of malicious and accidental exceptions. Smart contracts are commonly associated with cryptocurrencies, and the smart contracts introduced by Ethereum are generally considered a fundamental building block for decentralized finance (DeFi) and non-fungible token (NFT) applications.
The original Ethereum white paper by Vitalik Buterin in 2014 describes the Bitcoin protocol as a weak version of the smart contract concept as originally defined by Nick Szabo, and proposed a stronger version based on the Solidity language, which is Turing complete. Since then, various cryptocurrencies have supported programming languages which allow for more advanced smart contracts between untrusted parties.
A smart contract should not be confused with a smart legal contract, which refers to a traditional, natural-language, legally-binding agreement that has selected terms expressed and implemented in machine-readable code.
Etymology
By 1996, Nick Szabo was using the term "smart contract" to refer to contracts which would be enforced by physical property (such as hardware or software) instead of by law. Szabo described vending machines as an example of this concept. In 1998, the term was used to describe objects in the rights management service layer of the Stanford Infobus system, which was part of the Stanford Digital Library Project.
Legal status of smart contracts
A smart contract does not typically constitute a valid binding agreement at law. Proposals exist to regulate smart contracts.
Smart contracts are not legal agreements, but instead transactions which are executed automatically by a computer program or a transaction protocol, such as technological means for the automation of payment obligations such as by transferring cryptocurrencies or other tokens. Some scholars have argued that the imperative or declarative nature of programming languages would impact the legal validity of smart contracts.
Since the 2015 launch of the Ethereum blockchain, the term "smart contract" has been applied to general purpose computation that takes place on a blockchain. The US National Institute of Standards and Technology describes a "smart contract" as a "collection of code and data (sometimes referred to as functions and state) that is deployed using cryptographically signed transactions on the blockchain network". In this interpretation a smart contract is any kind of computer program which uses a blockchain. A smart contract also can be regarded as a secured stored procedure, as its execution and codified effects (like the transfer of tokens between parties) cannot be manipulated without modifying the blockchain itself. In this interpretation, the execution of contracts is controlled and audited by the platform, not by arbitrary server-side programs connecting to the platform.
In 2018, a US Senate report said: "While smart contracts might sound new, the concept is rooted in basic contract law. Usually, the judicial system adjudicates contractual disputes and enforces terms, but it is also common to have another arbitration method, especially for international transactions. With smart contracts, a program enforces the contract built into the code." States in the US which have passed legislation on the use of smart contracts include Arizona, Iowa, Nevada, Tennessee, and Wyoming.
In April 2021, the UK Jurisdiction Taskforce (UKJT) published the Digital Dispute Resolution Rules (the Digital DR Rules), which were intended to enable the rapid resolution of blockchain and crypto legal disputes in Britain.
Workings
Similar to a transfer of value on a blockchain, deployment of a smart contract on a blockchain occurs by sending a transaction from a wallet for the blockchain. The transaction includes the compiled code for the smart contract as well as a special receiver address. That transaction must then be included in a block that is added to the blockchain, at which point the smart contract's code will execute to establish the initial state of the smart contract. Byzantine fault-tolerant algorithms secure the smart contract in a decentralized way from attempts to tamper with it. Once a smart contract is deployed, it cannot be updated. Smart contracts on a blockchain can store arbitrary state and execute arbitrary computations. End clients interact with a smart contract through transactions. Such transactions with a smart contract can invoke other smart contracts. These transactions might result in changing the state and sending coins from one smart contract to another or from one account to another.
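The paragraph above can be made concrete with a toy model (a sketch only, not how any real blockchain client is implemented): a contract is "deployed" as code plus an initial state, and every node replays the same ordered transactions against it, so all honest nodes deterministically reach the same state.

```python
class ToyLedger:
    """A minimal, in-memory stand-in for a blockchain executing one contract."""

    def __init__(self, contract_code, initial_state):
        self.contract_code = contract_code   # "deployed" contract logic
        self.state = dict(initial_state)     # contract storage
        self.log = []                         # ordered, append-only transactions

    def send_transaction(self, tx):
        """Append a transaction and execute the contract deterministically."""
        self.log.append(tx)
        self.state = self.contract_code(self.state, tx)
        return self.state

def escrow_contract(state, tx):
    """Toy escrow: funds are released only when both parties have approved."""
    new_state = dict(state)
    if tx["action"] == "approve" and tx["sender"] in ("buyer", "seller"):
        new_state[tx["sender"] + "_approved"] = True
    if new_state.get("buyer_approved") and new_state.get("seller_approved"):
        new_state["released"] = True
    return new_state

node = ToyLedger(escrow_contract, {"released": False})
node.send_transaction({"sender": "buyer", "action": "approve"})
print(node.send_transaction({"sender": "seller", "action": "approve"}))
# {'released': True, 'buyer_approved': True, 'seller_approved': True}
```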
The most popular blockchain for running smart contracts is Ethereum. On Ethereum, smart contracts are typically written in a Turing-complete programming language called Solidity, and compiled into low-level bytecode to be executed by the Ethereum Virtual Machine. Due to the halting problem and other security problems, Turing-completeness is considered to be a risk and is deliberately avoided by languages like Vyper. Other smart contract programming languages that lack Turing-completeness include Simplicity, Scilla, Ivy and Bitcoin Script. However, measurements in 2020 using regular expressions showed that only 35.3% of 53,757 Ethereum smart contracts at that time included recursions and loops, constructs connected to the halting problem.
Several languages are designed to enable formal verification: Bamboo, IELE, Simplicity, Michelson (can be verified with Coq), Liquidity (compiles to Michelson), Scilla, DAML and Pact.
Processes on a blockchain are generally deterministic in order to ensure Byzantine fault tolerance. Nevertheless, real world applications of smart contracts, such as lotteries and casinos, require secure randomness. Blockchain technology can reduce the cost of conducting a lottery, which benefits the participants. Randomness on blockchain can be implemented by using block hashes or timestamps, oracles, commitment schemes, special smart contracts like RANDAO and Quanta, as well as sequences from mixed strategy Nash equilibria.
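As one illustration of how a commitment scheme can supply shared randomness, here is a minimal off-chain Python sketch (not tied to any particular blockchain or library): each participant first publishes a hash commitment to a secret and only later reveals the secret, so no single participant can bias the combined value after seeing the others' commitments.

```python
import hashlib
import secrets

def commit(secret: bytes) -> str:
    """Publish this hash during the commit phase."""
    return hashlib.sha256(secret).hexdigest()

def verify_and_combine(commitments: list[str], reveals: list[bytes]) -> int:
    """During the reveal phase, check each secret against its commitment
    and combine all secrets into a single pseudo-random integer."""
    combined = hashlib.sha256()
    for c, r in zip(commitments, reveals):
        if hashlib.sha256(r).hexdigest() != c:
            raise ValueError("reveal does not match commitment")
        combined.update(r)
    return int.from_bytes(combined.digest(), "big")

# Example with two participants
secret_values = [secrets.token_bytes(32) for _ in range(2)]
commitments = [commit(s) for s in secret_values]                 # commit phase
random_value = verify_and_combine(commitments, secret_values)    # reveal phase
print(random_value % 6)  # e.g., a dice roll derived from the shared randomness
```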
Applications
In 1998, Szabo proposed that smart contract infrastructure can be implemented by replicated asset registries and contract execution using cryptographic hash chains and Byzantine fault-tolerant replication. Askemos implemented this approach in 2002 using Scheme (later adding SQLite) as the contract script language.
One proposal for using Bitcoin for replicated asset registration and contract execution is called "colored coins". Replicated titles for potentially arbitrary forms of property, along with replicated contract execution, are implemented in different projects.
UBS has experimented with "smart bonds" that use the bitcoin blockchain, in which payment streams could hypothetically be fully automated, creating a self-paying instrument.
Inheritance wishes could hypothetically be implemented automatically upon registration of a death certificate by means of smart contracts. Birth certificates can also work together with smart contracts.
Chris Snook of Inc.com suggests smart contracts could also be used to handle real estate transactions and could be used in the field of title records and in the public register.
Seth Oranburg and Liya Palagashvili argue that smart contracts could also be used in employment contracts, especially temporary employment contracts, which according to them would benefit the employer.
Security issues
The transaction data from a blockchain-based smart contract is visible to all users of the blockchain. The data provides a cryptographic view of the transactions; however, this leads to a situation where bugs, including security holes, are visible to all yet may not be quickly fixed. Such an attack, difficult to fix quickly, was successfully executed on The DAO in June 2016, draining approximately US$50 million worth of Ether at the time, while developers attempted to come to a solution that would gain consensus. The DAO program had a time delay in place before the hacker could remove the funds; a hard fork of the Ethereum software was done to claw back the funds from the attacker before the time limit expired. Other high-profile attacks include the Parity multisignature wallet attacks and an integer underflow/overflow attack (2018), which together affected funds worth hundreds of millions of US dollars.
Issues in Ethereum smart contracts, in particular, include ambiguities and easy-but-insecure constructs in its contract language Solidity, compiler bugs, Ethereum Virtual Machine bugs, attacks on the blockchain network, the immutability of bugs and that there is no central source documenting known vulnerabilities, attacks and problematic constructs.
Difference from smart legal contracts
Smart legal contracts are distinct from smart contracts. As mentioned above, a smart contract is not necessarily legally enforceable as a contract. On the other hand, a smart legal contract has all the elements of a legally enforceable contract in the jurisdiction in which it can be enforced and it can be enforced by a court or tribunal. Therefore, while every smart legal contract will contain some elements of a smart contract, not every smart contract will be a smart legal contract.
There is no formal definition of a smart legal contract in the legal industry.
A Ricardian contract is a type of smart legal contract.
See also
Code and Other Laws of Cyberspace
Decentralized application
Ethereum
Regulation by algorithms
Regulation of algorithms
Ricardian contract (a design pattern to capture the intent of the agreement of parties)
Loan
Secure multiparty computation
Transparency
References
Blockchains
Computer law
Contract law
Cryptocurrencies | Smart contract | [
"Technology"
] | 1,925 | [
"Computer law",
"Computing and society"
] |
5,604,680 | https://en.wikipedia.org/wiki/Leeuwenhoek%20Lecture | The Leeuwenhoek Lecture is a prize lecture of the Royal Society to recognize achievement in microbiology. The prize was originally given in 1950 and awarded annually, but from 2006 to 2018 was given triennially. Since 2018 it has been awarded biennially.
The prize is named after the Dutch microscopist Antonie van Leeuwenhoek and was instituted in 1948 from a bequest from George Gabb. A gift of £2000 is associated with the lecture.
Leeuwenhoek Lecturers
The following is a list of Leeuwenhoek Lecture award winners along with the title of their lecture:
21st Century
2024 Joanne Webster, for her achievements in advancing control of disease in humans and animals caused by parasites in Asia and Africa
2022 Sjors Scheres, for ground-breaking contributions and innovations in image analysis and reconstruction methods in electron cryo-microscopy, enabling the structure determination of complex macromolecules of fundamental biological and medical importance to atomic resolution
2020 Geoffrey L. Smith, for his studies of poxviruses which has had major impact in wider areas, notably vaccine development, biotechnology, host-pathogen interactions and innate immunity
2018 Sarah Cleaveland, Can we make rabies history? Realising the value of research for the global elimination of rabies
2015 Jeffrey Errington, for his seminal discoveries in relation to the cell cycle and cell morphogenesis in bacteria
2012 Brad Amos, How new science is transforming the optical microscope
2010 Robert Gordon Webster, Pandemic Influenza: one flu over the cuckoo's nest
2006 Richard Anthony Crowther, Microscopy goes cold: frozen viruses reveal their structural secrets.
2005 Keith Chater, Streptomyces inside out: a new perspective on the bacteria that provide us with antibiotics.
2004 David Sherratt, A bug's life
2003 Brian Spratt, Bacterial populations and bacterial disease
2002 Stephen West, DNA repair from microbes to man
2001 Robin Weiss, From Pan to pandemic: animal to human infections
20th Century
2000 Howard Dalton, The natural and unnatural history of methane-oxidising bacteria
1999 Peter C. Doherty, Killer T cells and virus infections
1998 George A.M. Cross, The genetics and cell biology of antigenic variation in trypanosomes
1997 Peter Biggs, Marek's disease, tumours and prevention
1996 Julian Davies, Microbial molecular diversity - function, evolution and applications
1995 John Guest, Adaptation to life without oxygen
1994 Keith Vickerman, The opportunistic parasite
1993 Fred Brown, Peptide vaccines, dream or reality.
1992 John Postgate, Bacterial evolution and the nitrogen-fixing plant
1991 Harry Smith, The influence of the host on microbes that cause disease
1990 John Skehel, How enveloped viruses enter cells
1989 Piet Borst, Antigenic variation in African trypanosomes
1988 Alfred Rupert Hall, Antoni van Leeuwenhoek (1632-1723) and Anglo-Dutch collaboration
1987 David Alan Hopwood, Towards an understanding of gene switching in streptomyces, the basis of sporulation and antibiotic production
1986 William Fleming Hoggan Jarrett, Environmental carcinogens and papillomaviruses in the pathogenesis of cancer.
1985 Kenneth Murray, A molecular biologist's view of viral hepatitis
1984 William Duncan Paterson Stewart, The functional organisation of nitrogen-fixing cyanobacteria.
1983 Michael Anthony Epstein, A prototype vaccine to prevent Epstein-Barr (E.B.) virus-associated tumours.
1982 Hamao Umezawa, Studies of microbial products in rising to the challenge of curing cancer
1981 Frank William Ernest Gibson, The biochemical and genetic approach to the study of bioenergetics with the use of Escherichia coli: progress and prospects.
1980 David Arthur John Tyrrell, Is it a virus?
1979 Patricia Hannah Clarke, Experiments in microbial evolution: new enzymes, new metabolic activities.
1978 Hugh John Forster Cairns, Bacteria as proper subjects for cancer research.
1977 Francois Jacob, Mouse teratocarcinoma and mouse embryo.
1976 Geoffrey Herbert Beale, The varied contributions of protozoa to genetical knowledge
1975 Joel Mandelstam, Bacterial sporulation: a problem in the biochemistry and genetics of a primitive development system.
1974 Renato Dulbecco, The control of cell growth regulation by tumour-inducing viruses: a challenging problem.
1973 Aaron Klug, The structure and assembly of regular viruses
1972 Hans Leo Kornberg, Carbohydrate transport by micro-organisms
1971 Michael George Parke Stoker, Tumour viruses and the sociology of fibroblasts
1970 Philip Herries Gregory, Airborne microbes: their significance and distribution
1969 Jacques Lucien Monod, Cellular and molecular cybernetics.
1968 Gordon Elliott Fogg, The physiology of an algal nuisance
1967 James Baddiley, Teichoic acids and the molecular structure of bacterial walls
1966 Percy Wragg Brian, Obligate parasitism in fungi
1965 William Hayes, Some controversial aspects of bacterial sexuality
1964 Donald Devereux Woods, A pattern of research with two bacterial growth factors
1963 Norman Wingate Pirie, The size of small organisms
1962 Guido Pontecorvo, Microbial genetics: achievements and prospects
1961 Frank John Fenner, Interactions between poxviruses
1960 Andre Michel Lwoff, Viral functions
1959 Frederick Charles Bawden, Viruses: retrospect and prospect
1958 David Keilin, The problem of anabiosis or latent life: history and current concepts
1957 Wilson Smith, Virus-host cell interactions
1956 Ernest Frederick Gale, The biochemical organization of the bacterial cell
1955 Henry Gerard Thornton, The ecology of micro-organisms in soil.
1954 Juda Hirsch Quastel, Soil metabolism
1953 Kenneth Manley Smith, Some aspects of the behaviour of certain viruses in their hosts and of their development in the cell.
1952 Albert Jan Kluyver, The changing appraisal of the microbe
1951 Christopher Howard Andrewes, The place of viruses in nature
1950 Paul Gordon Fildes, The development of microbiology.
References
Antonie van Leeuwenhoek
Biology education in the United Kingdom
Microbiology
Royal Society lecture series
1950 establishments in the United Kingdom
Recurring events established in 1950 | Leeuwenhoek Lecture | [
"Chemistry",
"Biology"
] | 1,260 | [
"Microbiology",
"Microscopy"
] |
5,604,790 | https://en.wikipedia.org/wiki/Batman%3A%20Contagion | Contagion is a story arc that ran through the various Batman comic book series. It concerns the outbreak of a lethal disease in Gotham City, and Batman's attempts to combat it. The events of this story led into Batman: Legacy and Batman: Cataclysm, which itself leads into Batman: No Man's Land. It ran from March through April 1996.
Much of the plot centers around a gated community in the middle of Gotham City, whose wealthy residents believe they can protect themselves from the plague by sealing themselves inside, only to discover that one of their number is the plague's first carrier. In this the story parallels the plot of Edgar Allan Poe's short story The Masque of the Red Death, which one of the community's dying residents mentions.
Plot
Batman: Shadow of the Bat #48
Azrael sends a report to Batman that a plague connected to The Sacred Order of Saint Dumas is on its way to Gotham City. Robin finds that a private plane has just landed in Gotham from Africa. The passenger, Peter Maris, enters his exclusive condominium complex, Babylon Towers. Believing that the plague is about to infect Gotham, he proposes to the other homeowners that they dismiss their servants and seal the building, which is entirely self-sufficient, and they agree. What Maris does not know is that he is already infected, and has passed the infection to his pilot, who joins the servants entering the city. Batman infiltrates a military facility where the Ebola Gulf-A virus, also known as "the Clench", was being studied. The military's head of research, General Derwent, has been accidentally infected and is slowly dying.
Detective Comics #695 / Robin #27 / Catwoman #31
Batman and Robin trace the original source of the outbreak to Babylon Towers, eavesdropping while Peter Maris, now realizing that he is infected, tells the other residents about a survivor from a previous outbreak in Greenland. The residents post an enormous reward for his live capture. As a result, when Robin and Alfred arrive in Canada to find him, Catwoman and a bounty hunter named Tracker are already there. The two pairs' initial clash is halted by Azrael, whom Batman asked to keep an eye on Robin. While they are trying to get the survivor to safety, they are ambushed by gunmen. The survivor dies, but gives Robin a sample of his blood, along with the names of two other possible survivors.
Azrael #15
Azrael joins forces with Tracker and Catwoman, while Robin returns to Gotham City with the blood. They track the second survivor, a Chinese gangster, to a yacht and overpower his guards. Unfortunately for them, the gangster has come to regard his survival of the plague as proof that he is immortal, and demonstrates his immortality by stabbing himself in the throat with a knife, lingering just long enough to realize that he was wrong.
Batman #529
Gotham City has been quarantined. Robin and Azrael deliver blood samples from the two survivors to Batman, who attempts to create an antidote. Believing he has succeeded, Batman has Poison Ivy released from Arkham Asylum to deliver the cure to Babylon Towers, since she is immune to all toxins and diseases. Nightwing and Huntress join forces to try and quell riots outside Babylon Towers, while the Gotham Police Department, completely disgusted with the ineffective leadership of Mayor Krol and his toady, Commissioner Andy Howe, ask James Gordon to resume his old post.
Batman: Shadow of the Bat #49
Inside Babylon Towers, Poison Ivy amuses herself by soliciting bids for the antidote from the richest and most desperate residents. Batman and Gordon both enter the complex to re-capture her and quell the unrest inside, learning that the antidote does not work. Outside, Robin's contact with one of the rioters leads to him being infected.
Detective Comics #696
Nightwing rushes Robin to the Batcave and puts him under Alfred's care. The state governor, fed up with Krol's inaction, declares martial law and sends in the National Guard. Batman re-captures Ivy and escapes Babylon Towers with her and Gordon, before the mob burns it to the ground.
The Batman Chronicles #4
Batman learns of a secret government entity that might be able to provide a cure for the plague, but Hitman kills its agent first. Huntress tries to find one of her students who lost his entire family to the plague, and finds him already dying of it. Delirious from fever, Tim Drake dreams of a life in which his mother is still alive, and his parents and girlfriend know about him being Robin.
Catwoman #32
Catwoman picks up the trail of the third and last survivor from the Greenland outbreak, finding her in Florida and taking her back to Gotham City. She is disappointed to find Babylon Towers has burned to the ground, meaning that there is no one left alive to pay the bounty – and even more disappointed when Batman informs her that no cure can be synthesized from any of the survivors' blood.
Azrael #16
Seeing the plague virus through the microscope, Azrael realizes that he has seen it before, meaning it was engineered and unleashed by the Order of St. Dumas. Searching through the texts Brian took from the Order, Lilhy is able to translate the formula for the antidote. Azrael rushes it to Gotham City, evading assassins sent by the Order to stop him, and delivers the formula to a hospital.
Robin #28
Nightwing returns to the Batcave and finds Tim cured. Batman reminds them that they still have to restore order in the city, even after Marion Grange replaces Armand Krol and reinstates Gordon as Police Commissioner. Despite almost dying from the plague, Robin joins Batman and Nightwing in aiding the police against a local gang, trying to hold an infected family for ransom by blocking off the medical authorities. This display of resolve pierces Catwoman's normal cynicism, and she lends a hand as well, after which Tim returns home.
Collected editions
A trade paperback collecting all the issues involved in the crossover, apart from The Batman Chronicles #4, was published in 1996. In 2016, the story was collected again in an edition that did include this issue, along with Robin #29-30.
References
Biological weapons in popular culture
Viral outbreaks in comics
Ebola in popular culture
Works based on The Masque of the Red Death | Batman: Contagion | [
"Biology"
] | 1,318 | [
"Biological weapons in popular culture",
"Biological warfare"
] |
5,605,079 | https://en.wikipedia.org/wiki/Proof%20of%20the%20Euler%20product%20formula%20for%20the%20Riemann%20zeta%20function | Leonhard Euler proved the Euler product formula for the Riemann zeta function in his thesis Variae observationes circa series infinitas (Various Observations about Infinite Series), published by St Petersburg Academy in 1737.
The Euler product formula
The Euler product formula for the Riemann zeta function reads
$$\sum_{n=1}^{\infty}\frac{1}{n^{s}} = \prod_{p\ \mathrm{prime}}\frac{1}{1-p^{-s}},$$
where the left hand side equals the Riemann zeta function:
$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^{s}} = 1+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\frac{1}{4^{s}}+\cdots \qquad (\Re(s)>1)$$
and the product on the right hand side extends over all prime numbers p:
$$\prod_{p\ \mathrm{prime}}\frac{1}{1-p^{-s}} = \frac{1}{1-2^{-s}}\cdot\frac{1}{1-3^{-s}}\cdot\frac{1}{1-5^{-s}}\cdots$$
Proof of the Euler product formula
This sketch of a proof makes use of simple algebra only. This was the method by which Euler originally discovered the formula. There is a certain sieving property that we can use to our advantage:
$$\zeta(s) = 1+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\frac{1}{4^{s}}+\frac{1}{5^{s}}+\cdots$$
$$\frac{1}{2^{s}}\,\zeta(s) = \frac{1}{2^{s}}+\frac{1}{4^{s}}+\frac{1}{6^{s}}+\frac{1}{8^{s}}+\cdots$$
Subtracting the second equation from the first we remove all elements that have a factor of 2:
$$\left(1-\frac{1}{2^{s}}\right)\zeta(s) = 1+\frac{1}{3^{s}}+\frac{1}{5^{s}}+\frac{1}{7^{s}}+\frac{1}{9^{s}}+\cdots$$
Repeating for the next term:
$$\frac{1}{3^{s}}\left(1-\frac{1}{2^{s}}\right)\zeta(s) = \frac{1}{3^{s}}+\frac{1}{9^{s}}+\frac{1}{15^{s}}+\frac{1}{21^{s}}+\cdots$$
Subtracting again we get:
$$\left(1-\frac{1}{3^{s}}\right)\left(1-\frac{1}{2^{s}}\right)\zeta(s) = 1+\frac{1}{5^{s}}+\frac{1}{7^{s}}+\frac{1}{11^{s}}+\cdots$$
where all elements having a factor of 3 or 2 (or both) are removed.
It can be seen that the right side is being sieved. Repeating infinitely for $\frac{1}{p^{s}}$ where $p$ is prime, we get:
$$\cdots\left(1-\frac{1}{5^{s}}\right)\left(1-\frac{1}{3^{s}}\right)\left(1-\frac{1}{2^{s}}\right)\zeta(s) = 1$$
Dividing both sides by everything but the ζ(s) we obtain:
$$\zeta(s) = \frac{1}{\left(1-\frac{1}{2^{s}}\right)\left(1-\frac{1}{3^{s}}\right)\left(1-\frac{1}{5^{s}}\right)\left(1-\frac{1}{7^{s}}\right)\cdots}$$
This can be written more concisely as an infinite product over all primes p:
$$\zeta(s) = \prod_{p\ \mathrm{prime}}\frac{1}{1-p^{-s}}$$
To make this proof rigorous, we need only to observe that when $\Re(s)>1$, the sieved right-hand side approaches 1, which follows immediately from the convergence of the Dirichlet series for $\zeta(s)$.
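A quick numerical sanity check of the identity (a sketch using only the Python standard library; the truncation points and the prime limit are arbitrary choices for illustration):

```python
def primes_up_to(limit):
    """Simple trial-division prime generator for the illustration."""
    ps = []
    for n in range(2, limit + 1):
        if all(n % p for p in ps):
            ps.append(n)
    return ps

def zeta_partial_sum(s, terms=100000):
    return sum(n ** -s for n in range(1, terms + 1))

def euler_partial_product(s, prime_limit=1000):
    prod = 1.0
    for p in primes_up_to(prime_limit):
        prod *= 1.0 / (1.0 - p ** -s)
    return prod

s = 2.0
print(zeta_partial_sum(s))       # ≈ 1.6449 (pi^2 / 6 ≈ 1.644934)
print(euler_partial_product(s))  # ≈ 1.6449 (agrees to several decimal places)
```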
The case s = 1
An interesting result can be found for ζ(1), the harmonic series:
$$\left(1-\frac{1}{2}\right)\left(1-\frac{1}{3}\right)\left(1-\frac{1}{5}\right)\left(1-\frac{1}{7}\right)\cdots\zeta(1) = 1$$
which can also be written as,
$$\left(\frac{1}{2}\right)\left(\frac{2}{3}\right)\left(\frac{4}{5}\right)\left(\frac{6}{7}\right)\cdots\zeta(1) = 1$$
which is,
$$\left(\frac{1\cdot 2\cdot 4\cdot 6\cdots}{2\cdot 3\cdot 5\cdot 7\cdots}\right)\zeta(1) = 1$$
as,
$$\zeta(1) = 1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots$$
thus,
$$1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots = \frac{2\cdot 3\cdot 5\cdot 7\cdots}{1\cdot 2\cdot 4\cdot 6\cdots}$$
While the series ratio test is inconclusive for the left-hand side, it may be shown divergent by bounding logarithms. Similarly, for the right-hand side an infinite product of reals greater than one does not guarantee divergence; for example, a convergent product of this kind is
$$\prod_{n=1}^{\infty}\left(1+\frac{1}{n^{2}}\right) = \frac{\sinh\pi}{\pi} < \infty.$$
Instead, the partial products (whose numerators are primorials) may be bounded, using $\frac{1}{1-x}\ge e^{x}$ for $0\le x<1$, as
$$\prod_{p\le q}\frac{1}{1-\frac{1}{p}} \ \ge\ \exp\left(\sum_{p\le q}\frac{1}{p}\right),$$
so that divergence is clear given the double-logarithmic divergence of the inverse prime series.
(Note that Euler's original proof ran in the converse direction: he deduced the divergence of the inverse prime series from the divergence of the Euler product for the harmonic series.)
Another proof
Each factor (for a given prime p) in the product above can be expanded to a geometric series consisting of the reciprocal of p raised to multiples of s, as follows
$$\frac{1}{1-p^{-s}} = 1+\frac{1}{p^{s}}+\frac{1}{p^{2s}}+\frac{1}{p^{3s}}+\cdots+\frac{1}{p^{ks}}+\cdots$$
When $\Re(s)>1$, this series converges absolutely. Hence we may take a finite number of factors, multiply them together, and rearrange terms. Taking all the primes p up to some prime number limit q, we have
$$\left|\zeta(s)-\prod_{p\le q}\frac{1}{1-p^{-s}}\right| \ <\ \sum_{n=q+1}^{\infty}\frac{1}{n^{\sigma}},$$
where σ is the real part of s. By the fundamental theorem of arithmetic, the partial product when expanded out gives a sum consisting of those terms $n^{-s}$ where n is a product of primes less than or equal to q. The inequality results from the fact that only integers larger than q can then fail to appear in this expanded partial product. Since the difference between the partial product and ζ(s) goes to zero when σ > 1, we have convergence in this region.
See also
Euler product
Riemann zeta function
References
John Derbyshire, Prime Obsession: Bernhard Riemann and The Greatest Unsolved Problem in Mathematics, Joseph Henry Press, 2003.
Notes
Zeta and L-functions
Article proofs
Leonhard Euler
Infinite products | Proof of the Euler product formula for the Riemann zeta function | [
"Mathematics"
] | 693 | [
"Mathematical analysis",
"Article proofs",
"Infinite products"
] |
5,605,137 | https://en.wikipedia.org/wiki/Oligomycin | Oligomycins are macrolides created by Streptomyces that are strong antibacterial agents but are often poisonous to other organisms, including humans.
Function
Oligomycins have use as antibiotics. However, in humans, they have limited or no clinical use due to their toxic effects on mitochondria and ATP synthase.
Oligomycin A is an inhibitor of ATP synthase. In oxidative phosphorylation research, it is used to prevent state 3 (phosphorylating) respiration. Oligomycin A inhibits ATP synthase by blocking its proton channel (FO subunit), which is necessary for oxidative phosphorylation of ADP to ATP (energy production). The inhibition of ATP synthesis by oligomycin A will significantly reduce electron flow through the electron transport chain; however, electron flow is not stopped completely due to a process known as proton leak or mitochondrial uncoupling. This process is due to facilitated diffusion of protons into the mitochondrial matrix through an uncoupling protein such as thermogenin, or UCP1.
Administering oligomycin to rats can result in very high levels of lactate accumulating in the blood and urine.
References
Macrolide antibiotics
Spiro compounds
Diketones
ATP synthase inhibitors | Oligomycin | [
"Chemistry"
] | 293 | [
"Organic compounds",
"Spiro compounds"
] |
5,605,480 | https://en.wikipedia.org/wiki/Thermostability | In materials science and molecular biology, thermostability is the ability of a substance to resist irreversible change in its chemical or physical structure, often by resisting decomposition or polymerization, at a high relative temperature.
Thermostable materials may be used industrially as fire retardants. A thermostable plastic, an uncommon and unconventional term, is more likely to refer to a thermosetting plastic, which cannot be reshaped when heated, than to a thermoplastic, which can be remelted and recast.
Thermostability is also a property of some proteins. To be a thermostable protein means to be resistant to changes in protein structure due to applied heat.
Thermostable proteins
Most life-forms on Earth live at temperatures of less than 50 °C, commonly from 15 to 50 °C. Within these organisms are macromolecules (proteins and nucleic acids) which form the three-dimensional structures essential to their enzymatic activity. Above the native temperature of the organism, thermal energy may cause these macromolecules to unfold and denature, as heat can disrupt the intramolecular bonds in the tertiary and quaternary structure. This unfolding results in loss of enzymatic activity, which is understandably deleterious to continuing life-functions. An example is the denaturation of proteins in albumen, which changes from a clear, nearly colourless liquid to an opaque white, insoluble gel.
Proteins capable of withstanding such high temperatures are generally found in microorganisms that are hyperthermophiles. Such organisms can withstand temperatures above 50 °C as they usually live within environments of 85 °C and above. Certain thermophilic life-forms exist which can withstand temperatures above this, and have corresponding adaptations to preserve protein function at these temperatures. These can include altered bulk properties of the cell to stabilize all proteins, and specific changes to individual proteins. Comparing homologous proteins present in these thermophiles and other organisms reveals some differences in the protein structure. One notable difference is the presence of extra hydrogen bonds in the thermophile's proteins—meaning that the protein structure is more resistant to unfolding. Similarly, thermostable proteins are rich in salt bridges and/or extra disulfide bridges that stabilize the structure. Other factors in protein thermostability are the compactness of the protein structure, oligomerization, and the strength of interactions between subunits.
Uses and applications
Polymerase chain reactions
Thermostable DNA polymerases such as Taq polymerase and Pfu DNA polymerase are used in polymerase chain reactions (PCR) where temperatures of 94 °C or over are used to melt DNA strands in the denaturation step of PCR. This resistance to high temperature allows for DNA polymerase to elongate DNA with a desired sequence of interest with the presence of dNTPs.
Feed additives
Enzymes are often added to animal feed to improve the health and growth of farmed animals, particularly chickens and pigs. The feed is typically treated with high pressure steam to kill bacteria such as Salmonella. Therefore the added enzymes (e.g. phytase and xylanase) must be able to withstand this thermal challenge without being irreversibly inactivated.
Protein purification
Knowledge of an enzyme's resistance to high temperatures is especially beneficial in protein purification. In the procedure of heat denaturation, one can subject a mixture of proteins to high temperatures, which will result in the denaturation of proteins that are not thermostable, and the isolation of the protein that is thermodynamically stable. One notable example of this is found in the purification of alkaline phosphatase from the hyperthermophile Pyrococcus abyssi. This enzyme is known for being heat stable at temperatures greater than 95 °C, and therefore can be partially purified by heating when heterologously expressed in E. coli. The increase in temperature causes the E. coli proteins to precipitate, while the P. abyssi alkaline phosphatase remains stably in solution.
Glycoside hydrolases
Another important group of thermostable enzymes are the glycoside hydrolases. These enzymes are responsible for the degradation of the major fraction of biomass, the polysaccharides present in starch and lignocellulose. Thus, glycoside hydrolases are gaining great interest in biorefining applications for the future bioeconomy. Some examples are the production of monosaccharides for food applications and for use as a carbon source for microbial conversion into fuels (ethanol) and chemical intermediates, the production of oligosaccharides for prebiotic applications, and the production of surfactants of the alkyl glycoside type. All of these processes often involve thermal treatments to facilitate polysaccharide hydrolysis, hence thermostable variants of glycoside hydrolases play an important role in this context.
Approaches to improve thermostability of proteins
Protein engineering can be used to enhance the thermostability of proteins. A number of site-directed and random mutagenesis techniques, in addition to directed evolution, have been used to increase the thermostability of target proteins. Comparative methods have been used to increase the stability of mesophilic proteins based on comparison to thermophilic homologs. Additionally, analysis of protein unfolding by molecular dynamics can be used to understand the unfolding process and then design stabilizing mutations. Rational protein engineering for increasing protein thermostability includes mutations that truncate loops, increase the number of salt bridges or hydrogen bonds, or introduce disulfide bonds. In addition, ligand binding can increase the stability of the protein, particularly when purified. Several different forces contribute to the thermostability of a particular protein: hydrophobic interactions, electrostatic interactions, and the presence of disulfide bonds. The overall amount of hydrophobicity present in a particular protein contributes to its thermostability. Electrostatic interactions between molecules, including salt bridges and hydrogen bonds, also contribute; salt bridges are largely unaffected by high temperatures and are therefore important for protein and enzyme stability. A third contribution comes from disulfide bonds, which provide covalent cross-links between polypeptide chains and, being covalent, are stronger than non-covalent intermolecular forces. Glycosylation is another way to improve the thermostability of proteins: stereoelectronic effects in stabilizing interactions between carbohydrate and protein can lead to thermostabilization of the glycosylated protein.
Cyclizing enzymes by covalently linking the N-terminus to the C-terminus has been applied to increase the thermostability of many enzymes. Intein cyclization and SpyTag/SpyCatcher cyclization have often been employed.
Thermostable toxins
Certain poisonous fungi contain thermostable toxins, such as amatoxin found in the death cap and autumn skullcap mushrooms and patulin from molds. Therefore, applying heat to these will not remove the toxicity and is of particular concern for food safety.
See also
Thermophiles
Thermus thermophilus
Thermus aquaticus
Pyrococcus furiosus
References
External links
Thermostability of Proteins
Protein structure
Toxicology
Extremophiles | Thermostability | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,588 | [
"Toxicology",
"Organisms by adaptation",
"Extremophiles",
"Bacteria",
"Structural biology",
"Environmental microbiology",
"Protein structure"
] |
5,605,670 | https://en.wikipedia.org/wiki/Narcissism | Narcissism is a self-centered personality style characterized as having an excessive preoccupation with oneself and one's own needs, often at the expense of others. Narcissism, named after the Greek mythological figure Narcissus, has evolved into a psychological concept studied extensively since the early 20th century, and it has been deemed highly relevant in various societal domains.
Narcissism exists on a continuum that ranges from normal to abnormal personality expression. While many psychologists believe that a moderate degree of narcissism is normal and healthy in humans, there are also more extreme forms, observable particularly in people who have a personality condition like narcissistic personality disorder (NPD), where one's narcissistic qualities become pathological, leading to functional impairment and psychosocial disability. It has also been discussed in dark triad studies, along with subclinical psychopathy and Machiavellianism.
Historical background
The term narcissism is derived from Narcissus, a character in Greek mythology best known from the telling in Roman poet Ovid's Metamorphoses, written in 8 CE. Book III of the poem tells the mythical story of a handsome young man, Narcissus, who spurns the advances of many potential lovers. When Narcissus rejects the nymph Echo, who was cursed to only echo the sounds that others made, the gods punish Narcissus by making him fall in love with his own reflection in a pool of water. When Narcissus discovers that the object of his love cannot love him back, he slowly pines away and dies.
The concept of excessive selfishness has been recognized throughout history. In ancient Greece, the concept was understood as hubris. Some religious movements such as the Hussites attempted to rectify what they viewed as the shattering and narcissistic cultures of recent centuries.
It was not until the late 1800s that narcissism began to be defined in psychological terms. Since that time, the term has had a significant divergence in meaning in psychology. It has been used to describe:
A sexual perversion,
A normal developmental stage,
A symptom in psychosis, and
A characteristic in several of the object relations subtypes.
In 1889, psychiatrists Paul Näcke and Havelock Ellis used the term "narcissism", independently of each other, to describe a person who treats their own body in the same way in which the body of a sexual partner is ordinarily treated. Narcissism, in this context, was seen as a perversion that consumed a person's entire sexual life. In 1911 Otto Rank published the first clinical paper about narcissism, linking it to vanity and self-admiration.
In an essay in 1913 called "The God complex", Ernest Jones considered extreme narcissism as a character trait. He described people with the God complex as being aloof, self-important, overconfident, auto-erotic, inaccessible, self-admiring, and exhibitionistic, with fantasies of omnipotence and omniscience. He observed that these people had a high need for uniqueness.
Sigmund Freud (1914) published his theory of narcissism in a lengthy essay titled "On Narcissism: An Introduction". For Freud, narcissism refers to the individual's direction of libidinal energy toward themselves rather than objects and others. He postulated a universal "primary narcissism", that was a phase of sexual development in early infancy – a necessary intermediate stage between auto-eroticism and object-love, love for others. Portions of this 'self-love' or ego-libido are, at later stages of development, expressed outwardly, or "given off" toward others. Freud's postulation of a "secondary narcissism" came as a result of his observation of the peculiar nature of the schizophrenic's relation to themselves and the world. He observed that the two fundamental qualities of such patients were megalomania and withdrawal of interest from the real world of people and things: "the libido that has been withdrawn from the external world has been directed to the ego and thus gives rise to an attitude which may be called narcissism." It is a secondary narcissism because it is not a new creation but a magnification of an already existing condition (primary narcissism).
In 1925, Robert Waelder conceptualized narcissism as a personality trait. His definition described individuals who are condescending, feel superior to others, are preoccupied with admiration, and exhibit a lack of empathy. Waelder's work and his case study have been influential in the way narcissism and the clinical disorder narcissistic personality disorder are defined today. His patient was a successful scientist with an attitude of superiority, an obsession with fostering self-respect, and a lack of normal feelings of guilt. The patient was aloof and independent from others, had an inability to empathize with others, and was selfish sexually. Waelder's patient was also overly logical and analytical and valued abstract intellectual thought over the practical application of scientific knowledge.
Karen Horney (1939) postulated that narcissism was on a spectrum that ranged from healthy self-esteem to a pathological state.
The term entered the broader social consciousness following the publication of The Culture of Narcissism by Christopher Lasch in 1979. Since then, social media, bloggers, and self-help authors have indiscriminately applied "narcissism" as a label for the self-serving and for all domestic abusers.
Characteristics
Normal and healthy levels of narcissism
Some psychologists suggest that a moderate level of narcissism is supportive of good psychological health. Self-esteem works as a mediator between narcissism and psychological health. Elevated self-esteem, in moderation, supports resilience and ambition, but excessive self-focus can distort social relationships.
Destructive levels of narcissism
While narcissism, in and of itself, can be considered a normal personality trait, high levels of narcissistic behavior can be harmful to both self and others. Destructive narcissism is the constant exhibition of a few of the intense characteristics usually associated with pathological narcissistic personality disorder such as a "pervasive pattern of grandiosity", which is characterized by feelings of entitlement and superiority, arrogant or haughty behaviors, and a generalized lack of empathy and concern for others. On a spectrum, destructive narcissism is more extreme than healthy narcissism but not as extreme as the pathological condition.
Pathological levels of narcissism
Extremely high levels of narcissistic behavior are considered pathological. The pathological condition of narcissism is a magnified, extreme manifestation of healthy narcissism. It manifests itself in the inability to love others, lack of empathy, emptiness, boredom, and an unremitting need to search for power, while making the person unavailable to others. The clinical theorists Kernberg, Kohut, and Theodore Millon all saw pathological narcissism as a possible outcome in response to unempathetic and inconsistent early childhood interactions. They suggested that narcissists try to compensate in adult relationships. German psychoanalyst Karen Horney (1885–1952) also saw the narcissistic personality as a temperament trait molded by a certain kind of early environment.
Heritability
Heritability studies using twins have shown that narcissistic traits, as measured by standardized tests, are often inherited. Narcissism was found to have a high heritability score (0.64) indicating that the concordance of this trait in the identical twins was significantly influenced by genetics as compared to an environmental causation. It has also been shown that there is a continuum or spectrum of narcissistic traits ranging from normal to a pathological personality. Furthermore, evidence suggests that individual elements of narcissism have their own heritability score. For example, intrapersonal grandiosity has a score of 0.23, and interpersonal entitlement has a score of 0.35. While the genetic impact on narcissism levels is significant, it is not the only factor at play.
Expressions of narcissism
Primary expressions
Two primary expressions of narcissism have been identified: grandiose ("thick-skinned") and vulnerable ("thin-skinned"). Recent accounts posit that the core of narcissism is self-centred antagonism (or "entitled self-importance"), namely selfishness, entitlement, lack of empathy, and devaluation of others. Grandiosity and vulnerability are seen as different expressions of this antagonistic core, arising from individual differences in the strength of the approach and avoidance motivational systems. Some researchers have posited that genuine narcissists may fall into the vulnerable narcissism subtype, whereas grandiose narcissism might be a form of psychopathy.
Grandiose narcissism
Narcissistic grandiosity is thought to arise from a combination of the antagonistic core with temperamental boldness—defined by positive emotionality, social dominance, reward-seeking and risk-taking. Grandiosity is defined—in addition to antagonism—by a confident, exhibitionistic and manipulative self-regulatory style:
High self-esteem and a clear sense of uniqueness and superiority, with fantasies of success and power, and lofty ambitions.
Social potency, marked by exhibitionistic, authoritative, charismatic and self-promoting interpersonal behaviors.
Exploitative, self-serving relational dynamics; short-term relationship transactions defined by manipulation and privileging of personal gain over other benefits of socialization.
Vulnerable narcissism
Narcissistic vulnerability is thought to arise from a combination of the antagonistic core with temperamental reactivity—defined by negative emotionality, social avoidance, passivity and marked proneness to rage. Vulnerability is defined—in addition to antagonism—by a shy, vindictive and needy self-regulatory style:
Low and contingent self-esteem, unstable and unclear sense of self, and resentment of others' success
Social withdrawal, resulting from shame, distrust of others' intentions, and concerns over being accepted
Needy, obsessive relational dynamics; long-term relationship transactions defined by an excessive need for admiration, approval and support, and vengefulness when needs are unmet
Other expressions
Sexual
Sexual narcissism has been described as an egocentric pattern of sexual behavior that involves an inflated sense of sexual ability or sexual entitlement, sometimes in the form of extramarital affairs. This can be overcompensation for low self-esteem or an inability to sustain true intimacy.
While this behavioral pattern is believed to be more common in men than in women, it occurs in both males and females who compensate for feelings of sexual inadequacy by becoming overly proud or obsessed with their masculinity or femininity.
The controversial condition referred to as "sexual addiction" is believed by some experts to be sexual narcissism or sexual compulsivity, rather than an addictive behavior.
Parental
Narcissistic parents often see their children as extensions of themselves and encourage the children to act in ways that support the parents' emotional and self-esteem needs. Due to their vulnerability, children may be significantly affected by this behavior. To meet the parents' needs, the child may sacrifice their own wants and feelings. A child subjected to this type of parenting may struggle in adulthood with their intimate relationships.
In extreme situations, this parenting style can result in estranged relationships with the children, coupled with feelings of resentment, and in some cases, self-destructive tendencies.
The origins of narcissism in children are often explained by social learning theory, which proposes that social behavior is learned by observing and imitating others' behavior. This suggests that children may grow up to be narcissistic when their parents overvalue them.
Workplace
There is a compulsion of some professionals to constantly assert their competence, even when they are wrong. Professional narcissism can lead otherwise capable, and even exceptional, professionals to fall into narcissistic traps. "Most professionals work on cultivating a self that exudes authority, control, knowledge, competence and respectability. It's the narcissist in us all—we dread appearing stupid or incompetent."
Executives are often provided with potential narcissistic triggers. Inanimate triggers include status symbols like company cars, company-issued smartphone, or prestigious offices with window views; animate triggers include flattery and attention from colleagues and subordinates.
Narcissism has been linked to a range of potential leadership problems ranging from poor motivational skills to risky decision making, and in extreme cases, white-collar crime. High-profile corporate leaders that place an extreme emphasis on profits may yield positive short-term benefits for their organizations, but ultimately it drags down individual employees as well as entire companies.
Subordinates may find everyday offers of support swiftly turn them into enabling sources, unless they are very careful to maintain proper boundaries.
Studies examining the role of personality in the rise to leadership have shown that individuals who rise to leadership positions can be described as inter-personally dominant, extraverted, and socially skilled. When examining the correlation of narcissism in the rise to leadership positions, narcissists who are often inter-personally dominant, extraverted, and socially skilled, were also likely to rise to leadership but were more likely to emerge as leaders in situations where they were not known, such as in outside hires (versus internal promotions). Paradoxically, narcissism can present as characteristics that facilitate an individual's rise to leadership, and ultimately lead that person to underachieve or even to fail.
Narcissism can also create problems in the general workforce. For example, individuals high in narcissism inventories are more likely to engage in counterproductive behavior that harms organizations or other people in the workplace. Aggressive (and counterproductive) behaviors tend to surface when self-esteem is threatened. Individuals high in narcissism have fragile self-esteem and are easily threatened. One study found that employees who are high in narcissism are more likely to perceive the behaviors of others in the workplace as abusive and threatening than individuals who are low in narcissism.
Relationships
Narcissism can have a profound impact on both personal and professional relationships, often creating toxic dynamics. In romantic relationships, narcissistic individuals typically demand attention and admiration from their partner while offering little in return. They often fail to show empathy or concern for their partner’s emotional needs, focusing instead on fulfilling their own desires. The narcissist’s behavior can shift dramatically, alternating between idealizing their partner—viewing them as perfect—and devaluing them when the narcissist no longer feels validated. This inconsistency can cause emotional confusion and distress for the partner, leaving them feeling undervalued and emotionally drained.
Celebrity
Celebrity narcissism (sometimes referred to as acquired situational narcissism) is a form of narcissism that develops in late adolescence or adulthood, brought on by wealth, fame and the other trappings of celebrity. Celebrity narcissism develops after childhood, and is triggered and supported by the celebrity-obsessed society. Fans, assistants and tabloid media all play into the idea that the person really is vastly more important than other people, triggering a narcissistic problem that might have been only a tendency, or latent, and helping it to become a full-blown personality disorder. "Robert Millman says that what happens to celebrities is that they get so used to people looking at them that they stop looking back at other people." In its most extreme presentation and symptoms, it is indistinguishable from narcissistic personality disorder, differing only in its late onset and its environmental support by large numbers of fans. "The lack of social norms, controls, and of people centering them makes these people believe they're invulnerable", so that the person may suffer from unstable relationships, substance abuse or erratic behaviors.
Social media
Social media has played a significant role in shaping and amplifying narcissistic behaviors in recent years. Platforms such as Instagram and TikTok encourage users to share content that emphasizes their personal achievements and appearance, often rewarding those who gain the most likes and followers. Narcissistic individuals are more likely to use these platforms for self-promotion and validation. The trend of posting selfies and curated images is particularly prevalent among individuals who seek external approval to boost their self-esteem. The constant feedback from social media algorithms, which prioritize highly engaging content, further fuels narcissistic tendencies. While this can lead to increased attention and admiration, it can also create emotional instability. Narcissists often experience negative feelings, such as anxiety or depression, when they do not receive the validation they expect. This pressure to maintain an idealized online persona can lead to emotional distress, especially when their real-world interactions do not match the image they present online.
Dark triad
Narcissism is one of the three traits in the dark triad model. The dark triad of personality traits – narcissism, Machiavellianism, and psychopathy – shows how narcissism relates to manipulative behaviors and a lack of empathy. Narcissism has variously been correlated with both of the other traits, though psychologists such as Delroy Paulhus and Kevin Williams see enough evidence that it is a distinct trait. However, researchers who criticize the dark triad model note that many of the theoretical characteristics that are said to separate psychopathy, Machiavellianism and narcissism from each other do not appear in empirical research.
Collective narcissism
Collective narcissism is a type of narcissism where an individual has an inflated self-love of their own group. While the classic definition of narcissism focuses on the individual, collective narcissism asserts that one can have a similar excessively high opinion of a group, and that a group can function as a narcissistic entity. Collective narcissism is related to ethnocentrism; however, ethnocentrism primarily focuses on self-centeredness at an ethnic or cultural level, while collective narcissism is extended to any type of ingroup beyond just cultures and ethnicities.
Normalization of narcissistic behaviors
Some commentators contend that the American populace has become increasingly narcissistic since the end of World War II. According to sociologist Charles Derber, people pursue and compete for attention on an unprecedented scale. The profusion of popular literature about "listening" and "managing those who talk constantly about themselves" suggests its pervasiveness in everyday life. The growth of media phenomena such as "reality TV" programs and social media is generating a "new era of public narcissism".
Also supporting the contention that American culture has become more narcissistic is an analysis of US popular song lyrics between 1987 and 2007. This found a growth in the use of first-person singular pronouns, such as I, me, my, and mine, reflecting a greater focus on the self, and also of references to antisocial behavior; during the same period, there was a diminution of words reflecting a focus on others, positive emotions, and social interactions. References to narcissism and self-esteem in American popular print media have experienced vast inflation since the late 1980s. Between 1987 and 2007 direct mentions of self-esteem in leading US newspapers and magazines increased by 4,540 percent while narcissism, which had been almost non-existent in the press during the 1970s, was referred to over 5,000 times between 2002 and 2007.
Individualistic vs collectivist national cultures
Similar patterns of change in cultural production are observable in other Western states. For example, a linguistic analysis of the largest circulation Norwegian newspaper found that the use of self-focused and individualistic terms increased in frequency by 69 per cent between 1984 and 2005 while collectivist terms declined by 32 per cent.
One study examined differences in advertising between an individualistic culture, the United States, and a collectivist culture, South Korea, and found that advertising in the US tended to stress the distinctiveness and uniqueness of the person, whereas advertising in South Korea stressed the importance of social conformity and harmony. These cultural differences were greater than the effects of individual differences within national cultures.
Controversies
There has been increasing interest in narcissism and narcissistic personality disorder (NPD) over the last 10 years. Areas of substantial debate surrounding the subject include:
Clearly defining the difference between normal and pathological narcissism,
Understanding the role of self-esteem in narcissism,
Reaching a consensus on the classifications and definitions of sub-types such as "grandiose" and "vulnerable dimensions" or variants of these,
Understanding the central versus peripheral and primary versus secondary features or characteristics of narcissism,
Determining whether a consensual description exists,
Agreeing on the etiological factors,
Deciding which field or discipline should study narcissism,
Agreeing on how it should be assessed and measured, and
Agreeing on its representation in textbooks and classification manuals.
The extent of the controversy was on public display in 2010–2013, when the committee on personality disorders for the 5th edition (2013) of the Diagnostic and Statistical Manual of Mental Disorders recommended the removal of narcissistic personality disorder from the manual. A contentious three-year debate unfolded in the clinical community, with one of the sharpest critics being John G. Gunderson, the person who had led the DSM personality disorders committee for the 4th edition of the manual.
See also
Compensation
Empathy
Entitlement
Grandiosity
Self-esteem
References
Further reading
1889 introductions
1890s neologisms
Barriers to critical thinking
Social influence
Egoism
Words and phrases derived from Greek mythology | Narcissism | [
"Biology"
] | 4,600 | [
"Behavior",
"Narcissism",
"Human behavior"
] |
5,606,338 | https://en.wikipedia.org/wiki/AES51 | AES51 is a standard first published by the Audio Engineering Society in June 2006 that specifies a method of carrying Asynchronous Transfer Mode (ATM) cells over an Ethernet physical layer, intended in particular for use with AES47 to carry the AES3 digital audio transport structure. Its purpose is to provide an open, Ethernet-based standard for networking linear (uncompressed) digital audio with extremely high quality of service alongside standard Internet Protocol connections.
This standard specifies a method, also known as "ATM-E", of carrying ATM cells over hardware specified for IEEE 802.3 (Ethernet). It is intended as a companion standard to AES47 (Transmission of digital audio over ATM networks), to provide a standard method of carrying ATM cells and real-time clock over hardware specified for Ethernet.
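AES51 itself defines the exact framing and timing; purely as an illustrative sketch of the general idea (packing fixed-size 53-byte ATM cells into an Ethernet payload), and not the normative AES51 format, a fragment might look like the following. The EtherType value and the cells-per-frame limit are assumptions made for the example.

```python
# Illustrative sketch only: packs 53-byte ATM cells into an Ethernet payload.
# The EtherType (0x88B5, reserved for local experimental use) and the
# cells-per-frame limit are assumptions for this example, not AES51 values.

ATM_CELL_SIZE = 53          # 5-byte header + 48-byte payload
EXPERIMENTAL_ETHERTYPE = 0x88B5
MAX_CELLS_PER_FRAME = 28    # keeps the payload under the 1500-byte Ethernet limit

def build_frame(dst_mac: bytes, src_mac: bytes, cells: list[bytes]) -> bytes:
    """Concatenate ATM cells behind a minimal Ethernet II header."""
    if len(cells) > MAX_CELLS_PER_FRAME:
        raise ValueError("too many cells for one frame")
    for cell in cells:
        if len(cell) != ATM_CELL_SIZE:
            raise ValueError("ATM cells are always 53 bytes")
    header = dst_mac + src_mac + EXPERIMENTAL_ETHERTYPE.to_bytes(2, "big")
    return header + b"".join(cells)
```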
References
Networking standards
Broadcast engineering
Digital audio
Audio network protocols
Ethernet
Audio Engineering Society standards
Asynchronous Transfer Mode | AES51 | [
"Technology",
"Engineering"
] | 192 | [
"Broadcast engineering",
"Asynchronous Transfer Mode",
"Computer standards",
"Computer networks engineering",
"Audio Engineering Society standards",
"Electronic engineering",
"Networking standards",
"Audio engineering"
] |
5,606,669 | https://en.wikipedia.org/wiki/Phototoxin | Phototoxins are toxins that can cause allergic reactions in particularly susceptible individuals and which can cause dangerous photosensitivity in a much broader range of subjects.
Phototoxins are common in:
a variety of plants (including food plants where they may be a biological defence):
many citruses contain essential oils that are photosensitizers;
some herbal remedies (notably St John's wort, though incident rates for this plant are reportedly low);
the carrot family of Apiaceae;
some prescribed medications (such as tetracycline antibiotics); and
many essential oils, perfumes and cosmetics.
Ingested medications may cause systemic photosensitivity and topically applied medications, cosmetics and essential oils may lead to local (or perhaps systemic) photosensitivity. Para-aminobenzoic acid (PABA), found in some sunscreens, can also cause photosensitivity.
Upon exposure to light, notably light containing ultraviolet radiation, discolouration of the skin (whether as inflammation, lightening or darkening) or rashes may result. In extreme cases, blistering may also occur.
Uses
The marigold plant produces the phototoxin alpha-terthienyl, which functions as a nematicide. When exposed to near-ultraviolet light, such as in sunlight, alpha-terthienyl generates toxic singlet oxygen. Alpha-terthienyl damages the respiratory, digestive and nervous systems of larvae, resulting in 100% mortality at concentrations of 33 ppb. This makes it an interesting natural insecticide.
Rose bengal and other singlet oxygen generating phototoxins are also used in synthetic organic chemistry. They have also found use in photodynamic therapy, where the toxin is activated by intense light to destroy cancer cells.
References
See also
Photodermatitis
Photosensitivity in animals
Photoallergy
Toxins | Phototoxin | [
"Environmental_science"
] | 390 | [
"Toxins",
"Toxicology"
] |
5,606,690 | https://en.wikipedia.org/wiki/MPEG%20elementary%20stream | An elementary stream (ES) as defined by the MPEG communication protocol is usually the output of an audio encoder or video encoder. An ES contains only one kind of data (e.g. audio, video, or closed caption). An elementary stream is often referred to as "elementary", "data", "audio", or "video" bitstreams or streams. The format of the elementary stream depends upon the codec or data carried in the stream, but will often carry a common header when packetized into a packetized elementary stream.
Header for MPEG-2 video elementary stream
General layout of MPEG-1 audio elementary stream
The digitized sound signal is divided up into blocks of 384 samples in Layer I and 1152 samples in Layers II and III. The sound sample block is encoded within an audio frame:
header
error check
audio data
ancillary data
The header of a frame contains general information such as the MPEG layer, the sampling frequency, the number of channels, whether the frame is CRC protected, and whether the sound is the original.
Although most of this information may be the same for all frames, MPEG decided to give each audio frame such a header in order to simplify synchronization and bitstream editing.
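As a rough illustration of how such a header can be read, the following sketch decodes a few of the fields named above (layer, sampling frequency, CRC protection, the "original" flag) from the first four bytes of a frame. The bit layout follows the commonly documented MPEG-1 header format and should be verified against ISO/IEC 11172-3 before any real use.

```python
# Minimal sketch of parsing an MPEG-1 audio frame header (first 4 bytes).
# Bit positions follow the commonly documented layout of ISO/IEC 11172-3;
# verify against the standard before relying on this.

LAYERS = {0b11: "Layer I", 0b10: "Layer II", 0b01: "Layer III"}
SAMPLING_HZ = {0b00: 44100, 0b01: 48000, 0b10: 32000}
SAMPLES_PER_FRAME = {"Layer I": 384, "Layer II": 1152, "Layer III": 1152}

def parse_mpeg1_header(first4: bytes) -> dict:
    word = int.from_bytes(first4, "big")
    if (word >> 21) & 0x7FF != 0x7FF:
        raise ValueError("frame sync (11 set bits) not found")
    layer = LAYERS[(word >> 17) & 0b11]
    return {
        "layer": layer,
        "samples_per_frame": SAMPLES_PER_FRAME[layer],
        "sampling_rate_hz": SAMPLING_HZ[(word >> 10) & 0b11],
        "crc_protected": (word >> 16) & 1 == 0,   # 0 means a 16-bit CRC follows
        "channels": 1 if (word >> 6) & 0b11 == 0b11 else 2,  # 11 = single channel
        "original": bool((word >> 2) & 1),
    }
```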
See also
MP3
Packetized elementary stream
MPEG program stream
MPEG transport stream
External links
ISO/IEC 11172-3:1993: Information technology -- Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s -- Part 3: Audio
MPEG | MPEG elementary stream | [
"Technology"
] | 325 | [
"Multimedia",
"MPEG"
] |
5,607,325 | https://en.wikipedia.org/wiki/Indole%20alkaloid | Indole alkaloids are a class of alkaloids containing a structural moiety of indole; many indole alkaloids also include isoprene groups and are thus called terpene indole or secologanin tryptamine alkaloids. Containing more than 4100 known different compounds, it is one of the largest classes of alkaloids. Many of them possess significant physiological activity and some of them are used in medicine. The amino acid tryptophan is the biochemical precursor of indole alkaloids.
History
The action of some indole alkaloids has been known for ages. The Aztecs used psilocybin mushrooms, which contain the alkaloids psilocybin and psilocin. The flowering plant Rauvolfia serpentina, which contains reserpine, was a common medicine in India around 1000 BC. Africans used the roots of the perennial rainforest shrub Iboga, which contain ibogaine, as a stimulant. An infusion of Calabar bean seeds was given to people accused of crime in Nigeria: vomiting it up was regarded as a sign of innocence; otherwise, the person was killed by the action of physostigmine, which is present in the plant and causes paralysis of the heart and lungs.
Consumption of rye and related cereals contaminated with the fungus Claviceps purpurea causes ergot poisoning and ergotism in humans and other mammals. The relationship between ergot and ergotism was established only in 1717, and the alkaloid ergotamine, one of the main active ingredients of ergot, was isolated in 1918.
The first indole alkaloid, strychnine, was isolated by Pierre Joseph Pelletier and Joseph Bienaimé Caventou in 1818 from the plants of the genus Strychnos. The correct structural formula of strychnine was determined only in 1947, although the presence of the indole nucleus in the structure of strychnine was established somewhat earlier. Indole itself was first obtained by Adolf von Baeyer in 1866 while decomposing Indigo.
Classification
Indole alkaloids are distinguished depending on their biosynthesis. The two types of indole alkaloids are isoprenoids and non-isoprenoids. The former include terpenoid structural elements, synthesized by living organisms from dimethylallyl pyrophosphate (DMAPP) and/or isopentenyl pyrophosphate (IPP):
Non-isoprenoid:
Simple derivatives of indole
Simple derivatives of β-carboline
Pyrroloindole alkaloids
Indole-3-carbinol
Indole-3-acetic acid
Tryptamines
Carbazoles
Isoprenoid:
hemiterpenoids: ergot alkaloids
monoterpenoids.
Strictosidine
Catharanthine
Yohimbine
Vinca
Strychnine
Ellipticine
There are also purely structural classifications based on the presence of carbazole, β-carboline or other units in the carbon skeleton of the alkaloid molecule. Some 200 dimeric indole alkaloids are known with two indole groups.
Non-isoprenoid indole alkaloids
The number of known non-isoprenoid indole alkaloids is small compared to the number of isoprenoid ones.
Simple indole derivatives
Among the simplest and yet most widespread indole derivatives are the biogenic amines tryptamine and 5-hydroxytryptamine (serotonin). Although their classification as alkaloids is not universally accepted, they are both found in plants and animals. The tryptamine skeleton is part of the vast majority of indole alkaloids. For example, N,N-dimethyltryptamine (DMT), psilocin and its phosphorylated form, psilocybin, are the simplest derivatives of tryptamine. Some simple indole alkaloids do not contain tryptamine, such as gramine and glycozoline (the latter is a derivative of carbazole). Camalexin is a simple indole alkaloid produced by the plant Arabidopsis thaliana, which is often used as a model organism in plant biology.
Simple derivatives of β-carboline
The prevalence of β-carboline alkaloids is associated with the ease of forming the β-carboline core from tryptamine in the intramolecular Mannich reaction. Simple (non-isoprenoid) β-carboline derivatives include harmine, harmaline, harmane and a slightly more complex structure of canthinone. Harmaline was first isolated in 1838 by Göbel and harmine in 1848 by Fritzche.
Pyrroloindole alkaloids
Pyrroloindole alkaloids form a relatively small group of tryptamine derivatives. They are produced by methylation of the indole nucleus at position 3 and subsequent nucleophilic addition at the carbon atom in position 2, with closure of the ethylamino group into a ring. A typical representative of this group is physostigmine, which was isolated by Jobst and Hesse in 1864.
Isoprenoid indole alkaloids
Isoprenoid indole alkaloids include residues of tryptophan or tryptamine and isoprenoid building blocks derived from the dimethylallyl pyrophosphate and isopentenyl pyrophosphate.
Ergot alkaloids
Ergot alkaloids are a class of hemiterpenoid indole alkaloids related to lysergic acid, which, in turn, is formed in multistage reactions involving tryptophan and DMAPP. Many ergot alkaloids are amides of lysergic acid. The simplest such amide is ergine; more complex ones can be divided into the following groups:
Water-soluble aminoalcohol derivatives, such as ergometrine and its isomer ergometrinine
Water-insoluble polypeptide derivatives:
Ergotamine group, including ergotamine, ergosine and their isomers
Ergoxine groups, including ergostine, ergoptine, ergonine and their isomers
Ergotoxine group, including ergocristine, α-ergocryptine, β-ergocryptine, ergocornine and their isomers.
Ergotinine, discovered in 1875, and ergotoxine (1906) were subsequently proven to be mixtures of several alkaloids. In pure form, the first ergot alkaloids, ergotamine and its isomer ergotaminine, were isolated by Arthur Stoll in 1918.
Monoterpenoid indole alkaloids or secologanin tryptamine alkaloids
Most monoterpenoid indole alkaloids include a nine- or ten-carbon fragment originating from secologanin, and its configuration allows grouping into the Corynanthe, Iboga and Aspidosperma classes. The monoterpenoid part of the carbon skeleton can be traced, for example, in the alkaloids ajmalicine and catharanthine; in alkaloids that contain a C9 fragment rather than C10, one carbon atom of the secologanin-derived unit is absent.
Corynanthe alkaloids include the unaltered skeleton of secologanin, which is modified in Iboga and Aspidosperma alkaloids. Representative monoterpenoid indole alkaloids include strictosidine, catharanthine, yohimbine, strychnine and ellipticine.
There is also a small group of about 30 alkaloids, present in the plant Aristotelia and containing a monoterpenoid C10 part that does not originate from secologanin; the most important of these is peduncularine.
Bisindole alkaloids
Bisindole alkaloids are, loosely speaking, dimers of strictosidine-derived indole bases, although their structures can be more complicated than simple dimers. More than 200 dimeric indole alkaloids are known. They are produced in living organisms through dimerization of monomeric indole bases, in the following reactions:
Mannich reaction (voacamine)
Michael reaction (villalstonine)
Condensation of aldehydes with amines (toxiferine, calebassine)
Oxidative coupling of tryptamines (calicantine);
Splitting of the functional group of one of the monomers (vinblastine, vincristine).
Apart from bisindole alkaloids, dimeric alkaloids exist which are formed via dimerization of the indole monomer with another type of alkaloid. An example is tubulosine consisting of indole and isoquinoline fragments.
Distribution in nature
The plants that are rich in non-isoprenoid indole alkaloids include harmal (Peganum harmala), which contains harmane, harmine and harmaline, and the Calabar bean (Physostigma venenosum), which contains physostigmine. Some members of the family Convolvulaceae, in particular Ipomoea violacea and Turbina corymbosa, contain ergolines and lysergamides. Despite their considerable structural diversity, most monoterpenoid indole alkaloids are localized in three families of dicotyledonous plants: Apocynaceae (genera Alstonia, Aspidosperma, Rauvolfia and Catharanthus), Rubiaceae (Corynanthe) and Loganiaceae (Strychnos).
Indole alkaloids are also present in fungi. For example, psilocybin mushrooms contain derivatives of tryptamine, and Claviceps contains derivatives of lysergic acid. The skin of many toad species of the genus Bufo contains a derivative of tryptamine, bufotenin, and the skin and venom of the species Bufo alvarius (Colorado River toad) contain 5-MeO-DMT. Serotonin, which is an important neurotransmitter in mammals, can also be regarded as a simple indole alkaloid.
Biosynthesis
The biogenetic precursor of all indole alkaloids is the amino acid tryptophan. For most of them, the first step of synthesis is decarboxylation of tryptophan to form tryptamine. Dimethyltryptamine (DMT) is formed from tryptamine by methylation with the participation of the coenzyme S-adenosyl methionine (SAM). Psilocin is produced by spontaneous dephosphorylation of psilocybin.
In the biosynthesis of serotonin, the intermediate product is not tryptamine but 5-hydroxytryptophan, which is in turn decarboxylated to form 5-hydroxytryptamine (serotonin).
Biosynthesis of β-carboline alkaloids occurs through the formation of a Schiff base from tryptamine and an aldehyde (or keto acid) and a subsequent intramolecular Mannich reaction, where the C(2) carbon atom of indole serves as a nucleophile. The aromaticity is then restored via the loss of a proton at the C(2) atom. The resulting tetrahydro-β-carboline skeleton then gradually oxidizes to dihydro-β-carboline and β-carboline. In the formation of simple β-carboline alkaloids, such as harmine and harmaline, pyruvic acid acts as the keto acid. In the synthesis of monoterpenoid indole alkaloids, secologanin plays the role of the aldehyde. Pyrroloindole alkaloids are synthesized in living organisms in a similar way.
Biosynthesis of ergot alkaloids begins with the alkylation of tryptophan by dimethylallyl pyrophosphate (DMAPP), where the carbon atom C(4) in the indole nucleus plays the role of the nucleophile. The resulting 4-dimethylallyl-L-tryptophan undergoes N-methylation. Further products of biosynthesis are chanoclavine-I and agroclavine – the latter is hydroxylated to elymoclavine, which in turn oxidizes into paspalic acid. In the process of allyl rearrangement, paspalic acid is converted to lysergic acid.
Biosynthesis of monoterpenoid indole alkaloids begins with the Mannich reaction of tryptamine and secologanin; it yields strictosidine which is converted to 4,21-dehydrogeissoschizine. Then, the biosynthesis of most alkaloids containing the unperturbed monoterpenoid part (Corynanthe type) proceeds through cyclization with the formation of cathenamine and subsequent reduction to ajmalicine in the presence of nicotinamide adenine dinucleotide phosphate (NADPH). In the biosynthesis of other alkaloids, 4,21-dehydrogeissoschizine first converts into preakuammicine (an alkaloid of subtype strychnos, type Corynanthe) which gives rise to other alkaloids of subtype strychnos and of the types Iboga and Aspidosperma. Bisindole alkaloids vinblastine and vincristine are produced in the reaction involving catharanthine (alkaloid of type Iboga) and vindolin (type Aspidosperma).
Physiological activity
Indole alkaloids act on the central and peripheral nervous systems. In addition, the bisindole alkaloids vinblastine and vincristine show antineoplastic effects.
Because of structural similarities with serotonin, many tryptamines can interact with serotonin 5-HT receptors. The main effect of the serotonergic psychedelics such as LSD, DMT, and psilocybin is related to them being agonists of the 5-HT2A receptors. In contrast, gramine is an antagonist of the 5-HT2A receptor.
Ergolines, such as lysergic acid, include structural elements of both tryptamine and phenylethylamine and thus act on the whole group of the 5-HT receptors, adrenoceptors (mostly of type α) and dopamine receptors (mostly type D2). So ergotamine is a partial agonist of α-adrenergic and 5-HT2 receptors, and thus narrows blood vessels and stimulates constriction of the uterus. Dihydroergotamine is more selective to α-adrenergic receptors and has a weaker effect on serotonin receptors. Ergometrine is an agonist of α-adrenergic, 5-HT2 and partly D2 receptors. Compared with other ergot alkaloids, ergometrine has a greater selectivity in stimulating the uterus. LSD, a semi-synthetic psychedelic ergoline, is an agonist of 5-HT2A, 5-HT1A and to a lesser extent D2 receptors and has a powerful psychedelic effect.
Some monoterpenoid indole alkaloids also interact with adrenoceptors. For example, ajmalicine is a selective antagonist of α1-adrenergic receptors and therefore has antihypertensive action. Yohimbine is more selective to α2 adrenoceptor; by blocking presynaptic α2-adrenoceptors, it increases the release of norepinephrine thereby raising the blood pressure. Yohimbine was used for the treatment of erectile dysfunction in men until emergence of more efficient drugs.
Some alkaloids affect the turnover of monoamines indirectly. For example, harmine and harmaline are reversible selective inhibitors of monoamine oxidase-A. Reserpine reduces the concentration of monoamines in presynaptic neurons and synapses, thereby inducing antihypertensive and antipsychotic effects.
Some indole alkaloids interact with other types of receptors. Mitragynine is an agonist of the μ-opioid receptor. Harmala alkaloids are antagonists of the GABAA receptor, and ibogaine of NMDA receptors. Physostigmine is a reversible acetylcholinesterase inhibitor.
Applications
Plants and fungi that contain indole alkaloids have a long history of use in traditional medicine. Rauvolfia serpentina, which contains reserpine as the active substance, was used for over 3000 years in India to treat snake bites and insanity. In medieval Europe, extracts of ergot were used in medical abortion.
Later, the plants were joined by pure preparations of indole alkaloids. Reserpine was the second (after chlorpromazine) antipsychotic drug; however, it showed relatively weak action and strong side effects, and is not used for this purpose any longer. Instead, it is prescribed as an antihypertensive drug, often in combination with other substances.
Other drugs that affect the cardiovascular system include ajmaline, which is a Class I antiarrhythmic agents, and ajmalicine, which is used in Europe as an antihypertensive drug. Physostigmine – an inhibitor of acetylcholinesterase – and its synthetic analogs are used in the treatment of glaucoma, Alzheimer's disease (rivastigmine) and myasthenia (neostigmine, pyridostigmine, distigmine). Ergot alkaloids ergometrine (ergobazin, ergonovine), ergotamine and their synthetic derivatives (methylergometrine) are applied against uterine bleeding, and bisindole alkaloids vinblastine and vincristine are antitumor agents.
Animal studies have shown that ibogaine has potential in treating heroin, cocaine, and alcohol addictions, which is associated with ibogaine's antagonism of NMDA receptors. Medical use of ibogaine is hindered by its legal status, as it is banned in many countries as a powerful psychedelic drug with dangerous consequences of overdose. However, illegal networks in Europe and the United States provide ibogaine for treating drug addiction.
Since ancient times, plants containing indole alkaloids have been used as psychedelic drugs. The Aztecs used, and the Mazatec people continue to use, psilocybin mushrooms and the psychoactive seeds of morning glory species such as Ipomoea tricolor. Amazonian tribes use the psychedelic infusion ayahuasca, made from Psychotria viridis and Banisteriopsis caapi. Psychotria viridis contains the psychedelic drug DMT, while Banisteriopsis caapi contains harmala alkaloids, which act as monoamine oxidase inhibitors. It is believed that the main function of the harmala alkaloids in ayahuasca is to prevent the metabolization of DMT in the digestive tract and liver, so it can cross the blood–brain barrier, whereas the direct effect of harmala alkaloids on the central nervous system is minimal. The venom of the Colorado River toad, Bufo alvarius, may have been used as a psychedelic drug, its active constituents being 5-MeO-DMT and bufotenin. One of the most common recreational psychedelic drugs, LSD, is a semi-synthetic ergoline (which contains the indole moiety).
References
Bibliography | Indole alkaloid | [
"Chemistry"
] | 4,185 | [
"Alkaloids by chemical classification",
"Indole alkaloids"
] |
5,607,447 | https://en.wikipedia.org/wiki/Space%20medicine | Space medicine is a subspecialty of emergency medicine (entered via a fellowship training pathway) that evolved from the aerospace medicine specialty. It is dedicated to the prevention and treatment of medical conditions that would limit success in space operations. Space medicine focuses specifically on prevention, acute care, emergency medicine, wilderness medicine, and hyperbaric/hypobaric medicine in order to provide medical care for astronauts and spaceflight participants. The spaceflight environment poses many unique stressors to the human body, including G forces, microgravity, unusual atmospheres such as low pressure or high carbon dioxide, and space radiation. Space medicine applies space physiology, preventive medicine, primary care, emergency medicine, acute care medicine, austere medicine, public health, and toxicology to prevent and treat medical problems in space. This expertise is additionally used to inform vehicle systems design to minimize the risk to human health and performance while meeting mission objectives.
Astronautical hygiene is the application of science and technology to the prevention or control of exposure to the hazards that may cause astronaut ill health. Both these sciences work together to ensure that astronauts work in a safe environment. Medical consequences such as possible visual impairment and bone loss have been associated with human spaceflight.
In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.
History
Hubertus Strughold (1898–1987), a former Nazi physician and physiologist, was brought to the United States after World War II as part of Operation Paperclip. He first coined the term "space medicine" in 1948 and was the first and only Professor of Space Medicine at the School of Aviation Medicine (SAM) at Randolph Air Force Base, Texas. In 1949, Strughold was made director of the Department of Space Medicine at the SAM (which is now the US Air Force School of Aerospace Medicine (USAFSAM) at Wright-Patterson Air Force Base, Ohio. He played an important role in developing the pressure suit worn by early American astronauts. He was a co-founder of the Space Medicine Branch of the Aerospace Medical Association in 1950. The aeromedical library at Brooks AFB was named after him in 1977, but later renamed because documents from the Nuremberg War Crimes Tribunal linked Strughold to medical experiments in which inmates of the Dachau concentration camp were tortured and killed.
Soviet research into space medicine was centered at the Scientific Research Testing Institute of Aviation Medicine (NIIAM). In 1949, A.M. Vasilevsky, the Minister of Defense of the USSR, instructed NIIAM, on the initiative of Sergei Korolev, to conduct biological and medical research. In 1951, NIIAM began work on its first research project, entitled "Physiological and hygienic substantiation of flight capabilities in special conditions", which formulated the main research tasks and the necessary requirements for pressurized cabins, life support systems, rescue equipment, and control and recording equipment. At the Korolev design bureau, rockets were created for lifting animals to altitudes of 200–250 km and 500–600 km, and attention then turned to developing artificial satellites and launching a man into space. Then, in 1963, the Institute for Biomedical Problems (IMBP) was founded to undertake the study of space medicine.
Animal testing
Before sending humans, space agencies used animals to study the effects of space travel on the body. After several years of failed animal recoveries, an Aerobee rocket launch in September 1951 achieved the first safe return of a monkey and a group of mice from near-space altitudes. On 3 November 1957, Sputnik 2 became the first mission to carry a living animal to space, a dog named Laika. This flight and others suggested the possibility of safely flying in space within a controlled environment, and provided data on how living beings react to space flight. Later flights carried cameras to observe the animal subjects under in-flight conditions such as high-G and zero-G. Russian tests yielded further valuable physiological data from animal subjects.
On January 31, 1961, a chimpanzee named Ham was launched into a sub-orbital flight aboard a Mercury-Redstone Launch Vehicle. The flight was meant to model the planned mission of astronaut Alan Shepard. The mission planned to reach an altitude of 115 miles, and speeds up to 4400 miles per hour. However, the actual flight reached 157 miles and a maximum speed of 5857 miles per hour. During flight, Ham experienced 6.6 minutes of weightlessness. After splashing down in the Atlantic Ocean, Ham was recovered by the USS Donner. He suffered only limited injuries during flight, only receiving a bruised nose. Ham's vital signs were monitored and collected throughout the 16 minute flight, and used to develop life support systems for later human astronauts.
Animal testing in space continues currently, with mice, ants, and other animals regularly being sent to the International Space Station. In 2014, eight ant colonies were sent to the ISS to investigate the group behavior of ants in microgravity. The ISS allows for the investigation of animal behavior without sending them in specifically designed capsules.
North American X-15
Rocket-powered aircraft North American X-15 provided an early opportunity to study the effects of a near-space environment on human physiology. At its highest operational speed and altitude, the X-15 provided approximately five minutes of weightlessness. This opportunity allowed for the development of devices to facilitate working in low pressure, high acceleration environments such as pressure suits, and telemetering systems to collect physiological data. This data and technologies allowed for better mission planning for future space missions.
Project Mercury
Space medicine was a critical factor in the United States human space program, starting with Project Mercury. The main precaution taken by Mercury astronauts to defend against high G environments like launch and reentry was a couch with seat belts to make sure astronauts were not forcibly moved from their position. Additionally, experienced pilots proved to be better able to cope with high G scenarios. One of the pressing concerns with Project Mercury's mission environment was the isolated nature of the cabin. There were deeper concerns about psychological issues than there were about physiological health effects. Substantial animal testing proved beyond a reasonable doubt to NASA engineers that spaceflight could be done safely provided a climate controlled environment.
Project Gemini
The Gemini program primarily addressed the psychological issues from isolation in space with two crewmembers. Upon returning from space, it was recorded that crewmembers experienced a loss of balance and a decrease in anaerobic ability.
Project Apollo
The Apollo program began with a substantial basis of medical knowledge and precautions from both Mercury and Gemini. The understanding of high and low G environments was well documented and the effects of isolation had been addressed with Gemini and Apollo having multiple occupants in one capsule. The primary research of the Apollo Program focused on pre-flight and post-flight monitoring. Some Apollo mission plans were postponed or altered due to some or all crewmembers contracting a communicable disease. Apollo 14 instituted a form of quarantine for crewmembers so as to curb the passing of typical illnesses. While the efficacy of the Flight Crew Health Stabilization Program was questionable as some crewmembers still contracted diseases, the program showed enough results to maintain implementation with current space programs.
Effects of space-travel
In October 2018, NASA-funded researchers found that lengthy journeys into outer space, including travel to the planet Mars, may substantially damage the gastrointestinal tissues of astronauts. The studies support earlier work that found such journeys could significantly damage the brains of astronauts, and age them prematurely.
In November 2019, researchers reported that astronauts experienced serious blood flow and clot problems while on board the International Space Station, based on a six-month study of 11 healthy astronauts. The results may influence long-term spaceflight, including a mission to the planet Mars, according to the researchers.
Blood clots
Deep vein thrombosis of the internal jugular vein of the neck was first discovered in 2020 in an astronaut on a long duration stay on the ISS, requiring treatment with blood thinners. A subsequent study of eleven astronauts found slowed blood flow in the neck veins and even reversal of blood flow in two of the astronauts. NASA is currently conducting more research to study whether these abnormalities could predispose astronauts to blood clots.
Cardiac rhythms
Heart rhythm disturbances have been seen among astronauts. Most of these have been related to cardiovascular disease, but it is not clear whether this was due to pre-existing conditions or effects of space flight. It is hoped that advanced screening for coronary disease has greatly mitigated this risk. Other heart rhythm problems, such as atrial fibrillation, can develop over time, necessitating periodic screening of crewmembers’ heart rhythms. Beyond these terrestrial heart risks, some concern exists that prolonged exposure to microgravity may lead to heart rhythm disturbances. Although this has not been observed to date, further surveillance is warranted.
Decompression illness in spaceflight
In space, astronauts use a space suit, essentially a self-contained individual spacecraft, to do spacewalks, or extra-vehicular activities (EVAs). Spacesuits are generally inflated with 100% oxygen at a total pressure that is less than a third of normal atmospheric pressure. Eliminating inert atmospheric components such as nitrogen allows the astronaut to breathe comfortably, but also have the mobility to use their hands, arms, and legs to complete required work, which would be more difficult in a higher pressure suit.
After the astronaut dons the spacesuit, air is replaced by 100% oxygen in a process called a "nitrogen purge". In order to reduce the risk of decompression sickness, the astronaut must spend several hours "pre-breathing" at an intermediate nitrogen partial pressure, in order to let their body tissues outgas nitrogen slowly enough that bubbles are not formed. When the astronaut returns to the "shirt sleeve" environment of the spacecraft after an EVA, pressure is restored to whatever the operating pressure of that spacecraft may be, generally normal atmospheric pressure. Decompression illness in spaceflight consists of decompression sickness (DCS) and other injuries due to uncompensated changes in pressure, or barotrauma.
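A back-of-the-envelope calculation illustrates why the nitrogen purge and pre-breathe matter. The sketch below compares the nitrogen partial pressure of tissues saturated with ordinary sea-level air to an assumed suit pressure of roughly one third of an atmosphere; the pressures and the pre-breathe reduction factor are illustrative assumptions, not flight rules. The larger the ratio, the greater the tendency for dissolved nitrogen to form bubbles, and pre-breathing oxygen lowers the numerator before the suit is depressurized.

```python
# Illustrative calculation only; the pressures are assumptions, not flight rules.
SEA_LEVEL_KPA = 101.3       # approximate sea-level cabin pressure
N2_FRACTION = 0.78          # nitrogen fraction of ordinary air
SUIT_KPA = 30.0             # assumed suit pressure, ~1/3 of an atmosphere

def tissue_ratio(tissue_n2_kpa: float, suit_kpa: float) -> float:
    """Ratio of dissolved-nitrogen pressure to ambient suit pressure.

    Larger ratios mean a greater tendency for bubbles to form on
    depressurization; pre-breathing oxygen lowers tissue_n2_kpa.
    """
    return tissue_n2_kpa / suit_kpa

saturated_n2 = N2_FRACTION * SEA_LEVEL_KPA   # ~79 kPa before any pre-breathe
reduced_n2 = 0.6 * saturated_n2              # 40% reduction chosen arbitrarily for illustration
print(f"No pre-breathe:    R = {tissue_ratio(saturated_n2, SUIT_KPA):.2f}")
print(f"After pre-breathe: R = {tissue_ratio(reduced_n2, SUIT_KPA):.2f}")
```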
Decompression sickness
Decompression sickness is the injury to the tissues of the body resulting from the presence of nitrogen bubbles in the tissues and blood. This occurs due to a rapid reduction in ambient pressure causing the dissolved nitrogen to come out of solution as gas bubbles within the body. In space the risk of DCS is significantly reduced by using a technique to wash out the nitrogen in the body's tissues. This is achieved by breathing 100% oxygen for a specified period of time before donning the spacesuit, and is continued after a nitrogen purge. DCS may result from inadequate or interrupted pre-oxygenation time, or other factors including the astronaut's level of hydration, physical conditioning, prior injuries and age. Other risks of DCS include inadequate nitrogen purge in the EMU, a strenuous or excessively prolonged EVA, or a loss of suit pressure. Non-EVA crewmembers may also be at risk for DCS if there is a loss of spacecraft cabin pressure.
Symptoms of DCS in space may include chest pain, shortness of breath, cough or pain with a deep breath, unusual fatigue, lightheadedness, dizziness, headache, unexplained musculoskeletal pain, tingling or numbness, weakness in the extremities, or visual abnormalities.
Primary treatment principles consist of in-suit repressurization to re-dissolve nitrogen bubbles, 100% oxygen to re-oxygenate tissues, and hydration to improve the circulation to injured tissues.
Barotrauma
Barotrauma is the injury to the tissues of air filled spaces in the body as a result of differences in pressure between the body spaces and the ambient atmospheric pressure. Air filled spaces include the middle ears, paranasal sinuses, lungs and gastrointestinal tract. One would be predisposed by a pre-existing upper respiratory infection, nasal allergies, recurrent changing pressures, dehydration, or a poor equalizing technique.
Positive pressure in the air-filled spaces results from reduced barometric pressure during the depressurization phase of an EVA. It can cause abdominal distension, ear or sinus pain, decreased hearing, and dental or jaw pain. Abdominal distension can be treated by extending the abdomen, gentle massage, and encouraging the passing of flatus. Ear and sinus pressure can be relieved with passive release of positive pressure. Pretreatment for susceptible individuals can include oral and nasal decongestants, or oral and nasal steroids.
Negative pressure in air fill spaces results from increased barometric pressure during repressurization after an EVA or following a planned restoration of a reduced cabin pressure. Common symptoms include ear or sinus pain, decreased hearing, and tooth or jaw pain.
Treatment may include active positive pressure equalization of ears and sinuses, oral and nasal decongestants, or oral and nasal steroids, and appropriate pain medication if needed.
Decreased immune system functioning
Astronauts in space have weakened immune systems, which means that in addition to increased vulnerability to new exposures, viruses already present in the body, which would normally be suppressed, become active. In space, T-cells do not reproduce properly, and the cells that do exist are less able to fight off infection. NASA research is measuring the change in the immune systems of its astronauts as well as performing experiments with T-cells in space.
On April 29, 2013, scientists in Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways "not observed on Earth" and in ways that "can lead to increases in growth and virulence".
In March 2019, NASA reported that latent viruses in humans may be activated during space missions, adding possibly more risk to astronauts in future deep-space missions.
Increased infection risk
A 2006 Space Shuttle experiment found that Salmonella typhimurium, a bacterium that can cause food poisoning, became more virulent when cultivated in space. More recently, in 2017, bacteria were found to be more resistant to antibiotics and to thrive in the near-weightlessness of space. Microorganisms have been observed to survive the vacuum of outer space. Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on the ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts.
Effects of fatigue
Human spaceflight often requires astronaut crews to endure long periods without rest. Studies have shown that lack of sleep can cause fatigue that leads to errors while performing critical tasks. Also, individuals who are fatigued often cannot determine the degree of their impairment.
Astronauts and ground crews frequently suffer from the effects of sleep deprivation and circadian rhythm disruption. Fatigue due to sleep loss, sleep shifting and work overload could cause performance errors that put space flight participants at risk of compromising mission objectives as well as the health and safety of those on board.
Loss of balance
Leaving and returning to Earth's gravity causes “space sickness,” dizziness, and loss of balance in astronauts. By studying how changes can affect balance in the human body—involving the senses, the brain, the inner ear, and blood pressure—NASA hopes to develop treatments that can be used on Earth and in space to correct balance disorders. Until then, NASA's astronauts must rely on a medication called Midodrine (an “anti-dizzy” pill that temporarily increases blood pressure), and/or promethazine to help carry out the tasks they need to do to return home safely.
Loss of bone density
Spaceflight osteopenia is the bone loss associated with human spaceflight. The metabolism of calcium is limited in microgravity and will cause calcium to leak out of bones. After a 3–4 month trip into space, it takes about 2–3 years to regain lost bone density. New techniques are being developed to help astronauts recover faster. Research in the following areas holds the potential to aid the process of growing new bone:
Diet and Exercise changes may reduce osteoporosis.
Vibration Therapy may stimulate bone growth.
Medication could trigger the body to produce more of the protein responsible for bone growth and formation.
Loss of muscle mass
In space, muscles in the legs, back, spine, and heart weaken and waste away because they no longer are needed to overcome gravity, just as people lose muscle when they age due to reduced physical activity. Astronauts rely on research in the following areas to build muscle and maintain body mass:
Exercise may build muscle if at least two hours a day is spent doing resistance training routines.
Neuromuscular Electrical Stimulation as a method to prevent muscle atrophy.
Impairment of eyesight
During long space flight missions, astronauts may develop ocular changes and visual impairment collectively known as the Space Associated Neuro-ocular Syndrome (SANS). Such vision problems may be a major concern for future deep space flight missions, including a human mission to Mars.
Loss of mental abilities and risk of Alzheimer's disease
On December 31, 2012, a NASA-supported study reported that human spaceflight may harm the brain of astronauts and accelerate the onset of Alzheimer's disease.
On 2 November 2017, scientists reported that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Astronauts who took longer space trips were associated with greater brain changes.
Orthostatic intolerance
Under the influence of the Earth's gravity, blood and other body fluids are pulled towards the lower body when standing. When gravity is removed during space exploration, hydrostatic pressures throughout the body are removed, and the resulting change in blood distribution may be similar to that of lying down on Earth, where hydrostatic differences are minimized. Upon return to Earth, reduced blood volume from spaceflight results in orthostatic hypotension. Orthostatic tolerance after spaceflight has been greatly improved by fluid-loading countermeasures taken by astronauts before landing.
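To put a rough number on the hydrostatic gradient described above, the short calculation below estimates the blood pressure difference between head and feet for a person standing on Earth, using an approximate blood density and an assumed head-to-foot height of 1.3 m chosen purely for illustration; in microgravity this gradient essentially disappears.

```python
# Rough illustration of the hydrostatic pressure gradient lost in microgravity.
RHO_BLOOD = 1060.0      # kg/m^3, approximate density of blood
G_EARTH = 9.81          # m/s^2
HEIGHT_DIFF = 1.3       # m, assumed head-to-foot blood column for illustration
PA_PER_MMHG = 133.322

delta_p_pa = RHO_BLOOD * G_EARTH * HEIGHT_DIFF   # P = rho * g * h
print(f"Standing on Earth: ~{delta_p_pa / PA_PER_MMHG:.0f} mmHg head-to-foot difference")
print("In microgravity this gradient is essentially zero, shifting fluid headward.")
```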
Radiation effects
Soviet cosmonaut Valentin Lebedev, who spent 211 days in orbit during 1982 (an absolute record for stay in Earth's orbit), lost his eyesight to progressive cataract. Lebedev stated: “I suffered from a lot of radiation in space. It was all concealed back then, during the Soviet years, but now I can say that I caused damage to my health because of that flight.” On 31 May 2013, NASA scientists reported that a possible human mission to Mars may involve a great radiation risk based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011–2012.
Loss of kidney function
On 11 June 2024, researchers at University College London's Department of Renal Medicine reported that "Serious health risks emerge (with respect to the kidneys) the longer a person is exposed to Galactic Radiation and microgravity." In fact, based on their current research with mice, the researchers predicted that astronauts exposed to microgravity, reduced gravity, and galactic radiation for roughly three years on a Mars mission may have to return to Earth while attached to dialysis machines.
Sleep disorders
Spaceflight has been observed to disrupt physiological processes that influence sleep patterns in human beings. Astronauts exhibit asynchronized cortisol rhythmicity, dampened diurnal fluctuations in body temperature, and diminished sleep quality. Sleep pattern disruption in astronauts is a form of extrinsic (environmentally caused) circadian rhythm sleep disorder.
Spaceflight analogues
Biomedical research in space is expensive and logistically and technically complicated, and thus limited. Conducting medical research in space alone will not provide humans with the depth of knowledge needed to ensure the safety of inter-planetary travelers. Complementary to research in space is the use of spaceflight analogues. Analogues are particularly useful for the study of immunity, sleep, psychological factors, human performance, habitability, and telemedicine. Examples of spaceflight analogues include confinement chambers (Mars-500), sub-aqua habitats (NEEMO), and Antarctic (Concordia Station) and Arctic (FMARS and Haughton–Mars Project) stations.
Space medicine careers
Physicians in space medicine generally work in operations or research at NASA or, more recently, space companies that are flying private or commercial astronauts or spaceflight participants.
Research physicians study specific space medical problems, such as the Space Associated Neuro-ocular Syndrome, or focus on medical capabilities for future deep space exploration missions. Research physicians do not have clinical responsibilities in the care of astronauts and thereby are often not specialty-trained in Space Medicine.
Related degrees, areas of specialization, and certifications
There are currently only 3 fellowships in Space Medicine: University of Texas at Houston, UCLA, and Harvard.
See the Aerospace Medicine page for similar preventive medicine training pathways in aerospace medicine.
All of the above training programs should include training in the following areas:
Acute Care Medicine
Commercial Spaceflight Training
Flight Medicine
Interventional Radiology Procedures
Human Life Support Systems for Space
Emergency Medicine
Aerospace studies
Global Health
Hyperbaric and Hypobaric Medicine
Public Health
Disaster medicine
Prehospital medicine
Wilderness and extreme medicine
Space nursing
Space nursing is the nursing specialty that studies how space travel impacts human response patterns. Similar to space medicine, the specialty also contributes to knowledge about nursing care of earthbound patients.
Medicine in flight
Sleep medicine
The use of hypnotic sleep aids is widespread among astronauts, with one 10-year study finding that 75% of ISS and 78% of Space Shuttle crew members reported taking such medications while in space. Among astronauts who took hypnotic medications, the frequency of use was 52% of all nights. NASA allocates 8.5 hours of 'downtime' for sleep per day for astronauts aboard the ISS, but the average duration of sleep is only 6 hours. Poor sleep quality and quantity can compromise the daytime performance and attentiveness of space crew. As such, improving nighttime sleep has been a topic of NASA-funded research for more than half a century. The following pharmacological and environmental strategies have been investigated in the context of sleep in space:
Light therapy, involving exposure to visible light at varying intensities and wavelengths to entrain circadian rhythm, is a key topic of interest in NASA-funded research. Various photoreceptors in the human eye such as melanopsin, rhodopsin, and photopsin communicate with the suprachiasmatic nucleus (the master circadian pacemaker of the brain) to entrain circadian rhythm. Melanopsin photoreceptors are most sensitive to blue light wavelengths in the range of 470-490 nm. NASA has trialed and implemented rhythmic light panels on the ISS to help entrain the circadian rhythms of astronauts. NASA is soon to test more advanced light panels that change their output light intensity and wavelengths according to time of day, with red-tinted lights (wavelengths above roughly 600 nm) to be used at 'night' to provide visibility, and shorter wavelengths at high light intensity to be used in the 'morning' or at times when alertness and vigilance are needed (a schematic example of such a schedule is sketched after this list).
Melatonin, a naturally occurring hormone secreted by the pineal gland, has shown positive effects in reducing sleep latency in orbit.
Nonbenzodiazepine sedative-hypnotics (also known as "z-drugs") such as zolpidem, zopiclone, and zaleplon are the most commonly dispensed medications on the International Space Station. Despite their widespread use amongst astronauts, relatively little research has been conducted on nonbenzodiazepines in the context of spaceflight. Prior research suggests that nonbenzodiazepines may produce less residual impairment than most benzodiazepines. The shortest-acting nonbenzodiazepine, zaleplon, produces little to no cognitive impairment (at clinically relevant doses) even when dosed as little as an hour before awakening. Because astronauts frequently take second doses of hypnotic drugs, the shorter duration of action of nonbenzodiazepines may be better suited to middle-of-the-night dosing.
Benzodiazepines are frequently used medications in space, though less often than nonbenzodiazepine "z-drugs". The longer acting nature of some benzodiazepines used by astronauts, such as temazepam, has been cited as "non-ideal" for spaceflight use due to a high tendency of causing morning impairments.
Modafinil, a wakefulness drug, is available on the space station to mitigate the deleterious effects of sleep disruption and "optimise performance while fatigued". Modafinil has shown positive results in restoring cognitive function to baseline in the face of total sleep deprivation, though no studies examining modafinil's effects in astronauts have been conducted.
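The kind of schedule such adaptive light panels implement can be pictured with a small sketch. The hours, wavelengths, and intensity levels below are invented for the example and are not the actual ISS light-panel settings.

```python
# Hypothetical illustration of a time-of-day lighting schedule; the hours,
# wavelengths, and intensities are invented for the example and are not
# the actual ISS light-panel settings.
from dataclasses import dataclass

@dataclass
class LightSetting:
    dominant_wavelength_nm: int   # shorter = bluer, longer = redder
    relative_intensity: float     # 0.0 (off) to 1.0 (full)

def setting_for_hour(hour: int) -> LightSetting:
    """Pick a light setting for the crew's scheduled 'day'."""
    if 7 <= hour < 11:            # 'morning': blue-enriched, bright, promotes alertness
        return LightSetting(480, 1.0)
    if 11 <= hour < 21:           # working 'day': neutral white at moderate intensity
        return LightSetting(555, 0.7)
    return LightSetting(630, 0.2) # 'night': dim, red-shifted, preserves visibility

print(setting_for_hour(8))        # LightSetting(dominant_wavelength_nm=480, relative_intensity=1.0)
```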
Ultrasound and space
Ultrasound is the main diagnostic imaging tool on the ISS and for the foreseeable future missions. X-rays and CT scans involve radiation, which is unacceptable in the space environment. Though MRI uses magnetic fields to create images, it is at present too large to consider as a viable option. Ultrasound, which uses sound waves to create images and comes in laptop-sized packages, provides imaging of a wide variety of tissues and organs. It is currently being used to look at the eyeball and the optic nerve to help determine the cause(s) of changes that NASA has noted, mostly in long-duration astronauts. NASA is also pushing the limits of ultrasound use regarding musculoskeletal problems, as these are some of the most common and most likely problems to occur. Significant challenges to using ultrasound on space missions are training the astronauts to use the equipment (ultrasound technicians spend years in training and developing the skills necessary to be "good" at their job) as well as interpreting the images that are captured. Much of ultrasound interpretation is done in real time, but it is impractical to train astronauts to actually read and interpret ultrasounds. Thus, the data is currently being sent back to mission control and forwarded to medical personnel to read and interpret. Future exploration-class missions will need to be autonomous due to transmission times taking too long for urgent/emergent medical conditions. The ability to be autonomous, or to use other equipment such as MRIs, is currently being researched.
Space Shuttle era
With the additional lifting capability presented by the Space Shuttle program, NASA designers were able to create a more comprehensive medical readiness kit. The Shuttle Orbiter Medical System (SOMS) consists of two separate packages: the Medications and Bandage Kit (MBK) and the Emergency Medical Kit (EMK). While the MBK contained capsulate medications (tablets, capsules, and suppositories), bandage materials, and topical medications, the EMK had medications to be administered by injection, items for performing minor surgeries, diagnostic/therapeutic items, and a microbiological test kit.
John Glenn, the first American astronaut to orbit the Earth, returned with much fanfare to space once again on STS-95 at 77 years of age to confront the physiological challenges preventing long-term space travel for astronauts—loss of bone density, loss of muscle mass, balance disorders, sleep disturbances, cardiovascular changes, and immune system depression—all of which are problems confronting aging people as well as astronauts.
Future investigations
Feasibility of Long Duration Space Flights
In the interest of creating the possibility of longer duration space flight, NASA has invested in the research and application of preventative space medicine, not only for medically preventable pathologies but trauma as well. Although trauma constitutes more of a life-threatening situation, medically preventable pathologies pose more of a threat to astronauts. "The involved crewmember is endangered because of mission stress and the lack of complete treatment capabilities on board the spacecraft, which could result in the manifestation of more severe symptoms than those usually associated with the same disease in the terrestrial environment. Also, the situation is potentially hazardous for the other crewmembers because the small, closed, ecological system of the spacecraft is conducive to disease transmission. Even if the disease is not transmitted, the safety of the other crewmembers may be jeopardized by the loss of the capabilities of the crewmember who is ill. Such an occurrence will be more serious and potentially hazardous as the durations of crewed missions increase and as operational procedures become more complex. Not only do the health and safety of the crewmembers become critical, but the probability of mission success is lessened if the illness occurs during flight. Aborting a mission to return an ill crewmember before mission goals are completed is costly and potentially dangerous." Treatment of trauma may involve surgery in zero-gravity, which is a challenging proposition given the need for blood sample containment. Diagnosis and monitoring of crew members is a particularly vital need. NASA tested the rHEALTH ONE to advance this capability for on-orbit, travel to Moon and Mars. This capability is mapped to Risk of Adverse Health Outcomes and Decrements in Performance Due to Medical Conditions that occur in Mission, as well as Long Term Health Outcomes Due to Mission Exposures. Without an approach to perform onboard medical monitoring, loss of crew members may jeopardize long duration missions.
Impact on science and medicine
Astronauts are not the only ones who benefit from space medicine research. Several medical products have been developed that are space spinoffs, which are practical applications for the field of medicine arising out of the space program. Because of joint research efforts between NASA, the National Institute on Aging (a part of the National Institutes of Health), and other aging-related organizations, space exploration has benefited a particular segment of society, seniors. Evidence of aging-related medical research conducted in space was most publicly noticeable during STS-95. These spin-offs are sometimes termed "exomedicine".
Pre-Mercury through Apollo
Radiation therapy for the treatment of cancer: In conjunction with the Cleveland Clinic, the cyclotron at Glenn Research Center in Cleveland, Ohio was used in the first clinical trials for the treatment and evaluation of neutron therapy for cancer patients.
Foldable walkers: Made from a lightweight metal material developed by NASA for aircraft and spacecraft, foldable walkers are portable and easy to manage.
Personal alert systems: These are emergency alert devices that can be worn by individuals who may require emergency medical or safety assistance. When a button is pushed, the device sends a signal to a remote location for help. To send the signal, the device relies on telemetry technology developed at NASA.
CAT and MRI scans: These devices are used by hospitals to see inside the human body. Their development would not have been possible without the technology provided by NASA after it found a way to take better pictures of the Earth's moon.
Neuromuscular Electric Stimulation (NMES): A form of treatment originally developed to combat muscle atrophy in space that has been found to have applications outside of space. A prominent example of NMES being used outside of space medicine is muscle stimulator devices for paralyzed individuals. These devices can be used from up to half an hour per day to prevent muscle atrophy in paralyzed individuals. It provides electrical stimulation to muscles which is equal to jogging three miles per week. A well-known example is that Christopher Reeve used these in his therapy. Outside of paralyzed individuals, it also has applications in sports medicine, where it is used to manage or prevent potential damages that those high-intensity lifestyles have on athletes.
Orthopedic evaluation tools: equipment to evaluate posture, gait and balance disturbances was developed at NASA, along with a radiation-free way to measure bone flexibility using vibration.
Diabetic foot mapping: This technique was developed at NASA's center in Cleveland, Ohio to help monitor the effects of diabetes in feet.
Foam cushioning: special foam used for cushioning astronauts during liftoff is used in pillows and mattresses at many nursing homes and hospitals to help prevent ulcers, relieve pressure, and provide a better night's sleep.
Kidney dialysis machines: In the late 1960s the Marquardt Corporation, working with NASA, was developing a system to purify and recycle water during space missions. From this project it was observed that the same processes could remove toxic waste from used dialysis fluid, which enabled the development of the kidney dialysis machine. These machines rely on that NASA-derived technology to process and remove toxic waste from used dialysis fluid.
Talking wheelchairs: Paralyzed individuals who have difficulty speaking may use a talking feature on their wheelchairs, developed from NASA work on synthesized speech for aircraft. The "talking wheelchair", or Versatile Portable Speech Prosthesis (VSP), is a technology that aids communication for non-verbal persons. The project started in May 1978 and finished in November 1981. Originally, the technology was created for people diagnosed with cerebral palsy who were using traditional electric wheelchairs. The VSP is portable and versatile and proved a highly successful speech prosthesis, although the nickname "talking wheelchair" has created some separation from the wheelchair itself. The VSP is operated by single or multiple switches or by keyboard, and uses a synthetic voice for verbal output. The synthetic voice provides communication opportunities that speaking persons take for granted, such as communicating with people in a crowd, communicating in the dark, communicating with people who have vision problems, communicating with younger children, and communicating when the listener's back is turned. It also allows a sense of personal, individual communication, as the keyboard can be programmed with "fun" words as well as "throw-away lines". The first version of the VSP was completed in May 1979; additions made in November 1979 provided more control over speech. By November 1979, the VSP could take English text and output English speech, and the user could store, retrieve, edit, and create vocabulary. The controls and plugs on the VSP were versatile, allowing plug-and-go operation. Because of the limitations of automatic speech recognition (ASR) systems, portable speech prostheses have since moved toward silent speech recognition (SSR). The goal of using SSR with a VSP is to recognize speech-related information from modalities such as surface electromyography (sEMG). Recognition models use algorithms to extract speech-related features from the sEMG signals; grammar models applied to the sEMG patterns recognize sequences of words, and phoneme-based models recognize vocabulary containing previously untrained words. Multi-point sensors, which can be arranged flexibly, record the sEMG signals from the small articulatory muscles of the face and neck (a minimal illustrative sketch of such an sEMG pipeline follows this list).
Collapsible, lightweight wheelchairs: Wheelchairs designed for portability that can be folded and put into the trunks of cars. They rely on synthetic materials that NASA developed for its aircraft and spacecraft.
Surgically implantable heart pacemaker: these devices depend on technologies developed by NASA for use with satellites. They communicate information about the activity of the pacemaker, such as how much time remains before the batteries need to be replaced.
Implantable heart defibrillator: this tool continuously monitors heart activity and can deliver an electric shock to restore heartbeat regularity.
EMS communications: technology used to communicate telemetry between Earth and space was developed by NASA to monitor the health of astronauts in space from the ground. Ambulances use this same technology to send information—like EKG readings—from patients in transport to hospitals. This allows faster and better treatment.
Weightlessness therapy: The weightlessness of space can allow some individuals with limited mobility on Earth—even those normally confined to wheelchairs—the freedom to move about with ease. Physicist Stephen Hawking took advantage of weightlessness in NASA's Vomit Comet aircraft in 2007. This idea also led to the development of the Anti-Gravity Treadmill from NASA technology, which employs "differential air pressure to mimic...gravity".
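The following is a minimal, illustrative Python sketch of an sEMG-style recognition pipeline like the one described in the talking-wheelchair entry above: multi-channel signals are windowed, simple per-channel features are extracted, and a generic classifier distinguishes words. The channel count, window length, feature choices, and classifier are assumptions made for illustration, not details of the actual VSP or NASA systems.

```python
# Minimal sketch only (not the actual VSP/SSR system): window multi-channel
# sEMG, extract simple per-channel features, and classify "words" with a
# generic model. Channel count, window size, features, and classifier are
# illustrative assumptions, not details from the original research.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(semg, fs=1000, win_s=0.2):
    """Split a (samples, channels) sEMG array into fixed windows and compute
    root-mean-square amplitude and zero-crossing counts for each channel."""
    win = int(fs * win_s)
    feats = []
    for i in range(semg.shape[0] // win):
        seg = semg[i * win:(i + 1) * win]
        rms = np.sqrt(np.mean(seg ** 2, axis=0))              # signal energy per channel
        signs = np.signbit(seg).astype(int)
        zc = np.abs(np.diff(signs, axis=0)).sum(axis=0)       # zero crossings per channel
        feats.append(np.concatenate([rms, zc]))
    return np.array(feats)

# Synthetic stand-in data: 4 face/neck sensor channels, two pretend "words"
# that differ only in signal amplitude.
rng = np.random.default_rng(0)
X = np.vstack([
    window_features(rng.normal(0.0, 1.0, (2000, 4))),   # word A windows
    window_features(rng.normal(0.0, 2.0, (2000, 4))),   # word B windows
])
y = np.array([0] * 10 + [1] * 10)                        # 10 windows per word

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(window_features(rng.normal(0.0, 2.0, (400, 4)))))  # expect mostly word B
```

In a real system the features, grammar, and phoneme models described above would be far richer; the point here is only the overall shape of the pipeline from raw multi-point sensor signals to recognized words.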
Ultrasound in microgravity
The Advanced Diagnostic Ultrasound in Microgravity Study is funded by the National Space Biomedical Research Institute and involves the use of ultrasound by astronauts, including former ISS commanders Leroy Chiao and Gennady Padalka, who are guided by remote experts to diagnose and potentially treat hundreds of medical conditions in space. The study has had a widespread impact and has been extended to cover professional and Olympic sports injuries as well as medical students. It is anticipated that remote guided ultrasound will have applications on Earth in emergency and rural care situations. Findings from the study were submitted for publication to the journal Radiology from aboard the International Space Station, the first article submitted from space.
See also
Artificial gravity
Aviation medicine
Bioastronautics
Effect of spaceflight on the human body
Fatigue and sleep loss during spaceflight
Intervertebral disc damage and spaceflight
List of microorganisms tested in outer space
Mars analog habitat
Medical treatment during spaceflight
Microgravity University
Reduced-gravity aircraft
Renal stone formation in space
Spaceflight osteopenia
Spaceflight radiation carcinogenesis
Space food
Space nursing
Space Nursing Society
Space pharmacology
Team composition and cohesion in spaceflight missions
Visual impairment due to intracranial pressure
References
Notes
Sources
External links
Space Medicine Association
Description of space medicine
NASA History Series Publications (many of which are online)
Sleep in Space, Digital Sleep Recorder used by NASA in STS-90 and STS-95 missions
A Solution for Medical Needs and Cramped Quarters in Space – NASA
Human spaceflight programs
International Space Station experiments | Space medicine | ["Engineering"] | 7,790 | ["Space programs", "Human spaceflight programs"] |
5,607,556 | https://en.wikipedia.org/wiki/Minot%27s%20Ledge%20Light | Minot's Ledge Light, officially Minots Ledge Light, is a lighthouse on Minots Ledge, one mile offshore of the towns of Cohasset and Scituate, Massachusetts, to the southeast of Boston Harbor. The current lighthouse is the second on the site, the first having been washed away in a storm after only a few months of use.
First lighthouse
In 1843, lighthouse inspector I. W. P. Lewis compiled a report on Minots Ledge showing that more than 40 vessels had been lost after striking the ledge from 1832 to 1841, with serious loss of life and damage to property. The most dramatic incident was the sinking of the ship St. John in October 1849 with ninety-nine Irish immigrants aboard, all of whom drowned within sight of their new homeland. It was initially proposed to build a lighthouse similar to John Smeaton's pioneering Eddystone Lighthouse, situated off the south-west coast of England. However, Captain William H. Swift, put in charge of planning the tower, believed it impossible to build such a tower on the mostly submerged ledge. Instead he successfully argued for an iron pile light, a spidery structure drilled into the rock.
The first Minot's Ledge Lighthouse was built between 1847 and 1850, and was lighted for the first time on January 1, 1850. One night in April 1851, the new lighthouse was struck by a major storm that caused damage throughout the Boston area. The following day only a few bent pilings were found on the rock. The two assistant keepers who had been tending the lighthouse at the time had died at their posts.
The current lighthouse
Until 1863 the design and construction of lighthouses was the responsibility of the Corps of Topographical Engineers; this resulted in a rivalry with the longer-established Army Corps of Engineers, which built fortifications and had responsibility, as it does today, for waterway improvements. The Chief Engineer of the Army Corps of Engineers, Joseph G. Totten, personally took charge of the project to design and construct a permanent lighthouse on Minot's Ledge.
Totten's design was as simple as it was effective. With extensive experience building fortifications, Totten fully appreciated the permanence and strength of granite construction. He designed the lighthouse so that its first 40 feet would be a solid granite base weighing thousands of tons. To secure the lighthouse to the ledge, he had several massive iron pins emplaced so that the structure would be literally pinned to the ledge by its own weight. Work on the ledge was possible only when it was exposed at low tide and the sea was calm, so construction took years.
Work started on the current lighthouse in 1855, and it was completed and first lit on November 15, 1860. With a final cost of $300,000, it was the most expensive lighthouse constructed in the United States up to that date. The lighthouse is built of large, heavy, dovetailed granite blocks, which were cut and dressed ashore in Quincy and taken to the ledge by ship. The lighthouse was equipped with a third-order Fresnel lens.
The light signal, a 1-4-3 flashing cycle adopted in 1894, is locally referred to as "I LOVE YOU" (1-4-3 being the number of letters in that phrase), and it is often cited as such by romantic couples within its range.
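As a small illustration of the characteristic described above, the following Python sketch emits groups of 1, 4, and 3 flashes in a repeating cycle. The flash, gap, and pause durations are invented for the example; they are not the lighthouse's published timing.

```python
# Illustrative only: the 1-4-3 group-flashing pattern ("I LOVE YOU").
# All durations are assumed values, not the official light characteristic.
import time

def flash_group(count, flash_s=0.5, gap_s=0.5):
    """Print one group of flashes, with a dark gap after each flash."""
    for _ in range(count):
        print("FLASH", flush=True)
        time.sleep(flash_s + gap_s)  # light on, then dark interval

def minots_cycle(group_pause_s=2.0, cycle_pause_s=5.0):
    """Run one full 1-4-3 cycle, pausing between groups and after the cycle."""
    for i, n in enumerate([1, 4, 3]):
        flash_group(n)
        time.sleep(cycle_pause_s if i == 2 else group_pause_s)

if __name__ == "__main__":
    minots_cycle()
```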
Minots Ledge Light was automated in 1947.
Historical information
The following is taken from the Coast Guard Historian's website:
Minot's Ledge Lighthouse keepers in 1940: George H. Fitzpatrick, Perc A. Evans, Patrick J. Bridy
Minot's Ledge Lighthouse was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1977. The light was added to the National Register of Historic Places in 1987 as Minot's Ledge Light.
It was put up for sale under the National Historic Lighthouse Preservation Act in 2009.
Nomenclature and location
Officially, it is Minots Ledge Light, but the National Register listing calls it Minot's Ledge Light.
There is a replica of the top section of the lighthouse on the shores of Cohasset Harbor, which can be viewed just outside the Cohasset Sailing Club. Strictly speaking, the structure on shore is not entirely a replica: it is built from the stone and steel remnants of the original upper portion of the lighthouse, including the lamp chamber, which was wholly rebuilt in the late twentieth century; only the copper dome is a reproduction. The lighthouse itself is located about one mile off the coast of Scituate Neck.
In popular culture
An image of Minot's Ledge Light featured prominently on the label of Cohasset Punch, a brand of liqueur popular in Chicago from 1899 until its discontinuation in the late 1980s. The brand was revived in 2024 and features a new illustration of Minot's Ledge Light.
See also
Government Island Historic District, the Cohasset land station associated with the lighthouse
National Register of Historic Places in Plymouth County, Massachusetts
References
External links
Minot's Ledge poem. Fitz-James O'Brien, Harper's New Monthly Magazine, April 1861. audio recording, 2006, Public Domain.
Scituate, Massachusetts
Collapsed buildings and structures in the United States
Disasters in Massachusetts
1851 disasters in the United States
Historic Civil Engineering Landmarks
Lighthouses completed in 1850
Lighthouses completed in 1860
Lighthouses in Plymouth County, Massachusetts
Lighthouses on the National Register of Historic Places in Massachusetts | Minot's Ledge Light | ["Engineering"] | 1,106 | ["Civil engineering", "Historic Civil Engineering Landmarks"] |