| id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
178,480 | https://en.wikipedia.org/wiki/Anaphase | Anaphase is the stage of mitosis after metaphase, when replicated chromosomes are split and the newly copied chromosomes (daughter chromatids) are moved to opposite poles of the cell. Chromosomes also reach their overall maximum condensation in late anaphase, which helps chromosome segregation and the re-formation of the nucleus.
Anaphase starts when the anaphase promoting complex marks an inhibitory chaperone called securin for destruction by ubiquitylating it. Securin is a protein which inhibits a protease known as separase. The destruction of securin unleashes separase which then breaks down cohesin, a protein responsible for holding sister chromatids together.
At this point, three subclasses of microtubule unique to mitosis are involved in creating the forces necessary to separate the chromatids: kinetochore microtubules, interpolar microtubules, and astral microtubules.
The centromeres are split, and the sister chromatids are pulled toward the poles by kinetochore microtubules. They take on a V-shape or Y-shape as they are pulled to either pole.
While the chromosomes are drawn to each side of the cell, interpolar microtubules and astral microtubules generate forces that stretch the cell into an oval.
Once anaphase is complete, the cell moves into telophase.
Phases
Anaphase is characterized by two distinct motions. The first of these, anaphase A, moves chromosomes to either pole of a dividing cell (marked by centrosomes, from which mitotic microtubules are generated and organised). The movement for this is primarily generated by the action of kinetochores, and a subclass of microtubule called kinetochore microtubules.
The second motion, anaphase B, involves the separation of these poles from each other. The movement for this is primarily generated by the action of interpolar microtubules and astral microtubules.
Anaphase A
A combination of different forces has been observed acting on chromatids in anaphase A, but the primary force is exerted centrally. Microtubules attach to the midpoint of chromosomes (the centromere) via protein complexes (kinetochores). The attached microtubules depolymerise and shorten, which together with motor proteins creates movement that pulls chromosomes towards centrosomes located at each pole of the cell.
Anaphase B
The second part of anaphase is driven by its own distinct mechanisms. Force is generated by several actions. Interpolar microtubules begin at each centrosome and join at the equator of the dividing cell. They push against one another, causing each centrosome to move further apart. Meanwhile, astral microtubules begin at each centrosome and join with the cell membrane. This allows them to pull each centrosome closer to the cell membrane. Movement created by these microtubules is generated by a combination of microtubule growth or shrinking, and by motor proteins such as dyneins or kinesins.
Relation to the cell cycle
Anaphase accounts for approximately 1% of the cell cycle's duration. It begins with the regulated triggering of the metaphase-to-anaphase transition. Metaphase ends with the destruction of cyclin B, which is required for the function of metaphase cyclin-dependent kinases (M-Cdks); cyclin B is marked with ubiquitin, flagging it for destruction by proteasomes. In essence, activation of the anaphase-promoting complex (APC) causes it to mark the M-phase cyclin and the inhibitory protein securin for destruction, which frees the separase protease to cleave the cohesin subunits holding the chromatids together.
See also
Interphase
Prophase
Prometaphase
Metaphase
Telophase
Cytoskeleton
Anaphase I
Anaphase II
Cdc20
References
External links
Mitosis
| Anaphase | Biology | 872 |
75,803,705 | https://en.wikipedia.org/wiki/C3H6OS2 | The molecular formula C3H6OS2 may refer to:
S,S'-Dimethyl dithiocarbonate
Ethyl xanthic acid
Thiomethylketone
1,2-dithiolane-1-oxide
1,3-dithiolane-1-oxide | C3H6OS2 | Chemistry | 78 |
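All of the isomers listed above share one molar mass. A minimal sketch of the arithmetic, using rounded IUPAC standard atomic weights (the weights and the helper name are illustrative, not from the source):

```python
# Rounded standard atomic weights, g/mol.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999, "S": 32.06}

def molar_mass(formula: dict) -> float:
    """Sum atomic weight x atom count over every element in the formula."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in formula.items())

mass = molar_mass({"C": 3, "H": 6, "O": 1, "S": 2})
print(f"C3H6OS2: {mass:.2f} g/mol")  # C3H6OS2: 122.20 g/mol
```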
15,105,978 | https://en.wikipedia.org/wiki/Well | A well is an excavation or structure created on the earth by digging, driving, or drilling to access liquid resources, usually water. The oldest and most common kind of well is a water well, to access groundwater in underground aquifers. The well water is drawn up by a pump, or using containers, such as buckets that are raised mechanically or by hand. Water can also be injected back into the aquifer through the well. Wells were first constructed at least eight thousand years ago and historically vary in construction from a simple scoop in the sediment of a dry watercourse to the qanats of Iran, and the stepwells and sakiehs of India. Placing a lining in the well shaft helps create stability, and linings of wood or wickerwork date back at least as far as the Iron Age.
Wells have traditionally been sunk by hand digging, as is still the case in rural areas of the developing world. These wells are inexpensive and low-tech as they use mostly manual labour, and the structure can be lined with brick or stone as the excavation proceeds. A more modern method called caissoning uses pre-cast reinforced concrete well rings that are lowered into the hole. Driven wells can be created in unconsolidated material with a well hole structure, which consists of a hardened drive point and a screen of perforated pipe, after which a pump is installed to collect the water. Deeper wells can be excavated by hand drilling methods or machine drilling, using a bit in a borehole. Drilled wells are usually cased with a factory-made pipe composed of steel or plastic. Drilled wells can access water at much greater depths than dug wells.
Two broad classes of well are shallow or unconfined wells completed within the uppermost saturated aquifer at that location, and deep or confined wells, sunk through an impermeable stratum into an aquifer beneath. A collector well can be constructed adjacent to a freshwater lake or stream with water percolating through the intervening material. The site of a well can be selected by a hydrogeologist, or groundwater surveyor. Water may be pumped or hand drawn. Impurities from the surface can easily reach shallow sources and contamination of the supply by pathogens or chemical contaminants needs to be avoided. Well water typically contains more minerals in solution than surface water and may require treatment before being potable. Soil salination can occur as the water table falls and the surrounding soil begins to dry out. Another environmental problem is the potential for methane to seep into the water.
History
Very early Neolithic wells are known from the Eastern Mediterranean. The oldest reliably dated well is from the pre-pottery Neolithic (PPN) site of Kissonerga-Mylouthkia on Cyprus, where at around 8400 BC a circular shaft (well 116) was driven through limestone to reach an aquifer. Well 2070 from the same site dates to the late PPN. Other slightly younger wells are known from this site and from neighbouring Parekklisha-Shillourokambos. A first stone-lined well is documented from a drowned final PPN (c. 7000 BC) site at ‘Atlit-Yam off the coast near modern Haifa in Israel.
Wood-lined wells are known from the early Neolithic Linear Pottery culture, for example in Ostrov, Czech Republic, dated 5265 BC; Kückhoven (an outlying centre of Erkelenz), dated 5300 BC; Eythra near Leipzig, dated 5200 BC; and Schletz (an outlying centre of Asparn an der Zaya) in Austria.
The Neolithic Chinese discovered and made extensive use of deep drilled groundwater for drinking. The Chinese text The Book of Changes, originally a divination text of the Western Zhou dynasty (1046–771 BC), contains an entry describing how the ancient Chinese maintained their wells and protected their sources of water. A well excavated at the Hemudu excavation site is believed to have been built during the Neolithic era. The well was cased by four rows of logs with a square frame attached to them at the top. Some 60 additional tile wells southwest of Beijing are also believed to have been built around 600 BC for drinking and irrigation.
In Egypt, shadoofs and sakias are used. The sakia is much more efficient, as it can bring up water from a depth of 10 metres (versus the 3 metres of the shadoof). The sakia is the Egyptian version of the noria. Some of the world's oldest known wells, located in Cyprus, date to 7000–8500 BC. Two wells from the Neolithic period, around 6500 BC, have been discovered in Israel. One is in Atlit, on the northern coast of Israel, and the other is in the Jezreel Valley.
Wells for other purposes came along much later, historically. The first recorded salt well was dug in the Sichuan province of China around 2,250 years ago. This was the first time that ancient water well technology was applied successfully for the exploitation of salt, and it marked the beginning of Sichuan's salt drilling industry. The earliest known oil wells were also drilled in China, in 347 CE, using bits attached to bamboo poles. The oil was burned to evaporate brine and produce salt. By the 10th century, extensive bamboo pipelines connected oil wells with salt springs. The ancient records of China and Japan are said to contain many allusions to the use of natural gas for lighting and heating. Petroleum was known as "burning water" in Japan in the 7th century.
Types
Dug wells
Until recent centuries, all artificial wells were pumpless hand-dug wells of varying degrees of sophistication, and they remain a very important source of potable water in some rural developing areas, where they are routinely dug and used today. Their indispensability has produced a number of literary references, literal and figurative, including the reference to the incident of Jesus meeting a woman at Jacob's well (John 4:6) in the Bible and the "Ding Dong Bell" nursery rhyme about a cat in a well.
Hand-dug wells are excavations with diameters large enough to accommodate one or more people with shovels digging down to below the water table. The excavation is braced horizontally to avoid landslide or erosion endangering the people digging. They can be lined with stone or brick; extending this lining upwards above the ground surface to form a wall around the well serves to reduce both contamination and accidental falls into the well.
A more modern method called caissoning uses reinforced concrete or plain concrete pre-cast well rings that are lowered into the hole. A well-digging team digs under a cutting ring and the well column slowly sinks into the aquifer, whilst protecting the team from collapse of the well bore.
Hand-dug wells are inexpensive and low tech (compared to drilling) and they use mostly manual labour to access groundwater in rural locations of developing countries. They may be built with a high degree of community participation, or by local entrepreneurs who specialize in hand-dug wells. They have been successfully excavated to considerable depths. They have low operational and maintenance costs, in part because water can be extracted by hand, without a pump. The water often comes from an aquifer or groundwater, and the well can easily be deepened, which may be necessary if the ground water level drops, by telescoping the lining further down into the aquifer. The yield of existing hand-dug wells may be improved by deepening or by introducing vertical tunnels or perforated pipes.
Drawbacks to hand-dug wells are numerous. It can be impractical to hand dig wells in areas where hard rock is present, and they can be time-consuming to dig and line even in favourable areas. Because they exploit shallow aquifers, the well may be susceptible to yield fluctuations and possible contamination from surface water, including sewage. Hand dug well construction generally requires the use of a well trained construction team, and the capital investment for equipment such as concrete ring moulds, heavy lifting equipment, well shaft formwork, motorized de-watering pumps, and fuel can be large for people in developing countries. Construction of hand dug wells can be dangerous due to collapse of the well bore, falling objects and asphyxiation, including from dewatering pump exhaust fumes.
The Woodingdean Water Well, hand-dug between 1858 and 1862, is the deepest hand-dug well. The Big Well in Greensburg, Kansas, is billed as the world's largest hand-dug well. However, the Well of Joseph in the Cairo Citadel and the Pozzo di San Patrizio (St. Patrick's Well), built in 1527 in Orvieto, Italy, are both larger by volume.
Driven wells
Driven wells may be very simply created in unconsolidated material with a well hole structure, which consists of a hardened drive point and a screen (perforated pipe). The point is simply hammered into the ground, usually with a tripod and driver, with pipe sections added as needed. A driver is a weighted pipe that slides over the pipe being driven and is repeatedly dropped on it. When groundwater is encountered, the well is washed of sediment and a pump installed.
Drilled wells
Drilled wells are constructed using various types of drilling machines, such as top-head rotary, table rotary, or cable tool rigs. Rotary machines use drilling stems that rotate to cut into the formation, thus the term "drilling"; cable tool rigs instead cut by repeatedly raising and dropping a bit.
Drilled wells can be excavated by simple hand drilling methods (augering, sludging, jetting, driving, hand percussion) or machine drilling (auger, rotary, percussion, down the hole hammer). Deep rock rotary drilling method is most common. Rotary can be used in 90% of formation types (consolidated).
Drilled wells can get water from a much deeper level than dug wells can, often down to several hundred metres.
Drilled wells with electric pumps are used throughout the world, typically in rural or sparsely populated areas, though many urban areas are supplied partly by municipal wells. Most shallow well drilling machines are mounted on large trucks, trailers, or tracked vehicle carriages. Water well depths vary widely with local conditions, and in some areas wells are drilled far deeper than is practical by hand.
Rotary drilling machines use a segmented steel drilling string, typically made up of 3 m (10 ft) to 8 m (26 ft) sections of steel tubing that are threaded together, with a bit or other drilling device at the bottom end. Some rotary drilling machines are designed to install (by driving or drilling) a steel casing into the well in conjunction with the drilling of the actual bore hole. Air and/or water is used as a circulation fluid to displace cuttings and cool bits during the drilling. Another form of rotary-style drilling, termed mud rotary, makes use of a specially made mud, or drilling fluid, which is constantly being altered during the drill so that it can consistently create enough hydraulic pressure to hold the side walls of the bore hole open, regardless of the presence of a casing in the well. Typically, boreholes drilled into solid rock are not cased until after the drilling process is completed, regardless of the machinery used.
The oldest form of drilling machinery is the cable tool, still used today. Specifically designed to raise and lower a bit into the bore hole, the spudding of the drill causes the bit to be raised and dropped onto the bottom of the hole, and the design of the cable causes the bit to twist slightly with each drop, thereby creating a drilling action. Unlike rotary drilling, cable tool drilling requires the drilling action to be stopped so that the bore hole can be bailed or emptied of drilled cuttings. Cable tool drilling rigs are now rare, as they tend to be roughly ten times slower at drilling through materials than rotary air or rotary mud rigs of similar diameter.
Drilled wells are usually cased with a factory-made pipe, typically steel (in air rotary or cable tool drilling) or plastic/PVC (in mud rotary wells, also used in wells drilled into solid rock). The casing is constructed by welding segments of casing together, either chemically or thermally. If the casing is installed during the drilling, most drills will drive the casing into the ground as the bore hole advances, while some newer machines allow the casing to be rotated and drilled into the formation in a similar manner as the bit advancing just below. PVC or plastic casing is typically solvent welded and then lowered into the drilled well, vertically stacked with the ends nested and either glued or splined together. The length and diameter of the casing sections depend on the intended use of the well and local groundwater conditions.
Surface contamination of wells in the United States is typically controlled by the use of a surface seal. A large hole is drilled to a predetermined depth or to a confining formation (clay or bedrock, for example), and then a smaller hole for the well is completed from that point forward. The well is typically cased from the surface down into the smaller hole with a casing that is the same diameter as that hole. The annular space between the large bore hole and the smaller casing is filled with bentonite clay, concrete, or other sealant material. This creates an impermeable seal from the surface to the next confining layer that keeps contaminants from traveling down the outer sidewalls of the casing or borehole and into the aquifer. In addition, wells are typically capped with either an engineered well cap or seal that vents air through a screen into the well, but keeps insects, small animals, and unauthorized persons from accessing the well.
At the bottom of wells, based on formation, a screening device, filter pack, slotted casing, or open bore hole is left to allow the flow of water into the well. Constructed screens are typically used in unconsolidated formations (sands, gravels, etc.), allowing water and a percentage of the formation to pass through the screen. Allowing some material to pass through creates a large area filter out of the rest of the formation, as the amount of material present to pass into the well slowly decreases and is removed from the well. Rock wells are typically cased with a PVC liner/casing and a screen or slotted casing at the bottom; this is mostly present just to keep rocks from entering the pump assembly. Some wells utilize a filter pack method, where an undersized screen or slotted casing is placed inside the well and a filter medium is packed around the screen, between the screen and the borehole or casing. This allows the water to be filtered of unwanted materials before entering the well and pumping zone.
Classification
There are two broad classes of drilled-well types, based on the type of aquifer the well is in:
Shallow or unconfined wells are completed in the uppermost saturated aquifer at that location (the upper unconfined aquifer).
Deep or confined wells are sunk through an impermeable stratum into an aquifer that is sandwiched between two impermeable strata (aquitards or aquicludes). The majority of deep aquifers are classified as artesian because the hydraulic head in a confined well is higher than the level of the top of the aquifer. If the hydraulic head in a confined well is higher than the land surface it is a "flowing" artesian well (named after Artois in France).
A special type of water well may be constructed adjacent to freshwater lakes or streams. Commonly called a collector well but sometimes referred to by the trade name Ranney well or Ranney collector, this type of well involves sinking a caisson vertically below the top of the aquifer and then advancing lateral collectors out of the caisson and beneath the surface water body. Pumping from within the caisson induces infiltration of water from the surface water body into the aquifer, where it is collected by the collector well laterals and conveyed into the caisson where it can be pumped to the ground surface.
Two additional broad classes of well types may be distinguished, based on the use of the well:
production or pumping wells, are large diameter (greater than 15 cm in diameter) cased (metal, plastic, or concrete) water wells, constructed for extracting water from the aquifer by a pump (if the well is not artesian).
monitoring wells or piezometers, are often smaller diameter wells used to monitor the hydraulic head or sample the groundwater for chemical constituents. Piezometers are monitoring wells completed over a very short section of aquifer. Monitoring wells can also be completed at multiple levels, allowing discrete samples or measurements to be made at different vertical elevations at the same map location.
A water well constructed for pumping groundwater can be used passively as a monitoring well and a small diameter well can be pumped, but this distinction by use is common.
Siting
Before excavation, information about the geology, water table depth, seasonal fluctuations, and recharge area and rate should be found if possible. This work can be done by a hydrogeologist or a groundwater surveyor using a variety of tools, including electro-seismic surveying, any available information from nearby wells, geologic maps, and sometimes geophysical imaging. These professionals provide advice that is almost as accurate as that of a driller with experience and knowledge of nearby wells/bores and of the most suitable drilling technique for the expected target depth.
Contamination
Shallow pumping wells can often supply drinking water at a very low cost. However, impurities from the surface easily reach shallow sources, which leads to a greater risk of contamination for these wells compared to deeper wells. Contaminated wells can lead to the spread of various waterborne diseases. Dug and driven wells are relatively easy to contaminate; for instance, most dug wells are unreliable in the majority of the United States. Some research has found that, in cold regions, changes in river flow and flooding caused by extreme rainfall or snowmelt can degrade well water quality.
Pathogens
Most of the bacteria, viruses, parasites, and fungi that contaminate well water come from fecal material from humans and other animals. Common bacterial contaminants include E. coli, Salmonella, Shigella, and Campylobacter jejuni. Common viral contaminants include norovirus, sapovirus, rotavirus, enteroviruses, and hepatitis A and E. Parasites include Giardia lamblia, Cryptosporidium, Cyclospora cayetanensis, and microsporidia.
Chemical contamination
Chemical contamination is a common problem with groundwater. Nitrates from sewage, sewage sludge or fertilizer are a particular problem for babies and young children. Pollutant chemicals include pesticides and volatile organic compounds from gasoline, dry-cleaning, the fuel additive methyl tert-butyl ether (MTBE), and perchlorate from rocket fuel, airbag inflators, and other artificial and natural sources.
Several minerals are also contaminants, including lead leached from brass fittings or old lead pipes, chromium VI from electroplating and other sources, naturally occurring arsenic, radon, and uranium—all of which can cause cancer—and naturally occurring fluoride, which is desirable in low quantities to prevent tooth decay, but can cause dental fluorosis in higher concentrations.
Some chemicals are commonly present in water wells at levels that are not toxic, but can cause other problems. Calcium and magnesium cause what is known as hard water, which can precipitate and clog pipes or burn out water heaters. Iron and manganese can appear as dark flecks that stain clothing and plumbing, and can promote the growth of iron and manganese bacteria that can form slimy black colonies that clog pipes.
Prevention
The quality of the well water can be significantly increased by lining the well, sealing the well head, fitting a self-priming hand pump, constructing an apron, ensuring the area is kept clean and free from stagnant water and animals, moving sources of contamination (pit latrines, garbage pits, on-site sewer systems) and carrying out hygiene education. The well should be cleaned with a 1% chlorine solution after construction and periodically every 6 months.
Well holes should be covered to prevent loose debris, animals, animal excrement, and wind-blown foreign matter from falling into the hole and decomposing. The cover should be able to be in place at all times, including when drawing water from the well. A suspended roof over an open hole helps to some degree, but ideally the cover should be tight fitting and fully enclosing, with only a screened air vent.
Minimum distances and soil percolation requirements between sewage disposal sites and water wells need to be observed. Rules regarding the design and installation of private and municipal septic systems take all these factors into account so that nearby drinking water sources are protected.
Education of the general population in society also plays an important role in protecting drinking water.
Mitigation
Cleanup of contaminated groundwater tends to be very costly. Effective remediation of groundwater is generally very difficult. Contamination of groundwater from surface and subsurface sources can usually be dramatically reduced by correctly centering the casing during construction and filling the casing annulus with an appropriate sealing material. The sealing material (grout) should be placed from immediately above the production zone back to surface, because, in the absence of a correctly constructed casing seal, contaminated fluid can travel into the well through the casing annulus. Centering devices are important (usually one per length of casing or at maximum intervals of 9 m) to ensure that the grouted annular space is of even thickness.
Upon the construction of a new test well, it is considered best practice to invest in a complete battery of chemical and biological tests on the well water in question. Point-of-use treatment is available for individual properties and treatment plants are often constructed for municipal water supplies that suffer from contamination. Most of these treatment methods involve the filtration of the contaminants of concern, and additional protection may be garnered by installing well-casing screens only at depths where contamination is not present.
Wellwater for personal use is often filtered with reverse osmosis water processors; this process can remove very small particles. A simple, effective way of killing microorganisms is to bring the water to a full boil for one to three minutes, depending on location. A household well contaminated by microorganisms can initially be treated by shock chlorination using bleach, generating concentrations hundreds of times greater than found in community water systems; however, this will not fix any structural problems that led to the contamination and generally requires some expertise and testing for effective application.
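To make the scale of shock chlorination concrete, here is a rough dosing sketch. The 200 mg/L target dose and 5.25% bleach strength are assumed illustrative figures, not a treatment recommendation, and `bleach_volume_litres` is a hypothetical helper:

```python
import math

def bleach_volume_litres(casing_diameter_m: float, water_depth_m: float,
                         target_mg_per_l: float = 200.0,
                         bleach_strength_pct: float = 5.25) -> float:
    """Estimate litres of household bleach needed to shock-chlorinate
    the standing water in a well casing (all dose figures assumed)."""
    # Standing water volume in the casing, modelled as a cylinder, in litres.
    volume_l = math.pi * (casing_diameter_m / 2) ** 2 * water_depth_m * 1000
    # Available chlorine per litre of bleach, in mg.
    bleach_mg_per_l = bleach_strength_pct / 100 * 1e6
    return volume_l * target_mg_per_l / bleach_mg_per_l

# Example: 15 cm casing with 30 m of standing water.
print(f"{bleach_volume_litres(0.15, 30):.2f} L of bleach")  # 2.02 L of bleach
```

The point of the sketch is that even a "hundreds of times greater than community water systems" dose amounts to only a couple of litres of bleach for a typical domestic well.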
After the filtration process, it is common to install an ultraviolet (UV) system to kill pathogens in the water. UV light damages the DNA of pathogens as UV-C photons penetrate the cell wall. UV disinfection has gained popularity in recent decades as a chemical-free method of water treatment.
Environmental problems
A risk with the placement of water wells is soil salination, which occurs when the water table of the soil begins to drop and salt begins to accumulate as the soil dries out. Another environmental problem that is very prevalent in water well drilling is the potential for methane to seep into the water.
Soil salination
The potential for soil salination is a large risk when choosing the placement of water wells. Soil salination is caused when the water table of the soil drops over time and salt begins to accumulate. In turn, the increased amount of salt begins to dry the soil out. The increased level of salt in the soil can result in the degradation of soil and can be very harmful to vegetation.
Methane
Methane, an asphyxiant, is a chemical compound that is the main component of natural gas. When methane is introduced into a confined space, it displaces oxygen, reducing the oxygen concentration to a level low enough to threaten humans and other aerobic organisms, while the methane itself can accumulate to concentrations that risk spontaneous or externally triggered explosion. This potential for explosion is what poses such a danger in the drilling and placement of water wells.
Low levels of methane in drinking water are not considered toxic. When methane seeps into a water supply, it is commonly referred to as "methane migration". This can be caused by old natural gas wells near water well systems becoming abandoned and no longer monitored.
Lately, however, the wells and pumps described above are no longer very efficient and can be replaced by either handpumps or treadle pumps. Another alternative is the use of self-dug wells or, for greater depths, electric deep-well pumps. Appropriate technology organizations such as Practical Action now supply information on how to build and set up (DIY) handpumps and treadle pumps in practice.
PFAS/PFOS Fire fighting foam
Per- and polyfluoroalkyl substances (PFAS or PFASs) are a group of synthetic organofluorine chemical compounds that have multiple fluorine atoms attached to an alkyl chain. Often called "forever chemicals", PFAS spread quickly and far in groundwater, polluting it persistently. Water wells near certain airports where foam fire fighting or training activities occurred up to 2010 are likely to be contaminated by PFAS.
Water security
A study of roughly 39 million groundwater wells concluded that 6–20% are at high risk of running dry if local groundwater levels decline by less than five metres, or if levels simply continue to decline, as is happening in many areas and possibly in more than half of major aquifers.
Society and culture
Springs and wells have had cultural significance since prehistoric times, leading to the foundation of towns such as Wells and Bath in Somerset. Interest in health benefits led to the growth of spa towns including many with wells in their name, examples being Llandrindod Wells and Royal Tunbridge Wells.
Eratosthenes is sometimes claimed to have used a well in his calculation of the Earth's circumference; however, this is just a simplification that appears in a short account by Cleomedes, and Eratosthenes actually used a more elaborate and precise method.
Many incidents in the Bible take place around wells, such as the finding of a wife for Isaac in Genesis and Jesus's talk with the Samaritan woman in the Gospels.
A simple model for water well recovery
For a well with impermeable walls, the water in the well is resupplied from the bottom of the well. The rate at which water flows into the well will depend on the pressure difference between the ground water at the well bottom and the well water at the well bottom. The pressure of a column of water of height z will be equal to the weight of the water in the column divided by the cross-sectional area of the column, so the pressure of the ground water a distance zT below the top of the water table will be:
where ρ is the mass density of the water and g is the acceleration due to gravity. When the water in the well is below the water table level, the pressure at the bottom of the well due to the water in the well will be less than Pg and water will be forced into the well. Referring to the diagram, if z is the distance from the bottom of the well to the well water level and zT is the distance from the bottom of the well to the top of the water table, the pressure difference will be:
Applying Darcy's Law, the volume rate (F) at which water is forced into the well will be proportional to this pressure difference:
where R is the resistance to the flow, which depends on the well cross section, the pressure gradient at the bottom of the well, and the characteristics of the substrate at the well bottom. (e.g., porosity). The volume flow rate into the well can be written as a function of the rate of change of the well water level:
Combining the above three equations yields a simple differential equation in z:
A dz/dt = ρ g (zT − z) / R
which may be solved:
z(t) = zT − (zT − z0) e^(−t/τ)
where z0 is the well water level at time t = 0 and τ is the well time constant:
τ = A R / (ρ g)
Note that if dz/dt for a depleted well can be measured, it will be equal to zT/τ and the time constant τ can be calculated. According to the above model, it will take an infinite amount of time for a well to fully recover, but if we consider a well that is 99% recovered to be "practically" recovered, the time for a well to practically recover from a level z will be:
t = τ ln[(zT − z) / (0.01 zT)]
For a well that is fully depleted (z=0) it would take a time of about 4.6 τ to practically recover.
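The recovery model above can be sketched numerically. This is a minimal illustration; the function names and the sample values of zT and τ are ours, not from the source:

```python
import math

def well_level(t, z0, zT, tau):
    """Well water level at time t under the exponential-recovery model
    dz/dt = (zT - z)/tau, with z(0) = z0."""
    return zT - (zT - z0) * math.exp(-t / tau)

def practical_recovery_time(z, zT, tau, fraction=0.99):
    """Time to recover from level z up to fraction*zT (99% by default)."""
    return tau * math.log((zT - z) / ((1.0 - fraction) * zT))

# A fully depleted well (z = 0) takes about ln(100) ~ 4.6 time constants:
t99 = practical_recovery_time(0.0, zT=10.0, tau=1.0)
```

For z = 0 the expression reduces to τ ln(100), the 4.6 τ figure quoted above.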
The above model does not take into account the depletion of the aquifer due to the pumping which lowered the well water level (See aquifer test and groundwater flow equation). Also, practical wells may have impermeable walls only up to, but not including the bedrock, which will give a larger surface area for water to enter the well.
Similar and related water structures
Types of ancient wells
Brick-lined well
Castle well, for use in the castle
Cistern, ancient Greek
Stepwell, ancient India
Modern construction techniques
Baptist well drilling, simple technique
Rodriguez well, for harvesting drinking water in polar regions
Spring supply, piped water supply from the well
Uses
Holy well, sacred wells in various religions
Abraham's well, sacred well in Israel
Ghat, sacred in Hinduism and Buddhism
Drainage and irrigation
Drainage by wells
Shadoof, an irrigation tool that is used to lift water from a water source onto land or into another waterway or basin
Washing
Lavoir, public place for washing clothes.
See also
Fossil water
History of water supply and sanitation
Ancient water conservation techniques
Self-supply of water and sanitation
References
Bibliography
External links
Sustainable Groundwater Development theme of the Rural Water Supply Network (RWSN)
Water Portal – Akvopedia
Sustainable Sanitation and Water Management Toolbox
U.S. Centers for Disease Control and Prevention (CDC) Healthy Water – Water Wells Site covering well basics, guidelines for proper siting and location of wells to avoid contamination, well testing, diseases related to wells, emergency well treatment and other topics.
US Geological Survey – Ground water: Wells
US Geological Survey – Water Science Pictures Flowing Artesian Well
Drilling wells 18 extremely useful questions and answers
American Ground Water Trust
Lifewater International Technical Library
Well Construction Technical Resources for NGOs
Archaeological features
Bodies of water
Well
In situ geotechnical investigations
Treedom is a platform that allows anyone to plant trees in different countries of the world. Treedom also allows the owners of the planted trees to receive images of their newly planted trees, along with GPS coordinates and updates from the project each tree is part of.
Procedure
The project developer who applies to become a "tree planter" must make a formal request in the form of a "project". The submission is reviewed to exclude projects that would require cutting other trees to make space, violate the law, involve planting invasive species, and the like. The farmer confirms that a tree has been planted using a specialized mobile application that captures both a photo and GPS coordinates. These reports are then checked manually, verifying the location, the quality of the image and the species of the planted tree. Trees that do not take root within the first three years must be replanted. Treedom claims to inspect at least 25% of these projects on site each year. Additionally, 5% of the planted trees are set aside as a so-called "Project Reserve" to cover possible loss of trees and the related absorption (such as trees that die after the third year, for which no substitution is provided). A user can then order the planting of a selected tree online, paying as for any web purchase.
Tree planters
Treedom works in collaboration with small collectives of farmers, local communities and NGOs across different countries, including Kenya, Tanzania, Guatemala, Ecuador, Haiti, Nepal, Pakistan, Peru and Italy.
For trees which bear fruit, the fruit is reckoned to belong to the farmers who planted them. The farmer planting a tree remains responsible for its growth and care, while the organization provides support by arranging agroforestry training and income opportunities. The platform is known to promote the welfare of farmers, including female farmers, for whom it announced a "Mother's Day campaign" in March 2020 with the aim of spreading awareness of the difficulties faced by female agricultural workers.
User interface
When a person chooses to own a certain tree, the sapling is planted by a local farmer on behalf of that person. Updates on the sapling are provided on a regular basis, using its GPS location and photographs, via a webpage dedicated to the plant. Apart from this, the platform also allows a tree to be gifted. It is also possible to view local weather data in the vicinity of the tree.
History
Treedom was founded in 2010 by Tommaso Speroni and Federico Garcea in Florence, Italy. The organization's objectives are described in terms of the Sustainable Development Goals, and include countering deforestation, protecting biodiversity, preventing soil erosion and combating emissions on one side, and sustainable food production and income security for farmers on the other. As of April 2021, Treedom was reported to collaborate with 75,000 farmers and to have planted more than 2 million trees across Asia, Africa and Central and South America.
References
Environmental organisations based in Italy
Biodiversity
Carbon emissions
Organizations established in 2010
B Lab-certified corporations
GoldStar was a South Korean electronics company established in 1958. The corporate name was changed to LG Electronics and LG Cable on February 28, 1995, after merging with Lucky Chemical. LG Cable was spun off from LG Electronics and changed its name to LS Cable in 2005.
Manufacturing
GoldStar manufactured a wide variety of products, including radios, televisions, air conditioners, MSX home computers, LCD games, videocassette recorders, video and audio cassette tapes, microwave ovens, electronic typewriters, integrated circuits, escalators, elevators, injection molding machines, dehumidifiers, fans, and tractors. GoldStar televisions became a commonly sold brand of consumer television sets in the United States in the 1980s.
GoldStar Precision was a division manufacturing electronic test equipment, such as multimeters and oscilloscopes, and industrial electronics. The name was changed to LG Precision with the merger, with many of the same products and model numbers being produced but with new branding. The measuring device division was acquired by EZ Digital in 1999, again with identical or similar model numbers. The industrial products continue under the company name LIG Nex1.
Motors
GoldStar tractors began in 1975 as a division of Hyundai, in cooperation with Yanmar of Japan. In 1977 they began cooperation with Fiat of Italy, and in 1983 the division was acquired by GoldStar. That same year Lucky Chemical and GoldStar Co. Ltd merged to form Lucky-Goldstar, and the tractor business began cooperation with Mitsubishi; cooperation with Fiat was re-established in 1984.
Tractors were sold under the GoldStar and Fiat-GoldStar brands. Fiat-GoldStar was a brand of tractors sold in cooperation with the Fiat company of Italy. The Fiat-GoldStar name was discontinued when GoldStar changed its corporate name to LG Cable.
Other products
GoldStar also produced some models of computer monitors, optical disc drives and a version of the 3DO multimedia video game console, alongside several other manufacturers such as Sanyo, JVC and Samsung.
See also
LG Tractors
LS Tractors
References
External links
LG Electronics
Computer companies of South Korea
Defunct computer hardware companies
Home appliance manufacturers of South Korea
Electronics companies established in 1958
Manufacturing companies disestablished in 2002
2002 disestablishments in South Korea
South Korean companies established in 1958
Radio manufacturers
Albatrellus piceiphilus is a species of fungus in the family Albatrellaceae. Found in Picea crassifolia forest in Gansu Province, China, it was described as new to science in 2008. Molecular analysis shows that it groups in a "Russuloid" clade with Albatrellus citrinus and A. ovinus.
References
External links
Russulales
Fungi described in 2008
Fungi of Asia
Taxa named by Yu-Cheng Dai
Taxa named by Bao-Kai Cui
Fungus species
Chlorotonil A is a polyketide natural product produced by the myxobacterium Sorangium cellulosum So ce1525. It displays antimalarial activity in an animal model, and has in vitro antibacterial and antifungal activity. The activity of chlorotonil A has been attributed to the gem-dichloro-1,3-dione moiety, which is a unique functionality in polyketides. In addition to its unique halogenation, the structure of chlorotonil A has also garnered interest due to its similarity to anthracimycin, a polyketide natural product with antibiotic activity against Gram-positive bacteria. Recently, structure optimization has yielded the semi-synthetic derivatives ChB1-Epo2 and Dehalogenil, molecules with significantly improved physicochemical properties.
Biosynthesis
Chlorotonil A is synthesized from a type I modular polyketide synthase (PKS). This gene cluster does not have any acyltransferase (AT) domains, indicating that it is a trans-AT PKS; in these systems, there is a tandem-AT domain that loads the extender subunits onto the acyl carrier protein (ACP) and checks the intermediates, rather than individual AT domains in each module. The gene cluster of chlorotonil A is organized so that the initiator, acetyl-CoA, is loaded onto the tandem-AT domain, then is iteratively elongated with malonyl-CoA units to construct the macrolactone backbone. At modules 3 and 7, a double bond shift occurs in the elongation module to allow for the β,γ-unsaturation and α-methylation. There is a spontaneous, non-enzymatic intramolecular Diels-Alder-like [4+2] cycloaddition at module 8 to furnish the decalin motif.
Following macrolactonization by the thioesterase domain of module 10, the premature chlorotonil A core is chlorinated twice by CtoA, a flavin-dependent halogenase. The halogenated core is then methylated by the standalone SAM-dependent methyltransferase CtoF to yield chlorotonil A.
References
See also
Myxobacteria
Polyketides
Heterocyclic compounds with 3 rings
Oxygen heterocycles
Organochlorides
A variable-incidence wing has an adjustable angle of incidence relative to its fuselage. This allows the wing to operate at a high angle of attack for take-off and landing while allowing the fuselage to remain close to horizontal.
The pivot mechanism adds extra weight over a conventional wing and increases costs, but in some applications the benefits can outweigh the costs.
Several examples have flown, with one, the F-8 Crusader carrier-borne jet fighter, entering production.
History
Some early aeroplanes had wings which could be varied in incidence for control and trim, in place of conventional elevator control surfaces. Wing warping varied the incidence of the outer wing and was used by several pioneers, including initially the Wright brothers.
Early examples of rigid variable-incidence wings were not particularly successful. They include the Mulliner Knyplane in 1911, the Ratmanoff monoplane in 1913 and the Paul Schmitt biplane, also in 1913. A patent for a rigid variable-incidence wing was lodged in France on 20 May 1912 by Bulgarian inventor George Boginoff. It is believed that four unsuccessful Russian types were built between 1916 and 1917. The Zerbe Air Sedan was a tandem quadruplane which flew only once, in 1921.
The first example to be made in any quantity was the French tandem-wing Mignet Pou du Ciel (Flying Flea), which became briefly popular during the 1930s. It had a variable-incidence forewing which proved unsafe, and sales were discontinued following a series of fatal crashes.
During World War II, the German company Blohm & Voss developed the variable-incidence monoplane to provide increased lift at takeoff, where the rear fuselage was too close to the ground to allow rotation of the whole aircraft. The fuselage of the BV 144 prototype transport sat low on a short undercarriage, allowing passengers to go on and off without the need for additional steps. Another proposal by B&V, the P 193 attack aircraft, was of pusher configuration and could not rotate its fuselage for takeoff without the propeller fouling the ground, so it was given a variable-incidence wing. Russian designer S. G. Kozlov designed the E1 variable-incidence fighter, but the unfinished prototype was destroyed when the factory was overrun by Germany in 1941 (Yefim Gordon and Bill Gunston, Soviet X-Planes, Midland, 2000, p. 83).
Carrier-borne aircraft must have good forward visibility during the descent and approach for a deck landing. Without a variable-incidence wing (or other high-lift device), the pilot must pitch up the entire aircraft to maintain lift at the slow approach speed required, and this can restrict forward vision. By increasing the incidence of the wing but not the fuselage, both high lift and good forward vision can be maintained. The device also avoids the need for a long, bulky and heavy nose undercarriage to raise the angle of attack at takeoff. The Supermarine Type 322 prototype flew in 1943, and the Seagull ASR.1 amphibian flying boat in 1948.
After the war the USA revisited the idea for the jet age. The Martin XB-51 bomber and the Republic XF-91 interceptor adopted variable incidence for much the same reason as B&V. Both first flew in 1949, but only a handful of prototypes of either were built. They were followed in 1955 by the Vought F-8 Crusader carrier-borne jet fighter, the only variable-incidence type to go into production and enjoy a successful service career.
See also
Stabilator - a variable-incidence horizontal stabilizer or tailplane.
Tiltwing - a type of vertical takeoff plane which tilts its wings and engines.
Variable camber wing - in which the aerofoil profile is changed rather than tilted.
References
Aircraft configurations
Variable-geometry-wing aircraft
Aircraft wing design
Wing configurations
Normethadone (INN, BAN; brand names Ticarda, Cophylac, Dacartil, Eucopon, Mepidon, Noramidone, Normedon, and others), also known as desmethylmethadone or phenyldimazone, is a synthetic opioid analgesic and antitussive agent.
Normethadone is listed under the Single Convention on Narcotic Drugs 1961 and is a Schedule I Narcotic controlled substance in the United States, with a DEA ACSCN of 9635 and an annual manufacturing quota of 2 grams. It has an effective span of action of about 14 days, and is 12 to 20 times stronger than morphine. The salts in use are the hydrobromide (free base conversion ratio 0.785), hydrochloride (0.890), methyliodide (0.675), oxalate (0.766), picrate (0.563), and the 2,6-di-tert-butylnaphthalenedisulfonate (0.480).
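The free-base conversion ratios quoted above can be applied as simple multipliers. A sketch, assuming the usual convention that the ratio converts a salt mass into its free-base-equivalent mass:

```python
# Free-base conversion ratios quoted above (multiply salt mass by the
# ratio to get the free-base-equivalent mass -- an assumed usage).
CONVERSION_RATIOS = {
    "hydrobromide": 0.785,
    "hydrochloride": 0.890,
    "methyliodide": 0.675,
    "oxalate": 0.766,
    "picrate": 0.563,
}

def free_base_equivalent(salt_mass_mg, salt):
    """Mass of normethadone free base in a given mass of the named salt."""
    return salt_mass_mg * CONVERSION_RATIOS[salt]
```

For example, 10 mg of the hydrochloride corresponds to about 8.9 mg of free base.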
See also
Methadone
References
Dimethylamino compounds
Analgesics
Antitussives
Ketones
Mu-opioid receptor agonists
Synthetic opioids
The International Centre for Diffraction Data (ICDD) maintains a database of powder diffraction patterns, the Powder Diffraction File (PDF), including the d-spacings (related to angle of diffraction) and relative intensities of observable diffraction peaks. Patterns may be experimentally determined, or computed based on crystal structure and Bragg's law. It is most often used to identify substances based on x-ray diffraction data, and is designed for use with a diffractometer. The PDF contains more than a million unique material data sets. Each data set contains diffraction, crystallographic and bibliographic data, as well as experimental, instrument and sampling conditions, and select physical properties in a common standardized format.
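As a sketch of how a d-spacing relates to a measured peak position via Bragg's law (the default wavelength is Cu Kα1; the function is illustrative, not part of the PDF software):

```python
import math

def d_spacing(two_theta_deg, wavelength_angstrom=1.5406, n=1):
    """d-spacing (in angstroms) for a peak at the given 2-theta position,
    from Bragg's law n*lambda = 2*d*sin(theta).
    The default wavelength is Cu K-alpha1."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength_angstrom / (2.0 * math.sin(theta))

# A peak near 2-theta = 28.44 deg with Cu radiation gives d ~ 3.14 A,
# the kind of d-value stored alongside relative intensities in a PDF entry.
d = d_spacing(28.44)
```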
The organization was founded in 1941 as the Joint Committee on Powder Diffraction Standards. In 1978, the current name was adopted to highlight the global commitment of this scientific endeavor.
The ICDD is a nonprofit scientific organization working in the field of X-ray analysis and materials characterization. It produces materials databases, characterization tools, and educational materials, as well as organizing and supporting global workshops, clinics and conferences.
Products and services of the ICDD include the paid, subscription-based Powder Diffraction File databases (PDF-2, PDF-4+, PDF-4+/Web, PDF-4/Minerals, PDF-4/Organics, PDF-4/Axiom, and ICDD Server Edition), educational workshops, clinics, and symposia. It is a sponsor of the Denver X-ray Conference and the Pharmaceutical Powder X-ray Diffraction Symposium. It also publishes the journals Advances in X-ray Analysis and Powder Diffraction.
In 2019, Materials Data, also known as MDI, merged with ICDD. Materials Data creates JADE software used to collect, analyze, and simulate XRD data and solve issues in an array of materials science projects.
In 2020, the ICDD and the Cambridge Crystallographic Data Centre, which curates and maintains the Cambridge Structural Database, announced a data partnership.
See also
Powder diffraction
Crystallography
References
External links
History, contents & use of the PDF
Materials Data
Advances in X-ray Analysis—Technical articles on x-ray methods and analyses
Powder Diffraction Journal—quarterly journal published by the JCPDS-International Centre for Diffraction Data through the Cambridge University Press
Denver X-ray Conference—World's largest X-ray conference on the latest advancements in XRD and XRF
PPXRD-16 —Pharmaceutical Powder X-ray Diffraction Symposium
Crystallography organizations
Diffraction
Optics institutions
Organizations established in 1941
In genetics, a master regulator gene is a regulator gene at the top of a gene regulation hierarchy, particularly in regulatory pathways related to cell fate and differentiation.
Examples
Most genes considered master regulators code for transcription factor proteins, which in turn alter the expression of downstream genes in the pathway. Canonical examples of master regulators include Oct-4 (also called POU5F1), SOX2, and NANOG, all transcription factors involved in maintaining pluripotency in stem cells. Master regulators involved in development and morphogenesis can also appear as oncogenes relevant to tumorigenesis and metastasis, as with the Twist transcription factor.
Other genes reported as master regulators code for SR proteins, which function as splicing factors, and some noncoding RNAs.
Criticism
The master regulator concept has been criticized for being a "simplified paradigm" that fails to account for the multifactorial influences on some cell fates.
References
Gene expression
Comunes is a nonprofit organization aiming to encourage the commons and facilitate grassroots work through free software web tools. Previously known as Ourproject.org, the collective established itself as a legal entity in 2009, forming Comunes. It now serves as an umbrella organization for several projects related to the commons.
Philosophy and Values
The objectives of Comunes include providing legal protection to member projects, together with technical infrastructure. The organization claims to be inspired by the Software in the Public Interest organization, which provides similar protection to free software projects. Comunes member projects must focus on encouraging the protection or expansion of the commons. The Comunes Manifesto presents social movements as nodes in a social network, analyses the problems of this ecosystem, and proposes Comunes web tools to mitigate them.
Projects
Ourproject.org
Ourproject.org is a web-based collaborative free content repository. It acts as a central location for offering web space and tools for projects of any topic, focusing on free culture and free knowledge. It aims to extend the ideas and methodology of free software to social areas and free culture in general. Thus, it provides multiple web services (hosting, mailing lists, wiki, ftp, forums, etc.) to an online community of social, cultural, artistic, and educational projects as long as they share their contents with Creative Commons licenses (or other free/libre licenses). Active since 2002, Ourproject.org hosts 1,200 projects and its services receive more than 1 million monthly visits.
Kune
Kune is a software platform for federated social networking and collaborative work, focusing on workgroups rather than individuals. Kune aims to allow the creation of online spaces of collaborative work, where organizations and individuals can build projects online, coordinate common agendas, set up virtual meetings and join organizations with similar interests. It is programmed using GWT, on top of the XMPP protocol, and integrates Wave-In-A-Box. Licensed under the AGPL, it has been under development since 2007, and it launched beta and production sites in April 2012.
Move Commons
Move Commons (MC) is a web tool for initiatives, collectives, and NGOs to declare and make visible their core principles. The idea behind MC follows the same mechanics as Creative Commons tagging of cultural works, providing a user-friendly, bottom-up labelling system for each initiative, with four meaningful icons and some keywords. It aims to boost the visibility and diffusion of such initiatives and to build a network among related initiatives and collectives, allowing mutual discovery. Additionally, newcomers can easily understand a collective's approach from its website, or discover collectives matching their field, location or interests with a semantic search. It has been presented in several forums. It is currently in beta, but a few organizations are already using their MC badges.
Other projects
Comunes includes other newer projects such as Alerta, the community-driven alert system, Plantaré, the community currency for seed exchange, and others.
Partners
Comunes has developed partnership with several organizations:
GRASIA (Group of Software Agents, Engineering & Applications): a research group of the Universidad Complutense de Madrid, collaborating with Comunes to offer joint grants to students and to provide hardware resources.
American University of Science and Technology (Beirut): offering students the chance to frame their senior project and Master theses within Comunes projects.
The Master of Free Software of Universidad Rey Juan Carlos (Madrid): offering students the chance to complete their compulsory Master internships in the free software community of Comunes projects.
Medialab-Prado: Serves as a forum to present Comunes initiatives. One of them, Move Commons, is part of their Commons Lab.
IEPALA Foundation: It has hired a programmer for the development of Kune and provides technical resources for an alpha-testing environment.
Xsto.info: Free software cooperative that provides technical infrastructure to Comunes without charge.
See also
Commons
Commons-based peer production
Ourproject
Kune (software)
Software in the Public Interest
References
External links
Comunes website
Free-content websites
Open content projects
Creative Commons-licensed websites
Collaborative projects
Community websites
Internet properties established in 2009
Multilingual websites
Internet-related activism
Web service development tools
Organizations established in 2009
Non-profit organisations based in Spain
Public commons
Non-profit technology
Information technology organizations
Free and open-source software organizations
Web service providers
NGC 4485 is an irregular galaxy located in the northern constellation of Canes Venatici. It was discovered January 14, 1788 by William Herschel. This galaxy is located at a distance of 29 million light years and is receding with a heliocentric radial velocity of 483 km/s.
NGC 4485 is interacting with the spiral galaxy NGC 4490 and as a result both galaxies are distorted and are undergoing intense star formation. They have a projected separation of and are surrounded by an extended hydrogen envelope with a dense bridge of gas joining the two. Both galaxies are otherwise isolated and of low mass. The star formation rate in NGC 4485 is ·yr−1.
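For comparison, a naive Hubble-law estimate from the quoted radial velocity gives a smaller distance than the 29 million light years cited, as expected for a nearby galaxy whose peculiar velocity is not negligible (the H0 value below is an assumed round number, not from the article):

```python
# Naive Hubble-law distance from the quoted heliocentric velocity.
# H0 = 70 km/s/Mpc is an assumed round value, not from the article.
H0_KM_S_PER_MPC = 70.0
MLY_PER_MPC = 3.262  # million light years per megaparsec

def hubble_distance_mly(v_km_s, h0=H0_KM_S_PER_MPC):
    """Distance estimate d = v/H0, converted to millions of light years."""
    return v_km_s / h0 * MLY_PER_MPC

d_est = hubble_distance_mly(483.0)  # noticeably below the 29 Mly cited
```

The gap between this estimate and the quoted distance illustrates why redshift-independent distance indicators are used for nearby galaxies.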
Gallery
References
External links
Irregular galaxies
Canes II Group
Canes Venatici
4485
07648
41326
DeSanctis–Cacchione syndrome is a genetic disorder characterized by the skin and eye symptoms of xeroderma pigmentosum (XP) occurring in association with microcephaly, progressive intellectual disability, slowed growth and sexual development, deafness, choreoathetosis, ataxia and quadriparesis.
Genetics
In at least some cases, the gene lesion involves a mutation in the CSB gene.
It can be associated with ERCC6.
Diagnosis
Treatment
See also
Xeroderma pigmentosum
List of cutaneous conditions
References
External links
Genodermatoses
DNA replication and repair-deficiency disorders
Rare syndromes
Syndromes affecting the skin
Syndromes affecting the eye
Syndromes affecting head size
Syndromes with intellectual disability
Syndromes affecting the nervous system
Syndromes affecting hearing
In coordination chemistry, a stability constant (also called formation constant or binding constant) is an equilibrium constant for the formation of a complex in solution. It is a measure of the strength of the interaction between the reagents that come together to form the complex. There are two main kinds of complex: compounds formed by the interaction of a metal ion with a ligand and supramolecular complexes, such as host–guest complexes and complexes of anions. The stability constant(s) provide(s) the information required to calculate the concentration(s) of the complex(es) in solution. There are many areas of application in chemistry, biology and medicine.
History
Jannik Bjerrum (son of Niels Bjerrum) developed the first general method for the determination of stability constants of metal-ammine complexes in 1941. The reasons why this occurred at such a late date, nearly 50 years after Alfred Werner had proposed the correct structures for coordination complexes, have been summarised by Beck and Nagypál. The key to Bjerrum's method was the use of the then recently developed glass electrode and pH meter to determine the concentration of hydrogen ions in solution. Bjerrum recognised that the formation of a metal complex with a ligand was a kind of acid–base equilibrium: there is competition for the ligand, L, between the metal ion, Mn+, and the hydrogen ion, H+. This means that there are two simultaneous equilibria that have to be considered. In what follows electrical charges are omitted for the sake of generality. The two equilibria are
M + L <=> ML
HL <=> H + L
Hence by following the hydrogen ion concentration during a titration of a mixture of M and HL with base, and knowing the acid dissociation constant of HL, the stability constant for the formation of ML could be determined. Bjerrum went on to determine the stability constants for systems in which many complexes may be formed.
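Bjerrum's approach rests on solving the two simultaneous equilibria for the free-ligand concentration. A minimal numerical sketch (the constants are hypothetical, and `speciate` with its bisection loop is our illustration, not Bjerrum's actual graphical method):

```python
def speciate(M_tot, L_tot, H, K1, Ka):
    """Solve the two simultaneous equilibria (charges omitted)
        M + L <=> ML    K1 = [ML]/([M][L])
        HL  <=> H + L   Ka = [H][L]/[HL]
    at a fixed hydrogen-ion concentration H, by bisecting on the free
    ligand concentration in the ligand mass balance."""
    def ligand_residual(L):
        M = M_tot / (1.0 + K1 * L)          # from the metal mass balance
        return L * (1.0 + H / Ka) + K1 * M * L - L_tot
    lo, hi = 0.0, L_tot
    for _ in range(200):                    # bisection: residual is monotonic
        mid = 0.5 * (lo + hi)
        if ligand_residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    L = 0.5 * (lo + hi)
    M = M_tot / (1.0 + K1 * L)
    return {"L": L, "HL": H * L / Ka, "M": M, "ML": K1 * M * L}

# Lowering the pH (raising H) lets the proton out-compete the metal for
# the ligand, which is the competition Bjerrum exploited in titrations.
```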
The following twenty years saw a veritable explosion in the number of stability constants that were determined. Relationships, such as the Irving-Williams series were discovered. The calculations were done by hand using the so-called graphical methods. The mathematics underlying the methods used in this period are summarised by Rossotti and Rossotti. The next key development was the use of a computer program, LETAGROP to do the calculations. This permitted the examination of systems too complicated to be evaluated by means of hand-calculations. Subsequently, computer programs capable of handling complex equilibria in general, such as SCOGS and MINIQUAD were developed so that today the determination of stability constants has almost become a "routine" operation. Values of thousands of stability constants can be found in two commercial databases.
Theory
The formation of a complex between a metal ion, M, and a ligand, L, is in fact usually a substitution reaction. For example, in aqueous solutions, metal ions will be present as aqua ions, so the reaction for the formation of the first complex could be written as
[M(H2O)n] + L <=> [M(H2O)n−1L] + H2O
The equilibrium constant for this reaction is given by
K = [M(H2O)n−1L][H2O] / ([M(H2O)n][L])
[L] should be read as "the concentration of L" and likewise for the other terms in square brackets. The expression can be greatly simplified by removing those terms which are constant. The number of water molecules attached to each metal ion is constant. In dilute solutions the concentration of water is effectively constant. The expression becomes
K1 = [ML] / ([M][L])
Following this simplification a general definition can be given, for the general equilibrium
pM + qL <=> MpLq;  βpq = [MpLq] / ([M]^p [L]^q)
The definition can easily be extended to include any number of reagents. The reagents need not always be a metal and a ligand but can be any species which form a complex. Stability constants defined in this way, are association constants. This can lead to some confusion as pKa values are dissociation constants. In general purpose computer programs it is customary to define all constants as association constants. The relationship between the two types of constant is given in association and dissociation constants.
Stepwise and cumulative constants
A cumulative or overall constant, given the symbol β, is the constant for the formation of a complex from reagents. For example, the cumulative constant for the formation of ML2 is given by
M + 2L <=> ML2;  β2 = [ML2] / ([M][L]^2)
The stepwise constants, K1 and K2 refer to the formation of the complexes one step at a time.
M + L <=> ML;  K1 = [ML] / ([M][L])
ML + L <=> ML2;  K2 = [ML2] / ([ML][L])
It follows that
β2 = K1 K2
A cumulative constant can always be expressed as the product of stepwise constants. Conversely, any stepwise constant can be expressed as a quotient of two or more overall constants. There is no agreed notation for stepwise constants, though a symbol such as K is sometimes found in the literature. It is good practice to specify each stability constant explicitly, as illustrated above.
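The product/quotient relationships between stepwise and cumulative constants can be illustrated with hypothetical numbers (the log K values below are made up):

```python
# Hypothetical stepwise constants (illustrative values, not measured data):
K1 = 10 ** 8.2     # M + L  <=> ML
K2 = 10 ** 6.9     # ML + L <=> ML2

# The cumulative constant is the product of the stepwise constants,
# i.e. the log values simply add:
beta2 = K1 * K2            # M + 2L <=> ML2
log_beta2 = 8.2 + 6.9      # = 15.1

# Conversely, a stepwise constant is a quotient of overall constants:
K2_recovered = beta2 / K1
```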
Hydrolysis products
The formation of a hydroxo complex is a typical example of a hydrolysis reaction. A hydrolysis reaction is one in which a substrate reacts with water, splitting a water molecule into hydroxide and hydrogen ions. In this case the hydroxide ion then forms a complex with the substrate.
M + OH− <=> M(OH);  K = [M(OH)] / ([M][OH−])
In water the concentration of hydroxide is related to the concentration of hydrogen ions by the self-ionization constant, Kw.
The expression for hydroxide concentration, [OH−] = Kw / [H+], is substituted into the formation constant expression. For the equivalent reaction M + H2O <=> M(OH) + H+ this gives β* = [M(OH)][H+] / [M] = K·Kw.
In general, for the reaction

p M + q L + r H2O <=> MpLq(OH)r + r H+

the hydrolysis constant is β* = [MpLq(OH)r][H+]^r / ([M]^p [L]^q).
In the older literature the value of log K is usually cited for a hydrolysis constant. The log β* value is usually cited for a hydrolysed complex with the generic chemical formula MpLq(OH)r.
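Because the substitution described above amounts to β* = K·Kw, hydrolysis constants can be computed from association constants in log units. A numerical sketch (the log K value is an assumed, illustrative one, not data from this article):

```python
# Relation derived above: for M + OH- <=> M(OH) with association constant K,
# the hydrolysis constant beta* for M + H2O <=> M(OH) + H+ is
#   beta* = K * Kw,  i.e.  log beta* = log K + log Kw.
log_Kw = -14.0   # self-ionization of water at 25 C (approximate)
log_K = 11.81    # assumed, illustrative association constant

log_beta_star = log_K + log_Kw
print(round(log_beta_star, 2))  # -2.19
```

The negative log β* is why hydrolysed species become important only as the pH is raised.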
Acid–base complexes
A Lewis acid, A, and a Lewis base, B, can be considered to form a complex AB: A + B <=> AB.
There are three major theories relating to the strength of Lewis acids and bases and the interactions between them.
Hard and soft acid–base theory (HSAB). This is used mainly for qualitative purposes.
Drago and Wayland proposed a two-parameter equation which predicts the standard enthalpy of formation of a very large number of adducts quite accurately. −ΔH⊖ (A − B) = EAEB + CACB. Values of the E and C parameters are available.
Guttmann donor numbers: for bases the number is derived from the enthalpy of reaction of the base with antimony pentachloride in 1,2-Dichloroethane as solvent. For acids, an acceptor number is derived from the enthalpy of reaction of the acid with triphenylphosphine oxide.
For more details see: acid–base reaction, acid catalysis, Extraction (chemistry)
Thermodynamics
The thermodynamics of metal ion complex formation provides much significant information. In particular it is useful in distinguishing between enthalpic and entropic effects. Enthalpic effects depend on bond strengths and entropic effects have to do with changes in the order/disorder of the solution as a whole. The chelate effect, below, is best explained in terms of thermodynamics.
An equilibrium constant is related to the standard Gibbs free energy change for the reaction:

ΔG⊖ = −RT ln K = −2.303 RT log10 K

R is the gas constant and T is the absolute temperature. At 25 °C, ΔG⊖ = (−5.708 kJ mol−1) · log K. Free energy is made up of an enthalpy term and an entropy term:

ΔG⊖ = ΔH⊖ − TΔS⊖
The standard enthalpy change can be determined by calorimetry or by using the Van 't Hoff equation, though the calorimetric method is preferable. When both the standard enthalpy change and stability constant have been determined, the standard entropy change is easily calculated from the equation above.
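The calculation of the standard entropy change from a measured stability constant and calorimetric enthalpy is direct; a quick numerical check using the Cu2+/ethylenediamine values quoted in the chelate-effect table later in this article:

```python
import math

R = 8.314e-3   # gas constant, kJ K-1 mol-1
T = 298.15     # K

# Values for Cu2+ + en from the chelate-effect table in this article.
log_beta = 10.62
dH = -56.48    # kJ mol-1

dG = -R * T * math.log(10) * log_beta   # Delta G = -RT ln(beta)
dS = (dH - dG) / T                      # kJ K-1 mol-1, from dG = dH - T dS
print(round(dG, 1))        # -60.6 kJ mol-1
print(round(dS * 1000))    # about 14 J K-1 mol-1
```

The positive ΔS⊖ obtained here is the favourable entropy term discussed below for chelating ligands.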
The fact that stepwise formation constants of complexes of the type MLn decrease in magnitude as n increases may be partly explained in terms of the entropy factor. Take the case of the formation of octahedral complexes.
For the first step m = 6, n = 1 (m being the number of coordinated water molecules available for displacement and n the number of ligands bound) and the ligand can go into one of 6 sites. For the second step m = 5 and the second ligand can go into one of only 5 sites. This means that there is more randomness in the first step than the second one; ΔS⊖ is more positive, so ΔG⊖ is more negative and K1 > K2. The ratio of the stepwise stability constants can be calculated on this basis, but experimental ratios are not exactly the same because ΔH⊖ is not necessarily the same for each step. Exceptions to this rule are discussed below, in #chelate effect and #Geometrical factors.
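The purely statistical part of this argument can be made quantitative: taking K_n proportional to (sites free for the incoming ligand)/(ligands able to dissociate) — the standard statistical model, not data from this article — the ratios of successive stepwise constants follow directly:

```python
from fractions import Fraction

N = 6  # equivalent coordination sites in an octahedral complex

# Purely statistical model: K_n proportional to
# (sites free for the incoming ligand) / (ligands able to dissociate).
def K_stat(n):
    return Fraction(N - n + 1, n)

ratios = [K_stat(n) / K_stat(n + 1) for n in range(1, N)]
print(ratios[0])  # 12/5, i.e. K1/K2 = 2.4 on a statistical basis
```

Experimental ratios deviate from these values because, as noted above, ΔH⊖ is not exactly the same for each step.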
Ionic strength dependence
The thermodynamic equilibrium constant, K⊖, for the equilibrium

M + L <=> ML

can be defined as

K⊖ = {ML} / ({M}{L})
where {ML} is the activity of the chemical species ML etc. K⊖ is dimensionless since activity is dimensionless. Activities of the products are placed in the numerator, activities of the reactants are placed in the denominator. See activity coefficient for a derivation of this expression.
Since activity is the product of concentration and activity coefficient (γ) the definition could also be written as

K⊖ = ([ML] / ([M][L])) × (γML / (γM γL)) = ([ML] / ([M][L])) × Γ

where [ML] represents the concentration of ML and Γ is a quotient of activity coefficients. This expression can be generalized as

β⊖pq = ([MpLq] / ([M]^p [L]^q)) × Γ
To avoid the complications involved in using activities, stability constants are determined, where possible, in a medium consisting of a solution of a background electrolyte at high ionic strength, that is, under conditions in which Γ can be assumed to be always constant. For example, the medium might be a solution of 0.1 mol dm−3 sodium nitrate or 3 mol dm−3 sodium perchlorate. When Γ is constant it may be ignored and the general expression given above is obtained.
All published stability constant values refer to the specific ionic medium used in their determination and different values are obtained with different conditions, as illustrated for the complex CuL (L = glycinate). Furthermore, stability constant values depend on the specific electrolyte used as the value of Γ is different for different electrolytes, even at the same ionic strength. There does not need to be any chemical interaction between the species in equilibrium and the background electrolyte, but such interactions might occur in particular cases. For example, phosphates form weak complexes with alkali metals, so, when determining stability constants involving phosphates, such as ATP, the background electrolyte used will be, for example, a tetraalkylammonium salt. Another example involves iron(III) which forms weak complexes with halide and other anions, but not with perchlorate ions.
When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion theory (SIT) and other theories.
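To illustrate how far activity coefficients depart from unity at the ionic strengths mentioned above, the Davies equation — a simple empirical model, used here in place of the fuller SIT treatment — can be evaluated directly:

```python
import math

def log_gamma_davies(z, I, A=0.509):
    """Davies equation for a single-ion activity coefficient at 25 C,
    a simple empirical model (reasonable up to I of about 0.5 mol dm-3)."""
    s = math.sqrt(I)
    return -A * z ** 2 * (s / (1 + s) - 0.3 * I)

# A 2+ ion at I = 0.1 mol dm-3 is already far from ideality:
gamma = 10 ** log_gamma_davies(2, 0.1)
print(round(gamma, 2))  # 0.37
```

A coefficient of about 0.37 for a divalent ion shows why constants measured in different media cannot be compared without correction.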
Temperature dependence
All equilibrium constants vary with temperature according to the Van 't Hoff equation

d ln K / dT = ΔH⊖ / (RT^2)

Alternatively

d ln K / d(1/T) = −ΔH⊖ / R
R is the gas constant and T is the thermodynamic temperature. Thus, for exothermic reactions, where the standard enthalpy change, ΔH⊖, is negative, K decreases with temperature, but for endothermic reactions, where ΔH⊖ is positive, K increases with temperature.
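The integrated form of the van 't Hoff equation (assuming ΔH⊖ constant over the range) allows a constant measured at one temperature to be corrected to another; a sketch with assumed, illustrative values:

```python
import math

R = 8.314  # J K-1 mol-1

def logK_at_T(logK_ref, T_ref, T, dH):
    """Integrated van 't Hoff equation, assuming the standard enthalpy
    change dH (J mol-1) is constant over the temperature range."""
    lnK = logK_ref * math.log(10) - (dH / R) * (1.0 / T - 1.0 / T_ref)
    return lnK / math.log(10)

# For an exothermic reaction (dH < 0), K decreases on warming:
v = logK_at_T(5.0, 298.15, 318.15, -40e3)
print(round(v, 2))  # 4.56
```

The drop from log K = 5.0 to about 4.56 over a 20 K rise illustrates the behaviour stated above for exothermic reactions.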
Factors affecting the stability constants of complexes
The chelate effect
Consider the two equilibria, in aqueous solution, between the copper(II) ion, Cu2+ and ethylenediamine (en) on the one hand and methylamine, MeNH2 on the other.

Cu2+ + en <=> [Cu(en)]2+

Cu2+ + 2 MeNH2 <=> [Cu(MeNH2)2]2+
In the first reaction the bidentate ligand ethylenediamine forms a chelate complex with the copper ion. Chelation results in the formation of a five-membered ring. In the second reaction the bidentate ligand is replaced by two monodentate methylamine ligands of approximately the same donor power, meaning that the enthalpy of formation of Cu–N bonds is approximately the same in the two reactions. Under conditions of equal copper concentrations and when the concentration of methylamine is twice the concentration of ethylenediamine, the concentration of the bidentate complex will be greater than the concentration of the complex with two monodentate ligands. The effect increases with the number of chelate rings, so the concentration of the EDTA complex, which has five chelate rings, is much higher than that of a corresponding complex with two monodentate nitrogen donor ligands and four monodentate carboxylate ligands. Thus, the phenomenon of the chelate effect is a firmly established empirical fact: under comparable conditions, the concentration of a chelate complex will be higher than the concentration of an analogous complex with monodentate ligands.
The thermodynamic approach to explaining the chelate effect considers the equilibrium constant for the reaction: the larger the equilibrium constant, the higher the concentration of the complex.
When the analytical concentration of methylamine is twice that of ethylenediamine and the concentration of copper is the same in both reactions, the concentration of [Cu(en)]2+ is much higher than the concentration of [Cu(MeNH2)2]2+ because the stability constant of the chelate complex is much larger (log β = 10.62 for en against 6.55 for two methylamine ligands; see the table below).
The difference between the two stability constants is mainly due to the difference in the standard entropy change, ΔS⊖. In the reaction with the chelating ligand there are two particles on the left and one on the right, whereas in the equation with the monodentate ligand there are three particles on the left and one on the right. This means that less entropy is lost when the chelate complex is formed than when the complex with monodentate ligands is formed. This is one of the factors contributing to the entropy difference. Other factors include solvation changes and ring formation. Some experimental data to illustrate the effect are shown in the following table.
{| class="wikitable"
! Equilibrium !! log β !! ΔG⊖ /kJ mol−1 !! ΔH⊖ /kJ mol−1 !! −TΔS⊖ /kJ mol−1
|-
| Cu2+ + 2 MeNH2 <=> [Cu(MeNH2)2]2+
||6.55|| −37.4 || −57.3||19.9
|-
| Cu2+ + en <=> [Cu(en)]2+
||10.62|| −60.67 || −56.48||−4.19
|}
These data show that the standard enthalpy changes are indeed approximately equal for the two reactions and that the main reason why the chelate complex is so much more stable is that the standard entropy term is much less unfavourable, indeed, it is favourable in this instance. In general it is difficult to account precisely for thermodynamic values in terms of changes in solution at the molecular level, but it is clear that the chelate effect is predominantly an effect of entropy. Other explanations, including that of Schwarzenbach, are discussed in Greenwood and Earnshaw.
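The internal consistency of the table can be verified numerically: each row should satisfy ΔG⊖ = ΔH⊖ + (−TΔS⊖) and, at 25 °C, ΔG⊖ = −(RT ln 10) log β with RT ln 10 ≈ 5.708 kJ mol−1. (The row assignments follow the text: methylamine first, ethylenediamine second.)

```python
# Consistency check of the thermodynamic data in the table above.
rows = [
    # (log beta, dG, dH, -T dS), energies in kJ mol-1
    (6.55, -37.4, -57.3, 19.9),     # Cu2+ + 2 MeNH2
    (10.62, -60.67, -56.48, -4.19), # Cu2+ + en
]

for log_beta, dG, dH, minus_TdS in rows:
    assert abs(dG - (dH + minus_TdS)) < 0.1      # dG = dH - T dS
    assert abs(dG - (-5.708 * log_beta)) < 0.1   # dG = -(RT ln 10) log beta
print("table is internally consistent")
```

Checks like this are a useful habit whenever log β, ΔG⊖, ΔH⊖ and TΔS⊖ are quoted together.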
The chelate effect increases as the number of chelate rings increases. For example, the complex [Ni(dien)2]2+ is more stable than the complex [Ni(en)3]2+; both complexes are octahedral with six nitrogen atoms around the nickel ion, but dien (diethylenetriamine, 1,4,7-triazaheptane) is a tridentate ligand and en is bidentate. The number of chelate rings is one less than the number of donor atoms in the ligand. EDTA (ethylenediaminetetraacetic acid) has six donor atoms so it forms very strong complexes with five chelate rings. Ligands such as DTPA, which have eight donor atoms, are used to form complexes with large metal ions such as lanthanide or actinide ions which usually form 8- or 9-coordinate complexes.
5-membered and 6-membered chelate rings give the most stable complexes. 4-membered rings are subject to internal strain because of the small inter-bond angles in the ring. The chelate effect is also reduced with 7- and 8-membered rings, because the larger rings are less rigid, so more entropy is lost in forming them.
Deprotonation of aliphatic –OH groups
Removal of a proton from an aliphatic –OH group is difficult to achieve in aqueous solution because the energy required for this process is rather large. Thus, ionization of aliphatic –OH groups occurs in aqueous solution only in special circumstances. One such circumstance is found with compounds containing the H2N–C–C–OH substructure. For example, compounds containing the 2-aminoethanol substructure can form metal–chelate complexes with the deprotonated form, H2N–C–C–O−. The chelate effect supplies the extra energy needed to break the O–H bond.
An important example occurs with the molecule tris. This molecule should be used with caution as a buffering agent as it will form chelate complexes with ions such as Fe3+ and Cu2+.
The macrocyclic effect
It was found that the stability of the complex of copper(II) with the macrocyclic ligand cyclam (1,4,8,11-tetraazacyclotetradecane) was much greater than expected in comparison to the stability of the complex with the corresponding open-chain amine.
This phenomenon was named the macrocyclic effect and it was also interpreted as an entropy effect. However, later studies suggested that both enthalpy and entropy factors were involved.
An important difference between macrocyclic ligands and open-chain (chelating) ligands is that macrocycles show selectivity for metal ions, based on the size of the cavity into which the metal ion is inserted when a complex is formed. For example, the crown ether 18-crown-6 forms much stronger complexes with the potassium ion, K+, than with the smaller sodium ion, Na+.
In hemoglobin an iron(II) ion is complexed by a macrocyclic porphyrin ring. The iron(II) in oxyhemoglobin is a low-spin complex, whereas in deoxyhemoglobin it is high-spin. The low-spin Fe2+ ion fits snugly into the cavity of the porphyrin ring, but high-spin iron(II) is significantly larger and the iron atom is forced out of the plane of the macrocyclic ligand. This effect contributes to the ability of hemoglobin to bind oxygen reversibly under biological conditions. In vitamin B12 a cobalt(II) ion is held in a corrin ring. Chlorophyll is a macrocyclic complex of magnesium(II).
Geometrical factors
Successive stepwise formation constants Kn in a series such as MLn (n = 1, 2, ...) usually decrease as n increases. Exceptions to this rule occur when the geometry of the MLn complexes is not the same for all members of the series. The classic example is the formation of the diamminesilver(I) complex [Ag(NH3)2]+ in aqueous solution:

Ag+ + NH3 <=> [Ag(NH3)]+;  K1

[Ag(NH3)]+ + NH3 <=> [Ag(NH3)2]+;  K2
In this case, K2 > K1. The reason for this is that, in aqueous solution, the ion written as Ag+ actually exists as the four-coordinate tetrahedral aqua species [Ag(H2O)4]+. The first step is then a substitution reaction involving the displacement of a bound water molecule by ammonia forming the tetrahedral complex [Ag(NH3)(H2O)3]+. In the second step, all the aqua ligands are lost and a linear, two-coordinate product [H3N–Ag–NH3]+ is formed. Examination of the thermodynamic data shows that the difference in entropy change is the main contributor to the difference in stability constants for the two complexation reactions.
Other examples exist where the change is from octahedral to tetrahedral, as in the formation of [CoCl4]2− from [Co(H2O)6]2+.
Classification of metal ions
Ahrland, Chatt and Davies proposed that metal ions could be described as class A if they formed stronger complexes with ligands whose donor atoms are nitrogen, oxygen or fluorine than with ligands whose donor atoms are phosphorus, sulfur or chlorine and class B if the reverse is true. For example, Ni2+ forms stronger complexes with amines than with phosphines, but Pd2+ forms stronger complexes with phosphines than with amines. Later, Pearson proposed the theory of hard and soft acids and bases (HSAB theory). In this classification, class A metals are hard acids and class B metals are soft acids. Some ions, such as copper(I), are classed as borderline. Hard acids form stronger complexes with hard bases than with soft bases. In general terms hard–hard interactions are predominantly electrostatic in nature whereas soft–soft interactions are predominantly covalent in nature. The HSAB theory, though useful, is only semi-quantitative.
The hardness of a metal ion increases with oxidation state. An example of this effect is given by the fact that Fe2+ tends to form stronger complexes with N-donor ligands than with O-donor ligands, but the opposite is true for Fe3+.
Effect of ionic radius
The Irving–Williams series refers to high-spin, octahedral, divalent metal ions of the first transition series. It places the stabilities of complexes in the order
Mn < Fe < Co < Ni < Cu > Zn
This order was found to hold for a wide variety of ligands. There are three strands to the explanation of the series.
The ionic radius is expected to decrease regularly from Mn2+ to Zn2+. This would be the normal periodic trend and would account for the general increase in stability.
The crystal field stabilisation energy (CFSE) increases from zero for manganese(II) to a maximum at nickel(II). This makes the complexes increasingly stable. CFSE returns to zero for zinc(II).
Although the CFSE for copper(II) is less than for nickel(II), octahedral copper(II) complexes are subject to the Jahn–Teller effect which results in a complex having extra stability.
Another example of the effect of ionic radius is the steady increase in stability of complexes with a given ligand along the series of trivalent lanthanide ions, an effect of the well-known lanthanide contraction.
Applications
Stability constant values are exploited in a wide variety of applications. Chelation therapy is used in the treatment of various metal-related illnesses, such as iron overload in β-thalassemia sufferers who have been given blood transfusions. The ideal ligand binds to the target metal ion and not to others, but this degree of selectivity is very hard to achieve. The synthetic drug deferiprone achieves selectivity by having two oxygen donor atoms so that it binds to Fe3+ in preference to the divalent ions present in the human body, such as Mg2+, Ca2+ and Zn2+. Treatment of poisoning by ions such as Pb2+ and Cd2+ is much more difficult since these are both divalent ions and selectivity is harder to accomplish. Excess copper in Wilson's disease can be removed by penicillamine or triethylenetetramine (TETA). DTPA has been approved by the U.S. Food and Drug Administration for the treatment of plutonium poisoning.
DTPA is also used as a complexing agent for gadolinium in MRI contrast enhancement. The requirement in this case is that the complex be very strong, as Gd3+ is very toxic. The large stability constant of the octadentate ligand ensures that the concentration of free Gd3+ is almost negligible, certainly well below the toxicity threshold. In addition the ligand occupies only 8 of the 9 coordination sites on the gadolinium ion. The ninth site is occupied by a water molecule which exchanges rapidly with the surrounding fluid, and it is this mechanism that makes the paramagnetic complex effective as a contrast agent.
EDTA forms such strong complexes with most divalent cations that it finds many uses. For example, it is often present in washing powder to act as a water softener by sequestering calcium and magnesium ions.
The selectivity of macrocyclic ligands can be used as a basis for the construction of an ion selective electrode. For example, potassium selective electrodes are available that make use of the naturally occurring macrocyclic antibiotic valinomycin.
An ion-exchange resin such as chelex 100, which contains chelating ligands bound to a polymer, can be used in water softeners and in chromatographic separation techniques. In solvent extraction the formation of electrically neutral complexes allows cations to be extracted into organic solvents. For example, in nuclear fuel reprocessing uranium(VI) and plutonium(VI) are extracted into kerosene as the complexes [MO2(TBP)2(NO3)2] (TBP = tri-n-butyl phosphate). In phase-transfer catalysis, a substance which is insoluble in an organic solvent can be made soluble by addition of a suitable ligand. For example, potassium permanganate oxidations can be achieved by adding a catalytic quantity of a crown ether and a small amount of organic solvent to the aqueous reaction mixture, so that the oxidation reaction occurs in the organic phase.
In all these examples, the ligand is chosen on the basis of the stability constants of the complexes formed. For example, TBP is used in nuclear fuel reprocessing because (among other reasons) it forms a complex strong enough for solvent extraction to take place, but weak enough that the complex can be destroyed by nitric acid, recovering the uranyl cation as nitrato complexes, such as [UO2(NO3)4]2−, back in the aqueous phase.
Supramolecular complexes
Supramolecular complexes are held together by hydrogen bonding, hydrophobic forces, van der Waals forces, π-π interactions, and electrostatic effects, all of which can be described as noncovalent bonding. Applications include molecular recognition, host–guest chemistry and anion sensors.
A typical application in molecular recognition involved the determination of formation constants for complexes formed between a tripodal substituted urea molecule and various saccharides. The study was carried out using a non-aqueous solvent and NMR chemical shift measurements. The object was to examine the selectivity with respect to the saccharides.
An example of the use of supramolecular complexes in the development of chemosensors is provided by the use of transition-metal ensembles to sense for ATP.
Anion complexation can be achieved by encapsulating the anion in a suitable cage. Selectivity can be engineered by designing the shape of the cage. For example, dicarboxylate anions could be encapsulated in the ellipsoidal cavity in a large macrocyclic structure containing two metal ions.
Experimental methods
The method developed by Bjerrum is still the main method in use today, though the precision of the measurements has greatly increased. Most commonly, a solution containing the metal ion and the ligand in a medium of high ionic strength is first acidified to the point where the ligand is fully protonated. This solution is then titrated, often by means of a computer-controlled auto-titrator, with a solution of CO2-free base. The concentration, or activity, of the hydrogen ion is monitored by means of a glass electrode. The data set used for the calculation has three components: a statement defining the nature of the chemical species that will be present, called the model of the system, details concerning the concentrations of the reagents used in the titration, and finally the experimental measurements in the form of titre and pH (or emf) pairs.
Other ion-selective electrodes (ISE) may be used. For example, a fluoride electrode may be used in the determination of the stability constants of the fluoro-complexes of a metal ion.
It is not always possible to use an ISE. If that is the case, the titration can be monitored by other types of measurement. Ultraviolet–visible spectroscopy, fluorescence spectroscopy and NMR spectroscopy are the most commonly used alternatives. Current practice is to take absorbance or fluorescence measurements at a range of wavelengths and to fit these data simultaneously. Various NMR chemical shifts can also be fitted together.
The chemical model will include values of the protonation constants of the ligand, which will have been determined in separate experiments, a value for log Kw and estimates of the unknown stability constants of the complexes formed. These estimates are necessary because the calculation uses a non-linear least-squares algorithm. The estimates are usually obtained by reference to a chemically similar system. The stability constant databases can be very useful in finding published stability constant values for related complexes.
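The core of such a calculation is solving the mass-balance equations for the free concentrations at each titration point. A minimal pure-Python sketch for a single 1:1 complex (function name and values are invented for illustration; real programs handle many species and protonation equilibria simultaneously):

```python
def ml_concentration(K, M_tot, L_tot):
    """Solve the 1:1 mass-balance equations
         M_tot = [M] + [ML],  L_tot = [L] + [ML],  K = [ML] / ([M][L])
    for [ML] by bisection.  This is a minimal sketch of the speciation
    step that equilibrium-refinement programs repeat at every titration
    point."""
    lo, hi = 0.0, min(M_tot, L_tot)   # f(lo) > 0, f(hi) < 0
    for _ in range(200):
        x = 0.5 * (lo + hi)
        f = K * (M_tot - x) * (L_tot - x) - x
        if f > 0:
            lo = x
        else:
            hi = x
    return 0.5 * (lo + hi)

# With a large constant, almost all of the limiting reagent is complexed.
ml = ml_concentration(K=1e8, M_tot=1e-3, L_tot=2e-3)
print(ml)  # just under 1e-3
```

The least-squares refinement then adjusts the trial constants until speciation results like this reproduce the measured titration curve.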
In some simple cases the calculations can be done in a spreadsheet. Otherwise, the calculations are performed with the aid of general-purpose computer programs. The most frequently used programs are:
Potentiometric and/or spectrophotometric data: PSEQUAD
Potentiometric data: HYPERQUAD, BEST, ReactLab pH PRO
Spectrophotometric data: HypSpec, SQUAD, SPECFIT, ReactLab EQUILIBRIA.
NMR data: HypNMR, WINEQNMR2
In biochemistry, formation constants of adducts may be obtained from Isothermal titration calorimetry (ITC) measurements. This technique yields both the stability constant and the standard enthalpy change for the equilibrium. It is mostly limited, by availability of software, to complexes of 1:1 stoichiometry.
Critically evaluated data
The following references are for critical reviews of published stability constants for various classes of ligands. All these reviews are published by IUPAC and the full text is available, free of charge, in pdf format.
ethylenediamine (en)
Nitrilotriacetic acid (NTA)
aminopolycarboxylic acids (complexones)
Alpha hydroxy acids and other hydroxycarboxylic acids
crown ethers
phosphonic acids
imidazoles and histamines
amino acids with polar side-chains
nucleotides
acetylacetone
general
Chemical speciation of environmentally significant heavy metals with inorganic ligands. Part 1: The Hg2+–Cl−, OH−, , , and systems.
Chemical speciation of environmentally significant metals with inorganic ligands Part 2: The Cu2+–OH−, Cl−, , , and aqueous systems
Chemical speciation of environmentally significant metals with inorganic ligands Part 3: The Pb2+–OH−, Cl−, , , and systems
Chemical speciation of environmentally significant metals with inorganic ligands. Part 4: The Cd2+–OH−, Cl−, , , and systems
Databases
The Ki Database is a public domain database of published binding affinities (Ki) of drugs and chemical compounds for receptors, neurotransmitter transporters, ion channels, and enzymes.
BindingDB is a public domain database of measured binding affinities, focusing chiefly on the interactions of protein considered to be drug-targets with small, drug-like molecules
References
Further reading
External links
Stability constants website: Contains information on computer programs, applications, databases and hardware for experimental titrations.
Equilibrium chemistry
Coordination chemistry | Stability constants of complexes | Chemistry | 6,499 |
36,315,777 | https://en.wikipedia.org/wiki/WRKY%20protein%20domain | The WRKY domain is found in the WRKY transcription factor family, a class of transcription factors. The WRKY domain is found almost exclusively in plants although WRKY genes appear present in some diplomonads, social amoebae and other amoebozoa, and fungi incertae sedis. They appear absent in other non-plant species. WRKY transcription factors have been a significant area of plant research for the past 20 years. The WRKY DNA-binding domain recognizes the W-box (T)TGAC(C/T) (and variants of this sequence) cis-regulatory element.
Structure
WRKY transcription factors contain either one or two WRKY protein domains. The WRKY protein domain is a 60- to 70-amino-acid-long type of DNA-binding domain. The domain is characterized by a highly conserved core WRKYGQK motif and a zinc finger region. The cysteine and histidine zinc finger domain occurs as a CX4-5CX22-23HXH or CX7CX23HXC type, where X can be any amino acid. The zinc finger binds a Zn2+ ion, which is required for protein function. While the WRKYGQK motif is highly conserved in most WRKY domains, variation in the core sequence has been documented. A frequently occurring variant of the core sequence is WRKYGKK, which is present in most plant species.
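The motif and zinc-finger spacing patterns described above can be written as regular expressions; a sketch using a synthetic (invented) test sequence:

```python
import re

# The conserved core motif (WRKYGQK and its WRKYGKK variant) and the two
# zinc-finger spacing patterns described above, as regular expressions.
core = re.compile(r"WRKYG[QK]K")
zinc_finger = re.compile(r"C.{4,5}C.{22,23}H.H|C.{7}C.{23}H.C")

# Synthetic (invented) test sequence containing a core motif and a
# CX4CX22HXH-type zinc finger:
seq = "MDNS" + "WRKYGQK" + "A" * 5 + "C" + "A" * 4 + "C" + "A" * 22 + "H" + "A" + "H"
print(bool(core.search(seq)), bool(zinc_finger.search(seq)))  # True True
```

Pattern scans of this kind are how candidate WRKY domains are flagged in genome-wide surveys, before alignment-based confirmation.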
The structure of the WRKY protein domain was first determined in 2005 using nuclear magnetic resonance (NMR) and later by crystallography. The WRKY protein domain has a globular shape composed of five anti-parallel β-strands. The core WRKYGQK motif is found on the second β-strand. Eighteen amino acids are highly conserved in the WRKY protein domain, including the core motif, zinc-finger binding cysteines and histidines, and a triad forming a DWK salt bridge. The triad consists of a conserved tryptophan (W) of the core motif, along with an aspartic acid (D) four amino acids upstream and a lysine (K) 29 amino acids downstream of it, stabilizing the entire domain. Five amino acids on the third β-strand (PRSYY) are also well conserved in the WRKY domain. Importantly, the WRKY genes contain a conserved intron in the WRKY domain, which occurs at the location encoding for the PR of the PRSYY amino acid sequence, thus explaining the conservation of this motif.
WRKY-DNA Interaction
The WRKY domain forms a unique wedge-shaped structure that enters perpendicularly into the major groove of the DNA strand. WRKY protein domains interact with the (T/A)TGAC(T/A) cis-element, also called the W-box. Recent evidence suggests that the GAC core of the W-box is the primary target of the WRKY domain and that flanking sequences help dictate DNA interaction with specific WRKY proteins. The RKYGQK residues of the core motif and additional arginine and lysine residues of the WRKY domain are responsible for interaction with the phosphate backbone of seven consecutive DNA base pairs, including the GAC core. Changing the tryptophan, tyrosine, or either lysine of the WRKYGQK motif to alanine completely abolishes DNA-binding, indicating these amino acids are essential for recognizing the W-box element. While not essential, altering the arginine, glycine or glutamine of the WRKYGQK motif to alanine reduces DNA-binding to the W-box. Overall, these complex WRKY protein domain–DNA interactions result in the gene activation necessary for numerous aspects of plant development and defense.
External links
WRKY family at PlantTFDB: Plant Transcription Factor Database
WRKY Transcription Factor Family at The Arabidopsis Information Resource
The Rushton Lab
The Somssich Lab
The Shen Lab
Somssich’s list of WRKY-related publications
Eulgem Lab
References
2005 in science
Plant proteins
Protein domains | WRKY protein domain | Biology | 843 |
53,842,340 | https://en.wikipedia.org/wiki/Elisa%20Molinari | Elisa Molinari is an Italian physicist from the University of Modena and CNR, Italy. She has been primarily interested in computational materials science and nanotechnologies, and she has been particularly active in the theory of fundamental properties of low-dimensional structures, in the simulation of nanodevices, in the development of related computational methods. She also has a continuing interest in scientific imaging and communication.
Since 2001, she has been a full professor of Condensed Matter Physics at Unimore, University of Modena and Reggio Emilia, Italy, and since 2015, she serves as Director, MaX European Centre of Excellence on 'Materials design at the exascale.'
Honors
Molinari was awarded the status of Fellow of the American Physical Society after she was nominated by the Forum on International Physics in 1999, for "her contribution to the theory of semiconductors and their interfaces, in particular, her fundamental work on electron-electron and electron-phonon interaction in nanostructures; and for her involvement in the training of young theorists from many countries and the organization of international conferences."
Selected publications
Molinari has published more than 270 papers.
D. Varsano et al., "A monolayer transition-metal dichalcogenide as a topological excitonic insulator", Nature Nanotechnology 15, 367 (2020)
P. D'Amico et al., "Intrinsic edge excitons in two-dimensional MoS2", Phys. Rev. B 101, 161410 (2020)
M.O. Atambo et al., "Electronic and optical properties of doped TiO2 by many-body perturbation theory", Phys. Rev. Materials 3, 045401 (2019)
A. Portone et al., "Tailoring optical properties and stimulated emission in nanostructured polythiophene", Scientific Reports 9, 7370 (2019)
J.O. Island et al., "Interaction-Driven Giant Orbital Magnetic Moments in Carbon Nanotubes", Phys. Rev. Lett. 121, 127704 (2018)
D. Varsano et al., "Carbon nanotubes as excitonic insulators", Nature Communications 8, 1461 (2017)
A. De Sio et al., "Tracking the coherent generation of polaron pairs in conjugated polymers", Nature Communications 7, 13742 (2016)
L. Bursi et al., "Quantifying the Plasmonic Character of Optical Excitations in Nanostructures", ACS Photonics 3, 520 (2016)
G. Soavi et al., "Exciton-exciton annihilation and biexciton stimulated emission in graphene nanoribbons", Nature Communications 7, 11010 (2016)
S. Falke et al., "Coherent ultrafast charge transfer in an organic photovoltaic blend", Science 344, 1001 (2014)
R. Denk et al., "Exciton Dominated Optical Response of Ultra-Narrow Graphene Nanoribbons", Nature Communications 5, 4253 (2014)
C. A. Rozzi et al., "Quantum coherence controls the charge separation in a prototypical artificial light-harvesting system", Nature Communications 4, 1602 (2013)
P. Ruffieux et al., "Electronic Structure of Atomically Precise Graphene Nanoribbons", ACS Nano 6, 6930 (2012)
S. Kalliakos et al., "A molecular state of correlated electrons in a quantum dot", Nature Physics 4, 467–471 (2008)
D. Prezzi et al., "Optical properties of graphene nanoribbons: The role of many-body effects", Phys. Rev. B 77, 041404 (2008)
A. Ferretti et al., "Mixing of electronic states in pentacene adsorption on copper", Phys. Rev. Lett. 99, 046802 (2007)
J. Maultzsch et al., "Exciton binding energies in carbon nanotubes from two-photon photoluminescence", Phys. Rev. B 72, 241402 (2005)
A. Ferretti et al., "First-principles theory of correlated transport through nanojunctions", Phys. Rev. Lett. 94, 116802 (2005)
References
Fellows of the American Physical Society
20th-century Italian physicists
20th-century American women scientists
Italian women physicists
Academic staff of the University of Modena and Reggio Emilia | Elisa Molinari | Materials_science,Engineering | 947 |
39,039,164 | https://en.wikipedia.org/wiki/Nerode%20Prize | The EATCS–IPEC Nerode Prize is a theoretical computer science prize awarded for outstanding research in the area of multivariate algorithmics. It is awarded by the European Association for Theoretical Computer Science and the International Symposium on Parameterized and Exact Computation. The prize was offered for the first time in 2013.
Winners
The prize winners so far have been:
2013: Chris Calabro, Russell Impagliazzo, Valentine Kabanets, Ramamohan Paturi, and Francis Zane, for their research formulating the exponential time hypothesis and using it to determine the exact parameterized complexity of several important variants of the Boolean satisfiability problem.
2014: Hans L. Bodlaender, Rodney G. Downey, Michael R. Fellows, Danny Hermelin, Lance Fortnow, and Rahul Santhanam, for their work on kernelization, proving that several problems with fixed-parameter tractable algorithms do not have polynomial-size kernels unless the polynomial hierarchy collapses.
2015: Erik Demaine, Fedor V. Fomin, Mohammad Hajiaghayi, and Dimitrios Thilikos, for their research on bidimensionality, defining a broad framework for the design of fixed-parameter-tractable algorithms for domination and covering problems on graphs.
2016: Andreas Björklund for his paper Determinant Sums for Undirected Hamiltonicity, showing that methods based on algebraic graph theory lead to a significantly improved algorithm for finding Hamiltonian cycles.
2017: Fedor V. Fomin, Fabrizio Grandoni, and Dieter Kratsch, for developing the "measure and conquer" method for the analysis of backtracking algorithms.
2018: Stefan Kratsch and Magnus Wahlström for their work using matroid theory to develop polynomial-size kernels for odd cycle transversal and related problems.
2019: Noga Alon, Raphael Yuster, and Uri Zwick, for inventing the Color-coding technique, a vastly important ingredient in the toolbox of parameterized algorithm design.
2020: Daniel Marx, Jianer Chen, Yang Liu, Songjian Lu, Barry O’Sullivan, Igor Razgon, for inventing the concepts of important separators and cuts which have become elegant and efficient tools used to establish the fixed parameter tractability of graph problems.
2021: C. S. Calude, S. Jain, B. Khoussainov, W. Li, F. Stephan, for their quasipolynomial time algorithm for deciding parity games.
2022: Bruno Courcelle for Courcelle's theorem on the fixed-parameter tractability of graph properties in monadic second-order logic.
2023: Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michal Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk for their paper Solving Connectivity Problems Parameterized by Treewidth in Single Exponential Time.
2024: Hans L. Bodlaender, Fedor V. Fomin, Daniel Lokshtanov, Eelko Penninkx, Saket Saurabh, and Dimitrios M. Thilikos for their paper (Meta) Kernelization.
See also
List of computer science awards
References
Theoretical computer science
Computer science awards
Acellular dermis
https://en.wikipedia.org/wiki/Acellular%20dermis

Acellular dermis is a type of biomaterial derived from processing human or animal tissues to remove cells and retain portions of the extracellular matrix (ECM). These materials are typically cell-free, distinguishing them from classical allografts and xenografts, can be integrated or incorporated into the body, and have been FDA approved for human use for more than 10 years in a wide range of clinical indications.
Harvesting and processing
All ECM samples originate from mammalian tissues, such as dermis, pericardium, and small intestinal submucosa (SIS). After explantation from the source, the ECM biomaterial retains some characteristics of the original tissue. The ECM tissues can be harvested at varying developmental stages from mammalian species such as human, porcine, equine, and bovine. Although all are similarly composed of fibrillar collagen, their microstructure, specific composition (including the presence of non-collagenous proteins and glycosaminoglycans and the ratio of different collagen types), physical dimensions and mechanical properties can differ. The microstructure can also vary within a single organism, depending on the developmental stage of the tissue at which harvesting occurred. Additionally, the physical dimensions of the tissue of origin must be weighed against the size and shape required of the final tissue.
Despite this “memory” of the ECM tissue, methods have been engineered so that these innate characteristics can be modified, saved or removed. The modification process varies depending on the material used in the clinical setting. Some ECM biomaterials undergo a modification, called decellularization, that removes all the cells but leaves the remainder of the ECM components intact. Another process that can be introduced into the biomaterial is artificial crosslinking. Artificial crosslinking has been shown to stabilize reconstituted collagen, which can rapidly degenerate in vivo. Although mechanical strength is gained, the artificial crosslinks that are added increase the chance of host-cell rejection, due to their foreign origin. Due to this complication, intentional crosslinking is no longer practiced, as more recent advancements have increased the lifespan of the collagen without the use of artificial stabilization. Finally, to ensure the ECM biomaterial is free of infectious bacteria and viruses, most are terminally sterilized. This can use ethylene oxide (EO) gas, gamma irradiation, or electron beam (e-beam) irradiation as the sterilizing agent.
Decellularized ECM biomaterials can be further processed into a fine powder and then lyophilized (freeze-dried). This powder can then be mixed with collagenase to form an ECM derived hydrogel (self-healing hydrogels). These hydrogels are then used in cell culture to help maintain cell phenotype and increase cell proliferation. Cells cultured on ECM hydrogels maintain their phenotype better than cells cultured on other substrates such as matrigel or type 1 collagen. Though hydrogels do not yet have direct clinical relevance, they have shown promise as a method of assisting in organ regeneration.
Similarly, whole organs can be decellularized to create 3-D ECM scaffolds. These scaffolds can then be re-cellularized in an attempt to regenerate whole organs for transplant. This method works primarily for organs with a complex vasculature, as it allows detergent to be fully perfused through the material.
Host/implant interactions
Wound healing of the skin and tendons is a complex coordinated process in the body that happens slowly over weeks or even years. A number of products in the market today aim to affect this process positively, although little data is available on their success. The majority of products are still in the development phases where the (often inflammatory) interactions between the host and the implanted devices are being assessed.
Implanted ECM biomaterials fall into two general categories based on how they interact with the host. Incorporating devices eventually allow the growth of cells and passage of blood vessels through the matrix, whereas nonincorporating biomaterials are encapsulated by a wall of fused macrophages. In nonincorporating biomaterials such as Permacol, an acellular porcine dermal implant for hernia repair, it is important that the material is not degraded or infiltrated by the immune system. Encapsulated biomaterials that are recognized as foreign can be degraded and/or rejected by the body and migrate to the outside of the body.
In incorporated ECM biomaterials, infiltration by the immune system can occur in as few as seven days, leading to rapid degradation of the device volume. In the case of Graftjacket, an allograft from human dermis, the matrix is quickly populated by host cells as well as vasculature. The device itself decreases more than 60% in volume and is replaced with host fibroblasts and macrophages.
Applications
ECM biomaterials are used to promote healing in a number of tissues, especially the skin and tendons. Surgimend, a collagen matrix derived from fetal bovine dermis, can trigger the healing of tendons (which do not heal spontaneously) in the ankle. This intervention can shorten healing time by almost half and allows the patient to return to full activity much sooner. Open wounds, like tendons, do not heal spontaneously and can persist for long stretches of time. When ECM biomaterials are added in multiple layers to such a wound (for example, an ulcer), the wound begins to close quickly and generate host tissue. Although preliminary studies seem promising, little information is available on the success of, and direct comparisons between, different ECM biomaterial devices in human trials.
Alloderm, an acellular dermis derived from the skin of donated cadavers, is used in reconstructive and dental surgeries. In gingival grafts, the acellular dermis is an alternative to tissue cut from the palate of the patient's mouth. It has also been used for abdominal hernia repair, and to rebuild resected turbinates in the treatment of empty nose syndrome. Alloderm and other acellular dermal matrices are used routinely in implant based breast reconstruction after mastectomy for improved soft tissue coverage and thus decrease the risk of visible rippling, capsular contraction, implant malposition, bottoming out and implant exposure.
The FDA has not approved any acellular dermal matrix products for use in implant-based breast reconstruction following surgery to remove a breast tumour, as the published literature suggests that some products may have high risk profiles.
Examples
Human dermis
Small intestinal submucosa
Bovine dermis
Porcine dermis
Human Demineralized Bone Matrix
Equine pericardium
Bovine pericardium
Chitin shell
Dura mater
Alloderm
References
Biomedical engineering
International Latitude Service
https://en.wikipedia.org/wiki/International%20Latitude%20Service
The International Latitude Service was created by the International Geodetic Association in 1899 to study variations in latitude caused by polar motion, precession, or "wobble" of the Earth's axis.
In 1891, at the meeting of the Permanent Commission of the International Geodetic Association in Florence, Wilhelm Foerster referred to the discovery by Seth Carlo Chandler of the polar motion predicted by Leonhard Euler in 1765, and to its impact on the determination of latitudes. He proposed that the International Geodetic Association implement a systematic study of this important phenomenon. In 1895, the International Geodetic Association decided to create the International Latitude Service. Its central office was based in Potsdam and headed by Friedrich Robert Helmert. Regular observations began in 1899. After 1916, the operations of the International Latitude Service continued under the aegis of the Reduced Geodetic Association among Neutral States, presided over by Raoul Gautier, director of the Geneva Observatory.
The original International Latitude Observatories were a system of six observatories located near the parallel of 39° 08' north latitude. The alignment of all six stations along the parallel helped the observatories to perform uniform data analysis. The original six observatories were located in:
Gaithersburg, Maryland, United States
Cincinnati, Ohio, United States
Ukiah, California, United States
Mizusawa, Iwate, Japan
Charjui, Turkmenistan
Carloforte, Italy
Twelve groups of stars were studied in the program, each group containing six pairs of stars. Each night, each station observed two of the star groups along a preset schedule and later compared the data against the measurements taken by the sister stations.
Economic difficulties and war caused the closings of some of the original stations, though a newer station was created in Uzbekistan after World War I. The data collected by the observatories over the years remains useful to scientists, and has been applied to studies of polar motion, the physical properties of the Earth, climatology, and satellite tracking and navigation.
The final six observatories were located, in order of Longitude (E to W), in:
Gaithersburg, Maryland, USA, Gaithersburg Latitude Observatory:
Cincinnati, Ohio, USA:
Ukiah, California, USA:
Mizusawa, Japan, National Institutes of Natural Sciences National Astronomical Observatory of Japan, Mizusawa VERA Observatory:
Kitab, in Uzbekistan:
Carloforte, Italy:
The ILS was renamed International Polar Motion Service (IPMS) in 1962.
It was replaced when the International Earth Rotation Service (IERS) was established in 1987.
See also
List of astronomical observatories
References
Further reading
External links
National Park Service - The history of the Gaithersburg Observatory in Maryland, and the overall project.
Astronomical observatories
Astronomical observatories in Italy
Astronomical observatories in Japan
Astronomical observatories in California
Astronomical observatories in Uzbekistan
Buildings and structures in Cincinnati
Buildings and structures in Mendocino County, California
Astronomical observatories in Ohio
Astronomical observatories in Maryland
Buildings and structures in Gaithersburg, Maryland
International geodesy organizations
Ku (protein)
https://en.wikipedia.org/wiki/Ku%20%28protein%29

Ku is a dimeric protein complex that binds to DNA double-strand break ends and is required for the non-homologous end joining (NHEJ) pathway of DNA repair. Ku is evolutionarily conserved from bacteria to humans. The ancestral bacterial Ku is a homodimer (two copies of the same protein bound to each other). Eukaryotic Ku is a heterodimer of two polypeptides, Ku70 (XRCC6) and Ku80 (XRCC5), so named because the molecular weight of the human Ku proteins is around 70 kDa and 80 kDa. The two Ku subunits form a basket-shaped structure that threads onto the DNA end. Once bound, Ku can slide down the DNA strand, allowing more Ku molecules to thread onto the end. In higher eukaryotes, Ku forms a complex with the DNA-dependent protein kinase catalytic subunit (DNA-PKcs) to form the full DNA-dependent protein kinase, DNA-PK. Ku is thought to function as a molecular scaffold to which other proteins involved in NHEJ can bind, orienting the double-strand break for ligation.
The Ku70 and Ku80 proteins consist of three structural domains. The N-terminal domain is an alpha/beta domain. This domain only makes a small contribution to the dimer interface. The domain comprises a six-stranded beta sheet of the Rossmann fold. The central domain of Ku70 and Ku80 is a DNA-binding beta-barrel domain. Ku makes only a few contacts with the sugar-phosphate backbone, and none with the DNA bases, but it fits sterically to major and minor groove contours forming a ring that encircles duplex DNA, cradling two full turns of the DNA molecule. By forming a bridge between the broken DNA ends, Ku acts to structurally support and align the DNA ends, to protect them from degradation, and to prevent promiscuous binding to unbroken DNA. Ku effectively aligns the DNA, while still allowing access of polymerases, nucleases and ligases to the broken DNA ends to promote end joining. The C-terminal arm is an alpha helical region which embraces the central beta-barrel domain of the opposite subunit. In some cases a fourth domain is present at the C-terminus, which binds to DNA-dependent protein kinase catalytic subunit.
Both subunits of Ku have been experimentally knocked out in mice. These mice exhibit chromosomal instability, indicating that NHEJ is important for genome maintenance.
In many organisms, Ku has additional functions at telomeres in addition to its role in DNA repair.
Abundance of Ku80 seems to be related to species longevity.
Aging
Mutant mice defective in Ku70, or Ku80, or double mutant mice deficient in both Ku70 and Ku80 exhibit early aging. The mean lifespans of the three mutant mouse strains were similar to each other, at about 37 weeks, compared to 108 weeks for the wild-type control. Six specific signs of aging were examined, and the three mutant mice were found to display the same aging signs as the control mice, but at a much earlier age. Cancer incidence was not increased in the mutant mice. These results suggest that Ku function is important for longevity assurance and that the NHEJ pathway of DNA repair (mediated by Ku) has a key role in repairing DNA double-strand breaks that would otherwise cause early aging. (Also see DNA damage theory of aging.)
Plants
Ku70 and Ku80 have also been experimentally characterized in plants, where they appear to play a similar role to that in other eukaryotes. In rice, suppression of either protein has been shown to promote homologous recombination (HR). This effect was exploited to improve gene targeting (GT) efficiency in Arabidopsis thaliana. In the study, the frequency of HR-based GT using a zinc-finger nuclease (ZFN) was increased up to sixteen times in ku70 mutants. This result has promising implications for genome editing across eukaryotes, as DSB repair mechanisms are highly conserved. A substantial difference is that in plants, Ku is also involved in maintaining an alternate telomere morphology characterized by blunt ends or short (≤ 3-nt) 3’ overhangs. This function is independent of the role of Ku in DSB repair, as removing the ability of the Ku complex to translocate along DNA has been shown to preserve blunt-ended telomeres while impeding DNA repair.
Bacteria and archaea
Bacteria usually have only one Ku gene (if they have one at all). Unusually, Mesorhizobium loti has two, mlr9624 and mlr9623.
Archaea usually also only have one Ku gene (for the ~4% of species that have one at all). The evolutionary history is blurred by extensive horizontal gene transfer with bacteria.
Bacterial and archaeal Ku proteins are unlike their eukaryotic counterparts in that they only have the central beta-barrel domain.
Name
The name 'Ku' is derived from the surname of the Japanese patient in which it was discovered.
References
External links
Protein families
DNA repair
Protein domains
DNA-binding proteins
Master equation
https://en.wikipedia.org/wiki/Master%20equation

In physics, chemistry, and related fields, master equations are used to describe the time evolution of a system that can be modeled as being in a probabilistic combination of states at any given time, and the switching between states is determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states.
The name was proposed in 1940:
Introduction
A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable t. The most familiar form of a master equation is a matrix form:

$\frac{d\vec{P}}{dt} = \mathbf{A}\vec{P},$

where $\vec{P}$ is a column vector, and $\mathbf{A}$ is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either
a d-dimensional system (where d is 1,2,3,...), where any state is connected with exactly its 2d nearest neighbors, or
a network, where every pair of states may have a connection (depending on the network's properties).
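As a minimal numerical sketch of the matrix form above (the two-state rates below are arbitrary, chosen only for illustration), the probability vector can be propagated by direct forward integration:

```python
import numpy as np

# Hypothetical two-state system: state 0 -> 1 at rate 2.0, state 1 -> 0 at rate 1.0.
# Off-diagonal A[k, l] is the rate from state l to state k; each diagonal entry
# makes its column sum to zero, so total probability is conserved.
A = np.array([[-2.0,  1.0],
              [ 2.0, -1.0]])

P = np.array([1.0, 0.0])    # start with all probability in state 0
dt, steps = 1e-3, 10_000    # integrate dP/dt = A @ P up to t = 10 (forward Euler)
for _ in range(steps):
    P = P + dt * (A @ P)

print(P)  # approaches the stationary distribution [1/3, 2/3], for which A @ P = 0
```

Because the columns of A sum to zero, the Euler update conserves the total probability exactly, and the long-time limit is the stationary distribution annihilated by A.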
When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state i is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. the matrix $\mathbf{A}$ depends on the time, $\mathbf{A} \to \mathbf{A}(t)$), the process is not stationary and the master equation reads

$\frac{d\vec{P}}{dt} = \mathbf{A}(t)\vec{P}.$
When the connections represent multi-exponential jumping time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation:

$\frac{d\vec{P}}{dt} = \int_0^t \mathbf{A}(t-\tau)\,\vec{P}(\tau)\,d\tau.$
The matrix $\mathbf{A}$ can also represent birth and death, meaning that probability is injected (birth) or taken from (death) the system, and then the process is not in equilibrium.
Detailed description of the matrix and properties of the system
Let $\mathbf{A}$ be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but is appropriate for conventional matrix multiplication.
For each state k, the increase in occupation probability depends on the contribution from all other states to k, and is given by:

$\frac{dP_k}{dt} = \sum_\ell A_{k\ell} P_\ell,$

where $P_\ell$ is the probability for the system to be in the state $\ell$, while the matrix $\mathbf{A}$ is filled with a grid of transition-rate constants. Similarly, $P_k$ contributes to the occupation of all other states $P_\ell$.
In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman–Kolmogorov equation.
The master equation can be simplified so that the terms with ℓ = k do not appear in the summation. This allows calculations even if the main diagonal of $\mathbf{A}$ is not defined or has been assigned an arbitrary value:

$\frac{dP_k}{dt} = \sum_\ell A_{k\ell}P_\ell = \sum_{\ell\neq k} A_{k\ell}P_\ell + A_{kk}P_k = \sum_{\ell\neq k}\left(A_{k\ell}P_\ell - A_{\ell k}P_k\right).$

The final equality arises from the fact that

$\sum_k \frac{dP_k}{dt} = \sum_{k,\ell} A_{k\ell}P_\ell = 0$

because the summation over the probabilities yields one, a constant function. Since this has to hold for any probability (and in particular for any probability of the form $P_\ell = \delta_{\ell j}$ for some j) we get

$\sum_k A_{kj} = 0 \quad \text{for all } j.$

Using this we can write the diagonal elements as

$A_{kk} = -\sum_{\ell\neq k} A_{\ell k}.$
The master equation exhibits detailed balance if each of the terms of the summation disappears separately at equilibrium—i.e. if, for all states k and ℓ having equilibrium probabilities $\pi_k$ and $\pi_\ell$,

$A_{k\ell}\,\pi_\ell = A_{\ell k}\,\pi_k.$
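As a minimal numerical check of this condition (the three-state rates and equilibrium probabilities below are invented for the example), a rate matrix can be constructed pairwise from a chosen equilibrium distribution:

```python
import numpy as np

# Hypothetical equilibrium distribution over three states.
pi = np.array([0.5, 0.3, 0.2])

# Rates chosen so that each pair satisfies A[k, l] * pi[l] == A[l, k] * pi[k]
# (states 0 and 2 are not directly connected).
A = np.zeros((3, 3))
A[1, 0] = 1.0                        # 0 -> 1
A[0, 1] = A[1, 0] * pi[0] / pi[1]    # 1 -> 0, fixed by detailed balance
A[2, 1] = 2.0                        # 1 -> 2
A[1, 2] = A[2, 1] * pi[1] / pi[2]    # 2 -> 1, fixed by detailed balance
np.fill_diagonal(A, -A.sum(axis=0))  # make columns sum to zero

# pi is stationary, and each term of the summation vanishes separately:
assert np.allclose(A @ pi, 0.0)
for k in range(3):
    for l in range(3):
        assert np.isclose(A[k, l] * pi[l], A[l, k] * pi[k])
```

Note that detailed balance is stronger than stationarity: here every pairwise flow cancels on its own, not just the net flow into each state.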
These symmetry relations were proved on the basis of the time reversibility of microscopic dynamics (microscopic reversibility) as Onsager reciprocal relations.
Examples of master equations
Many physical problems in classical and quantum mechanics, as well as problems in other sciences, can be reduced to the form of a master equation, thereby achieving a great simplification of the problem (see mathematical model).
The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a master equation, it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix).
Another special case of the master equation is the Fokker–Planck equation which describes the time evolution of a continuous probability distribution. Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as the system size expansion.
Stochastic chemical kinetics provide yet another example of the use of the master equation. A master equation may be used to model a set of chemical reactions when the number of molecules of one or more species is small (of the order of 100 or 1000 molecules). The chemical master equation can also be solved for very large models, such as the DNA damage signal from the fungal pathogen Candida albicans.
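A minimal sketch of this stochastic picture (the rate constants are hypothetical) is a birth–death process — production of a molecule at a constant rate k and degradation at rate γ per molecule — whose trajectories can be sampled exactly with the Gillespie algorithm; the corresponding chemical master equation has a Poisson stationary distribution with mean k/γ:

```python
import numpy as np

rng = np.random.default_rng(0)
k, gamma = 10.0, 1.0     # hypothetical production and degradation rates
n, t, t_end = 0, 0.0, 2000.0
time_weighted_sum, elapsed = 0.0, 0.0

while t < t_end:
    birth, death = k, gamma * n        # propensities of the two reactions
    total = birth + death
    dt = rng.exponential(1.0 / total)  # waiting time to the next reaction
    if t > 100.0:                      # accumulate after a burn-in period
        time_weighted_sum += n * dt
        elapsed += dt
    n += 1 if rng.random() < birth / total else -1
    t += dt

mean_n = time_weighted_sum / elapsed
print(mean_n)  # close to the stationary mean k / gamma = 10
```

The time-averaged copy number converges to the stationary mean of the chemical master equation as the simulated interval grows.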
Quantum master equations
A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical. Off-diagonal elements represent quantum coherence which is a physical characteristic that is intrinsically quantum mechanical.
The Redfield equation and Lindblad equation are examples of approximate quantum master equations assumed to be Markovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and the VPQME (variational polaron transformed quantum master equation).
Theorem about eigenvalues of the matrix and time evolution
Because $\mathbf{A}$ fulfills

$\sum_k A_{k\ell} = 0$

and

$A_{k\ell} \geq 0 \quad \text{for } k \neq \ell,$

one can show that:
There is at least one eigenvector with a vanishing eigenvalue, exactly one if the graph of $\mathbf{A}$ is strongly connected.
All other eigenvalues $\lambda$ fulfill $\operatorname{Re}(\lambda) < 0$.
All eigenvectors $v$ with a non-zero eigenvalue fulfill $\sum_k v_k = 0$.
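These spectral properties can be checked numerically for a randomly generated rate matrix (the construction below is illustrative; dense positive off-diagonal rates make the graph strongly connected):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.uniform(0.1, 1.0, size=(n, n))  # positive off-diagonal rates
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, -A.sum(axis=0))     # columns sum to zero

evals, evecs = np.linalg.eig(A)

# No eigenvalue has a positive real part.
assert evals.real.max() < 1e-8

# Exactly one vanishing eigenvalue (the graph is strongly connected).
near_zero = np.abs(evals) < 1e-8
assert near_zero.sum() == 1

# Eigenvectors of non-zero eigenvalues have components summing to zero.
for i in range(n):
    if not near_zero[i]:
        assert abs(evecs[:, i].sum()) < 1e-8
```

The last check follows because the row vector of ones is a left eigenvector of $\mathbf{A}$ with eigenvalue zero, so it is orthogonal to every right eigenvector belonging to a non-zero eigenvalue.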
This has important consequences for the time evolution of a state.
See also
Kolmogorov equations (Markov jump process)
Continuous-time Markov process
Quantum master equation
Fermi's golden rule
Detailed balance
Boltzmann's H-theorem
References
External links
Timothy Jones, A Quantum Optics Derivation (2006)
Statistical mechanics
Stochastic calculus
Equations
Equations of physics
Biological agent
https://en.wikipedia.org/wiki/Biological%20agent

Biological agents, also known as biological weapons or bioweapons, are pathogens used as weapons. In addition to these living or replicating pathogens, toxins and biotoxins are also included among the bio-agents. More than 1,200 different kinds of potentially weaponizable bio-agents have been described and studied to date.
Some biological agents have the ability to adversely affect human health in a variety of ways, ranging from relatively mild allergic reactions to serious medical conditions, including serious injury, as well as serious or permanent disability or death. Many of these organisms are ubiquitous in the natural environment where they are found in water, soil, plants, or animals. Bio-agents may be amenable to "weaponization" to render them easier to deploy or disseminate. Genetic modification may enhance their incapacitating or lethal properties, or render them impervious to conventional treatments or preventives. Since many bio-agents reproduce rapidly and require minimal resources for propagation, they are also a potential danger in a wide variety of occupational settings.
The 1972 Biological Weapons Convention is an international treaty banning the development, use or stockpiling of biological weapons; as of March 2021, there were 183 states parties to the treaty. Bio-agents are, however, widely studied for both defensive and medical research purposes under various biosafety levels and within biocontainment facilities throughout the world.
Classifications
Operational
The former United States biological weapons program (1943–1969) categorized its weaponized anti-personnel bio-agents as either "lethal agents" (Bacillus anthracis, Francisella tularensis, Botulinum toxin) or "incapacitating agents" (Brucella suis, Coxiella burnetii, Venezuelan equine encephalitis virus, staphylococcal enterotoxin B).
Legal
Since 1997, United States law has declared a list of bio-agents designated by the U.S. Department of Health and Human Services or the U.S. Department of Agriculture that have the "potential to pose a severe threat to public health and safety" to be officially defined as "select agents" and possession or transportation of them are tightly controlled as such. Select agents are divided into "HHS select agents and toxins", "USDA select agents and toxins" and "Overlap select agents and toxins".
Regulatory
The US Centers for Disease Control and Prevention (CDC) breaks biological agents into three categories: Category A, Category B, and Category C. Category A agents pose the greatest threat to the US. Criteria for being a Category "A" agent include high rates of morbidity and mortality, ease of dissemination and communicability, ability to cause a public panic, and special action required by public health officials to respond. Category A agents include anthrax, botulism, plague, smallpox, and viral hemorrhagic fevers.
List of bio-agents of military importance
The following pathogens and toxins were weaponized by one nation or another at some time. NATO abbreviations are included where applicable.
Bacterial bio-agents
Chlamydial bio-agents
Rickettsial bio-agents
Viral bio-agents
Mycotic bio-agents
Biological toxins
Biological vectors
Simulants
Simulants are organisms or substances which mimic physical or biological properties of real biological agents, without being pathogenic. They are used to study the efficiency of various dissemination techniques or the risks caused by the use of biological agents in bioterrorism. To simulate dispersal, attachment or the penetration depth in human or animal lungs, simulants must have particle sizes, specific weight and surface properties, similar to the simulated biological agent.
The typical size of simulants (1–5 μm) enables them to enter buildings with closed windows and doors and penetrate deep into the lungs. This bears a significant health risk, even if the biological agent is normally not pathogenic.
Bacillus globigii (historically named Bacillus subtilis in the context of bio-agent simulants) (BG, BS, or U)
Serratia marcescens (SM or P)
Aspergillus fumigatus mutant C-2 (AF)
Escherichia coli (EC)
Bacillus thuringiensis (BT)
Erwinia herbicola (current accepted name: Pantoea agglomerans) (EH)
Fluorescent particles such as zinc cadmium sulfide, ZnCdS (FP)
International law
While the history of biological weapons use goes back more than six centuries to the siege of Caffa in 1346, international restrictions on biological weapons began only with the 1925 Geneva Protocol, which prohibits the use but not the possession or development of chemical and biological weapons in international armed conflicts. Upon ratification of the Geneva Protocol, several countries made reservations regarding its applicability and use in retaliation. Due to these reservations, it was in practice a "no-first-use" agreement only.
The 1972 Biological Weapons Convention supplements the Geneva Protocol by prohibiting the development, production, acquisition, transfer, stockpiling and use of biological weapons. Having entered into force on 26 March 1975, this agreement was the first multilateral disarmament treaty to ban the production of an entire category of weapons of mass destruction. As of March 2021, 183 states have become party to the treaty. The treaty is considered to have established a strong global norm against biological weapons, which is reflected in the treaty's preamble, stating that the use of biological weapons would be "repugnant to the conscience of mankind". However, its effectiveness has been limited due to insufficient institutional support and the absence of any formal verification regime to monitor compliance.
In 1985, the Australia Group was established, a multilateral export control regime of 43 countries aiming to prevent the proliferation of chemical and biological weapons.
In 2004, the United Nations Security Council passed Resolution 1540, which obligates all UN Member States to develop and enforce appropriate legal and regulatory measures against the proliferation of chemical, biological, radiological, and nuclear weapons and their means of delivery, in particular, to prevent the spread of weapons of mass destruction to non-state actors.
In popular culture
See also
Biological hazard
Biological contamination
Laboratory Response Network
Pulsed ultraviolet light
References
External links
Rafał L. Górny, Biological agents, OSHwiki
Biological Agents, OSHA
Select Agents and Toxins, Centers for Disease Control and Prevention
Biological weapons e-learning module in the EU's non-proliferation and disarmament course (taught by Filippa Lentzos)
Biological contamination
Shelf (storage)
https://en.wikipedia.org/wiki/Shelf%20%28storage%29

A shelf (plural: shelves) is a flat, horizontal plane used for items that are displayed or stored in a home, business, store, or elsewhere. It is raised off the floor and often anchored to a wall, supported on its shorter length sides by brackets, or otherwise anchored to cabinetry by brackets, dowels, screws, or nails. It can also be held up by columns or pillars. A shelf is also known as a counter, ledge, mantel, or rack. Tables designed to be placed against a wall, possibly mounted, are known as console tables, and are similar to individual shelves.
A shelf can be attached to a wall or other vertical surface, be suspended from a ceiling, be a part of a free-standing frame unit, or it can be part of a piece of furniture such as a cabinet, bookcase, entertainment center, headboard, and so on. Usually, two to six shelves make up a unit, each shelf being attached perpendicularly to the vertical or diagonal supports and positioned parallel one above the other. Free-standing shelves can be accessible from either one or both longer length sides. A shelf with hidden internal brackets is termed a floating shelf. A shelf or case designed to hold books is a bookshelf.
The length of the shelf is based upon the space limitations of its siting and the amount of weight which it will be expected to hold. The vertical distance between the shelves is based upon the space limitations of the unit's siting and the height of the objects; adjustable shelving systems allow the vertical distance to be altered. The unit can be fixed or be some form of mobile shelving. The most heavy-duty shelving is pallet racking. In a store, the front edge of the shelf under the object(s) held might be used to display the name, product number, pricing, and other information about the object(s).
Materials
Shelves are normally made of strong materials such as wood, bamboo or steel, though shelves to hold lighter-weight objects can be made of glass or plastic. Do-it-yourself (DIY) shelves can be made from items such as an old door, colored pencils or books. Shelves can also be 3D printed, allowing DIY projects to include intricate detail.
Pipe shelving
Pipe shelving can be used in a home, business, store or restaurant. It consists mainly of wood boards resting on black or galvanized steel gas pipe. Copper pipe can be used but is not recommended for heavy-duty shelves. Pipe shelving can also be modified for use as retail clothing displays and wall shelves. The supports rest on the floor on floor flanges (which need not be attached) and attach to the wall with flanges directed backwards. Many different designs exist; some companies make these shelves for commercial and residential applications, and others offer them as DIY projects.
Pipe shelving is mainly attached to a wall but some companies have designed free standing units. Pipe shelving has even been used in reclamation projects such as shipping container architecture and was used by Marriott Hotels & Resorts in a bar project.
Proportions for hanging on a wall
When hanging shelves on a wall, home designers generally ensure that the shelf is no wider than 1.4 x the bracket's width and no wider than 1.2 x the bracket's height. Brackets for a long shelf should be spaced no more than 4 x the shelf breadth apart; this holds true for normal materials used at home.
Length and size of screws holding the shelf to the wall differ depending on the material of the wall. A good rule of thumb for concrete walls is that the screw should go into the wall at least as far as one-tenth the width of the shelf. But there are shelf systems where a brace is hung on the wall, onto which brackets are attached without screws.
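As an illustration, the rules of thumb above can be collected into a small calculator. The limits (1.4 x bracket width, 1.2 x bracket height, 4 x shelf breadth between brackets) come from the text; the function names and example dimensions are my own:

```python
def max_shelf_width(bracket_width, bracket_height):
    """Maximum shelf width (depth) by the rule of thumb above:
    no wider than 1.4 x bracket width and 1.2 x bracket height."""
    return min(1.4 * bracket_width, 1.2 * bracket_height)

def max_bracket_spacing(shelf_breadth):
    """Brackets on a long shelf should sit no more than
    4 x shelf breadth apart (normal household materials)."""
    return 4 * shelf_breadth

# e.g. a bracket 20 cm wide and 25 cm tall supports a shelf up to 28 cm deep
print(max_shelf_width(20, 25))   # -> 28.0
```

All quantities are in the same length unit, so the functions work equally well with centimetres or inches.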
Etymology
The word shelf originates in late 14th-century Middle English and derives from Old English; it is similar to a Low German word meaning shelf and an Old Norse word meaning bench.
See also
Wire shelving
References
External links
Cabinets (furniture)
Furniture
Retail store elements

Doherty amplifier

The Doherty amplifier is a modified class B radio frequency amplifier invented by William H. Doherty of Bell Telephone Laboratories Inc in 1936. Whereas conventional class B amplifiers can clip on high input-signal levels, the Doherty power amplifier can accommodate signals with high peak-to-average power ratios by using two amplifier circuits within the one overall amplifier to accommodate the different signal levels. In this way, the amplifier achieves a high level of linearity while retaining good power efficiency.
In Doherty's day, within the Western Electric product line, the eponymous electronic device was operated as a linear amplifier with a driver which was modulated. In the 50,000-watt implementation, the driver was a complete 5,000-watt transmitter which could, if necessary, be operated independently of the Doherty amplifier and the Doherty amplifier was used to raise the 5,000-watt level to the required 50,000-watt level.
The amplifier was usually configured as a grounded-cathode, carrier–peak amplifier using two vacuum tubes in parallel connection, one as a class B carrier tube and the other as a class B peak tube (power transistors in modern implementations). The tubes' source (driver) and load (antenna) were split and combined through +90 and −90 degree phase shifting networks.
Alternate configurations included a grounded-grid carrier tube and a grounded-cathode peak tube whereby the driver power was effectively passed-through the carrier tube and was added to the resulting output power, but this benefit was more appropriate for the earlier and less efficient triode implementations rather than the later and more efficient tetrode implementations.
Successor broadcast developments
As successor to Western Electric Company Inc for radio broadcast transmitters, the Doherty concept was considerably refined by Continental Electronics Manufacturing Company of Dallas, Texas.
Early Continental Electronics designs, by James O. Weldon and others, retained most of the characteristics of Doherty's amplifier but added medium-level screen-grid modulation of the driver (317B, et al.).
The 317B could be cut back to 5,000 watts, as two of the four cabinets of the 317B (50,000-watt rated power) were precisely a 315B (5,000-watt rated power). Because a 316B (10,000-watt rated power) and a 315B differed by only a single 4CX5000 tube, with no other significant differences, a cut-back to 10,000 watts was also possible, and CE sales literature emphasized this possibility, on special order. A cut-back to 25,000 watts was theoretically possible as well, but with reduced overall efficiency (the 50,000-watt amplifier operated at 25,000 watts, with the driver operated at 5,000 watts). At the time of the 317B's introduction, and for many years thereafter, only "discrete" powers were authorized: 50,000, 25,000, 10,000 and 5,000 watts from this transmitter, and 5,000, 2,500, 1,000, 500, 250 and 100 watts from other transmitters.
A further refinement of Doherty's amplifier was the high-level screen-grid modulation scheme invented by Joseph B. Sainton (317C, et al.).
Sainton's 317C series consisted of a class C carrier tube in parallel connection with a class C peak tube. As in Doherty's amplifier, the tubes' source (driver) and load (antenna) were split and combined through +90 and −90 degree phase-shifting networks. The unmodulated radio frequency carrier was applied to the control grids of both tubes with the same control grid bias points. Carrier modulation was applied to the screen grids of both tubes, but the screen grid bias points of the carrier and peak tubes were different, established such that the peak tube was cut off when modulation was absent and the amplifier was producing its rated unmodulated carrier power. During 100% modulation both tubes conducted, each contributing twice the rated carrier power, since four times the rated carrier power is required to achieve 100% modulation. As both tubes were operated in class C, a significant improvement in efficiency was thereby achieved in the final stage.
In addition, as the tetrode carrier and peak tubes required very little drive power, a significant improvement in efficiency within the driver was achieved as well. The commercial version of the Sainton amplifier employed a cathode-follower modulator, not the push-pull modulator disclosed in the patent, and the entire 50,000-watt transmitter was implemented using only nine total tubes of four tube types, all of these being general-purpose tubes, a remarkable achievement, given that the 317C's most significant competitor, RCA's BTA-50G, was implemented using thirty-two total tubes of nine tube types, nearly one-half of these being special-purpose, being employed only in the BTA-50G.
Nearly 300 CE 317C transmitters were installed in North America alone, easily outdistancing all competitors combined, until the introduction of high-power transistorized designs by others, at which point CECo withdrew from this market.
Doherty amplifiers are now widely used in modern digital broadcast transmitters for television and radio. The digital modulation used has a high peak-to-average power ratio (PAPR).
Non-broadcast developments
Modern communication systems have seen a sudden resurrection of Doherty amplifiers in 4G and pre-5G massive Multiple-Input Multiple-Output (mMIMO) base stations. Because modern communication systems use complex signal modulation schemes like OFDM (Orthogonal Frequency Division Multiplexing) with a high peak-to-average power ratio (PAPR), the probability of the amplifier operating at its peak power, where its efficiency is maximum, is very low. The Doherty amplifier's property of exhibiting multiple efficiency peaks at various lower power levels makes it an attractive option for boosting the average efficiency of modern-day transmitters. The Doherty amplifier accomplishes this through a technique called "dynamic load modulation", wherein the load seen by the main amplifier changes as a function of power level in order to boost efficiency at lower power levels.
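The multiple efficiency peaks mentioned above can be illustrated with the textbook drain-efficiency curve of an ideal symmetric two-way Doherty amplifier. This is the standard idealized result for a class B Doherty stage, not a formula derived in this article: efficiency reaches the class B maximum of π/4 (about 78.5%) both at full output and at the 6 dB back-off transition point.

```python
import math

def doherty_efficiency(v):
    """Ideal drain efficiency of a symmetric two-way Doherty amplifier
    as a function of normalized output voltage v (0 < v <= 1).
    Below the transition point (v = 0.5) only the carrier amplifier
    conducts into a doubled load; above it, the peak amplifier turns on
    and modulates the load seen by the carrier amplifier."""
    if not 0 < v <= 1:
        raise ValueError("v must be in (0, 1]")
    if v <= 0.5:
        return (math.pi / 2) * v
    return (math.pi / 2) * v * v / (3 * v - 1)

# Efficiency peaks at pi/4 at both v = 0.5 (6 dB back-off) and v = 1.0,
# with a shallow dip in between; a plain class B stage would only peak at v = 1.
```

A plain class B amplifier's efficiency falls linearly with output voltage, so the second peak at back-off is exactly what makes the Doherty topology attractive for high-PAPR signals.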
Footnotes
References
Further reading
− US Patent
Electronic amplifiers
Valve amplifiers

Riboviria

Riboviria is a realm of viruses that includes all viruses that use a homologous RNA-dependent polymerase for replication. It includes RNA viruses that encode an RNA-dependent RNA polymerase, as well as reverse-transcribing viruses (with either RNA or DNA genomes) that encode an RNA-dependent DNA polymerase. RNA-dependent RNA polymerase (RdRp), also called RNA replicase, produces RNA (ribonucleic acid) from RNA. RNA-dependent DNA polymerase (RdDp), also called reverse transcriptase (RT), produces DNA (deoxyribonucleic acid) from RNA. These enzymes are essential for replicating the viral genome and transcribing viral genes into messenger RNA (mRNA) for translation of viral proteins.
Riboviria was established in 2018 to accommodate all RdRp-encoding RNA viruses and was expanded a year later to also include RdDp-encoding retroviruses. These two groups of viruses are assigned to two separate kingdoms: Orthornavirae for RdRp-encoding RNA viruses, and Pararnavirae for RdDp-encoding viruses, i.e. all reverse-transcribing viruses. While the realm has few prokaryotic viruses, it includes most eukaryotic viruses, including most human, animal, and plant viruses; however, metagenomic studies are changing this perspective.
Many of the most widely known viral diseases are caused by viruses in Riboviria, which includes coronaviruses, ebola virus, HIV, influenza viruses, and the rabies virus. These viruses and others have been prominent throughout history, including Tobacco mosaic virus, which was the first virus to be discovered. Many reverse transcribing viruses notably become integrated into the genome of their host as part of their replication cycle. As a result of that, it is estimated that about 7–8% of the human genome originates from these viruses.
Etymology
Riboviria is a portmanteau of ribo, referencing ribonucleic acid, and the suffix -viria, which is the suffix used for virus realms.
Characteristics
All members of Riboviria contain a gene that encodes for an RNA-dependent polymerase, also called RNA-directed polymerase. There are two types of RNA-dependent polymerases: RNA-dependent RNA polymerase (RdRp), also called RNA replicase, which synthesizes RNA from RNA, and RNA-dependent DNA polymerase (RdDp), also called reverse transcriptase (RT), which synthesizes DNA from RNA. In a typical virus particle, called a virion, the RNA-dependent polymerase is bound to the viral genome in some manner and begins transcription of the viral genome after entering a cell. As part of a virus's life cycle, the RNA-dependent polymerase also synthesizes copies of the viral genome as part of the process of creating new viruses.
Viruses that replicate via RdRp belong to three groups in the Baltimore classification system, all of which are in the kingdom Orthornavirae: single-stranded RNA (ssRNA) viruses, which are either positive (+) or negative (-) sense, and double-stranded RNA viruses (dsRNA). +ssRNA viruses have genomes that can functionally act as mRNA, and a negative sense strand can also be created to form dsRNA from which mRNA is transcribed from the negative strand. The genomes of -ssRNA viruses and dsRNA viruses act as templates from which RdRp creates mRNA.
Viruses that replicate via reverse transcription belong to two Baltimore groups, both of which are in the kingdom Pararnavirae: single-stranded RNA (ssRNA-RT) viruses, all of which belong to the order Ortervirales, and double-stranded DNA (dsDNA-RT) viruses, which belong to the family Caulimoviridae, also in Ortervirales, and the family Hepadnaviridae of the order Blubervirales. ssRNA-RT viruses have their positive-sense genome transcribed by RdDp to synthesize a negative sense complementary DNA (-cDNA) strand. The +RNA strand is degraded and later replaced by RdDp with a +DNA strand to synthesize a linear dsDNA copy of the viral genome. This genome is then integrated into the host cell's DNA.
For dsDNA-RT viruses, a pregenomic +RNA strand is transcribed from the relaxed circular DNA (rcDNA), which is in turn used by RdDp to transcribe a -cDNA strand. The +RNA strand is degraded and replaced in a similar manner as +ssRNA-RT viruses to synthesize the rcDNA. The rcDNA genome is later repaired by the host cell's DNA repair mechanisms to synthesize a covalently closed circular DNA (cccDNA) genome. The integrated genome of +ssRNA-RT viruses and the cccDNA of dsDNA-RT viruses are then transcribed into mRNA by the host cell enzyme RNA polymerase II.
Viral mRNA is translated by the host cell's ribosomes to produce viral proteins. In order to produce more viruses, viral RNA-dependent polymerases use copies of the viral genome as templates to replicate the viral genome. For +ssRNA viruses, an intermediate dsRNA genome is created from which +ssRNA is synthesized from the negative strand. For -ssRNA viruses, genomes are synthesized from complementary positive sense strands. dsRNA viruses replicate their genomes from mRNA by synthesizing a complementary negative sense strand to form genomic dsRNA. For dsDNA-RT viruses, pregenomic RNA created from the cccDNA is retrotranscribed into new dsDNA genomes. For +ssRNA-RT viruses, the genome is replicated from the integrated genome. After replication and translation, the genome and the viral proteins are assembled into complete virions, which then leave the host cell.
Phylogenetics
Both kingdoms in Riboviria show a relation to the reverse transcriptases of group II introns and retrotransposons, self-replicating DNA sequences that copy themselves via reverse transcription and integrate into other parts of the same DNA molecule. Reverse transcribing viruses, assigned to Pararnavirae, appear to have evolved from a retrotransposon on a single occasion. The origin of the RdRps of Orthornavirae is less clear due to a lack of information: they may originate from the reverse transcriptase of a bacterial group II intron before the emergence of eukaryotes, or they may have originated before the last universal common ancestor (LUCA) as descendants of the ancient RNA world, preceding the retroelement reverse transcriptases. A larger study (2022), in which new lineages (phyla) were described, favored the hypothesis that RNA viruses descend from the RNA world, suggesting that retroelements originated from an ancestor related to the phylum Lenarviricota and that members of the newly discovered phylum Taraviricota would be the ancestors of all RNA viruses.
Classification
Riboviria contains two kingdoms: Orthornavirae and Pararnavirae. Orthornavirae contains multiple phyla and unassigned taxa, whereas Pararnavirae is monotypic down to the rank of class. This taxonomy can be visualized hereafter.
Kingdom: Orthornavirae, which contains all RdRp-encoding RNA viruses, i.e. all dsRNA, +ssRNA, and -ssRNA viruses, often collectively called RNA viruses
Phylum: Ambiviricota
Phylum: Duplornaviricota
Phylum: Kitrinoviricota
Phylum: Lenarviricota
Phylum: Negarnaviricota
Phylum: Pisuviricota
Family incertae sedis: Birnaviridae
Family incertae sedis: Permutotetraviridae
Kingdom: Pararnavirae, which contains all RdDp-encoding viruses, i.e. all ssRNA-RT and dsDNA-RT viruses, collectively called reverse transcribing viruses
Phylum: Artverviricota
Additionally, Riboviria contains two incertae sedis families and four incertae sedis genera. Additional information about them is needed to know their exact placement in higher taxa.
Family incertae sedis: Polymycoviridae
Family incertae sedis: Sarthroviridae
Genus incertae sedis: Albetovirus
Genus incertae sedis: Aumaivirus
Genus incertae sedis: Papanivirus
Genus incertae sedis: Virtovirus
Metagenomic studies have suggested the existence of six new phyla not in the ICTV: Arctiviricota, Taraviricota, Pomiviricota, Paraxenoviricota, Wamoviricota and Artimaviricota.
Riboviria partially merges Baltimore classification with virus taxonomy, including the Baltimore groups for RNA viruses and reverse transcribing viruses in the realm. Baltimore classification is a classification system used for viruses based on their manner of mRNA production, often used alongside standard virus taxonomy, which is based on evolutionary history. All members of five Baltimore groups belong to Riboviria: Group III: dsRNA viruses, Group IV: +ssRNA viruses, Group V: -ssRNA viruses, Group VI: ssRNA-RT viruses, and Group VII: dsDNA-RT viruses. Realms are the highest level of taxonomy used for viruses and Riboviria is one of four, the other three being Duplodnaviria, Monodnaviria, and Varidnaviria.
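The correspondence just described can be sketched as a small lookup table. The group labels and kingdom names come from the text above; the snippet's structure and function name are my own:

```python
# Baltimore groups included in Riboviria, mapped to their kingdoms.
# Groups I (dsDNA) and II (ssDNA) fall outside the realm.
BALTIMORE_TO_KINGDOM = {
    "III: dsRNA": "Orthornavirae",
    "IV: +ssRNA": "Orthornavirae",
    "V: -ssRNA": "Orthornavirae",
    "VI: ssRNA-RT": "Pararnavirae",
    "VII: dsDNA-RT": "Pararnavirae",
}

def kingdom_of(group):
    """Return the Riboviria kingdom for a Baltimore group label,
    or None if the group lies outside the realm."""
    return BALTIMORE_TO_KINGDOM.get(group)
```

For example, `kingdom_of("VI: ssRNA-RT")` returns `"Pararnavirae"`, reflecting that all reverse transcribing viruses sit in that kingdom.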
Most identified eukaryotic viruses are RNA viruses, and for that reason most eukaryotic viruses belong to Riboviria, including most human, animal, and plant viruses. Other major branches of eukaryotic viruses include herpesviruses in Duplodnaviria, the kingdom Shotokuvirae in Monodnaviria, and many viruses in Varidnaviria. In contrast, only three groups of prokaryotic RNA viruses have been identified: the class Leviviricetes, the family Cystoviridae, and the phylum Artimaviricota. Metagenomic studies also suggest that the families Picobirnaviridae and Partitiviridae, previously associated with eukaryotes, infect prokaryotes as well, as does the phylum Taraviricota. Studies of metagenomic samples have uncovered new prokaryotic RNA virus taxa, including two new phyla that infect only prokaryotes, suggesting that their diversity is greater than previously thought and challenging the traditional view that RNA viruses mostly infect eukaryotes.
Interactions with hosts
Disease
Viruses in Riboviria are associated with a wide range of diseases, including many of the most widely known viral diseases. Notable disease-causing viruses in the realm include:
coronaviruses
Crimean-Congo hemorrhagic fever orthonairovirus
Dengue virus
Ebolavirus
hantaviruses
the Hepatitis viruses
the human immunodeficiency viruses
Human orthopneumovirus
influenza viruses
Japanese encephalitis virus
Lassa mammarenavirus
Measles morbillivirus
Mumps orthorubulavirus
Norovirus
Poliovirus
Rabies lyssavirus
Rhinoviruses
Rift Valley fever phlebovirus
Rotavirus
Rubella virus
Tick-borne encephalitis virus
West Nile virus
Yellow fever virus
Zika virus
Animal viruses in Riboviria include orbiviruses, which cause various diseases in ruminants and horses, including Bluetongue virus, African horse sickness virus, Equine encephalosis virus, and epizootic hemorrhagic disease virus. The vesicular stomatitis virus causes disease in cattle, horses, and pigs. Bats harbor many viruses including ebolaviruses and henipaviruses, which also can cause disease in humans. Similarly, arthropod viruses in the Flavivirus and Phlebovirus genera are numerous and often transmitted to humans. Coronaviruses and influenza viruses cause disease in various vertebrates, including bats, birds, and pigs. The family Retroviridae contains many viruses that cause leukemia, immunodeficiency, and other cancers and immune system-related diseases in animals.
Plant viruses in the realm are numerous and infect many economically important crops. Tomato spotted wilt virus is estimated to cause more than US$1 billion in damages annually, affecting more than 800 plant species including chrysanthemum, lettuce, peanut, pepper, and tomato. Cucumber mosaic virus infects more than 1,200 plant species and likewise causes significant crop losses. Potato virus Y causes significant reductions in yield and quality for pepper, potato, tobacco, and tomato, and Plum pox virus is the most important virus among stone fruit crops. Brome mosaic virus, while not causing significant economic losses, is found throughout much of the world and primarily infects grasses, including cereals.
Endogenization
Many reverse transcribing viruses, called retroviruses, in Riboviria are able to become integrated into the DNA of their host. These viruses become endogenized as part of their replication cycle. Namely, the viral genome is integrated into the host genome by the retroviral enzyme integrase, and viral mRNA is produced from that DNA. Endogenization is a form of horizontal gene transfer between unrelated organisms, and it is estimated that about 7–8% of the human genome consists of retroviral DNA. Endogenization can also be used to study the evolutionary history of viruses, showing an approximate time period when a virus first became endogenized into the host's genome as well as the rate of evolution for the viruses since endogenization first occurred.
History
Diseases caused by viruses in Riboviria have been known for much of recorded history, though their cause was only discovered in modern times. Tobacco mosaic virus was discovered in 1898 and was the first virus to be discovered. Viruses transmitted by arthropods have been central in the development of vector control, which often aims to prevent viral infections. In modern history, numerous disease outbreaks have been caused by various members of the realm, including coronaviruses, ebola, and influenza. HIV especially has had dramatic effects on society as it causes a sharp decline in life expectancy and significant stigma for infected persons.
For a long time, the relation between many viruses in Riboviria could not be established due to the high amount of genetic divergence among RNA viruses. With the development of viral metagenomics, many additional RNA viruses were identified, helping to fill in the gaps of their relations. This led to the establishment of Riboviria in 2018 to accommodate all RdRp-encoding RNA viruses based on phylogenetic analysis that they were related.
A year later, all reverse transcribing viruses were added to the realm. The kingdoms were also established in 2019, separating the two RNA-dependent polymerase branches. When the realm was founded, it mistakenly included two viroid families, Avsunviroidae and Pospiviroidae, and the genus Deltavirus, which were promptly removed in 2019 because they use host cell enzymes for replication.
See also
List of higher virus taxa
References
Further reading
Virus realms

Hartmann number

The Hartmann number (Ha) is the ratio of electromagnetic force to the viscous force, first introduced by Julius Hartmann (1881–1951) of Denmark. It is frequently encountered in fluid flows through magnetic fields. It is defined by:

Ha = B L (σ/μ)^(1/2)

where
B is the magnetic field intensity
L is the characteristic length scale
σ is the electrical conductivity
μ is the dynamic viscosity
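The definition above maps directly to a small helper function. The numerical values in the example call are illustrative placeholders, not measured material properties:

```python
import math

def hartmann_number(B, L, sigma, mu):
    """Ha = B * L * sqrt(sigma / mu).

    B     -- magnetic field intensity [T]
    L     -- characteristic length scale [m]
    sigma -- electrical conductivity [S/m]
    mu    -- dynamic viscosity [Pa*s]
    """
    return B * L * math.sqrt(sigma / mu)

# Illustrative call with placeholder liquid-metal-like values
ha = hartmann_number(B=0.5, L=0.01, sigma=1.0e7, mu=7.0e-4)
```

A large Ha means electromagnetic (Lorentz) forces dominate viscous forces in the flow; Ha → 0 recovers ordinary hydrodynamics.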
See also
Magnetohydrodynamics
References
Further reading
The Hartmann number is sometimes indicated by the letter M, in analogy with the Mach number in aerodynamics.
Dimensionless numbers of fluid mechanics
Fluid dynamics
Magnetohydrodynamics

Add-on (Mozilla)

For Mozilla software, an add-on is a software component that extends the functionality of the Firefox web browser and related applications although most are browser extensions. Mozilla provides add-ons to users via its official add-on website.
In 2017, Mozilla enacted major changes to the application programming interface (API) for extensions in Firefox, replacing the long-standing XUL and XPCOM APIs with the WebExtensions API that is modeled after Google Chrome's API. Thus add-ons that remain compatible with Firefox are now largely compatible with Chrome as well. As of January 2024, there are more than 36,000 add-ons and over 495,000 themes available for Firefox.
Add-ons categories
Themes
Early versions of Firefox supported themes that could greatly change the appearance of the browser, but this was scaled back over time. Current themes, formerly called Personas and now called Firefox Themes, are limited to changing the background and text color of toolbars.
WebExtensions
Starting with Firefox 57, only the new WebExtensions API is supported for extensions, relegating the older extension technology to legacy status.
Legacy extensions
Prior to 2017, Firefox supported extensions developed via various APIs: XUL, XPCOM, and Jetpack. Mozilla now refers to these as legacy extensions.
Plug-ins
Plug-ins are no longer supported in Firefox. In the past, they were used to handle media types for which the application did not have built-in capability. They were deprecated due to security concerns and improvements in Web APIs. The last one that was officially supported was Adobe Flash Player, which Adobe discontinued in 2020.
Security
Mozilla had no mechanism to restrict the privileges of legacy Firefox extensions. This meant that a legacy extension could read or modify the data used by another extension or any file accessible to the user running Mozilla applications. But the current WebExtensions API imposes security restrictions.
Starting with Firefox 40, Mozilla began to roll out a requirement for extension signing. It is now required in all official Firefox releases.
Website
The Mozilla add-ons website is the official repository for Firefox add-ons. In contrast to mozdev.org which provides free hosting for Mozilla-related projects, the add-ons site is tailored for users. By default, Firefox automatically checks the site for updates to installed add-ons.
In January 2008, Mozilla announced that the site had accumulated a total of 600 million add-on downloads and that over 100 million installed add-ons automatically check the site for updates every day. In July 2012, the total had increased to 3 billion downloads from the site.
References
External links
Official add-on website
WebExtensions API reference documentation
Extension Workshop, Mozilla's site for Firefox extension developer documentation
Mozilla
Free software websites

Lines of Stollhofen

The Lines of Stollhofen was a line of defensive earthworks built for the Reichsarmee at the start of the War of the Spanish Succession (1701–1714), running from Stollhofen on the Rhine to the impenetrable woods on the hills east of Bühl.
The lines were constructed by order of Margrave Louis William I of Baden-Baden in order to protect northern Baden from the newly erected French fortress of Fort Louis on the River Rhine.
Location
The only partly fortified line started in the east near Obertal (today part of Bühlertal), ran westwards over the heights to Bühl and then northwest in the Rhine valley via Vimbuch (today a village in the municipality of Bühl), Leiberstung (today part of Sinzheim) and Stollhofen to the River Rhine. It comprised linear schanzen (entrenchments) in the terrain, as well as individual star schanzen, hornworks, small forts and fortified villages, and used the watercourses on the Rhine Plain, controlled by weirs, to flood the fields of fire and approaches.
At the same time, by including the villages of Bühl and Stollhofen, it enabled control of the old trade routes from Basle to Frankfurt (today the Bundesstraße B3) at Bühl, and from Strasbourg to Frankfurt (old Roman road, today the B 36). Until 1707, the line bounded the operational area of the French troops and barred the easiest route to Bavaria via Pforzheim.
History
Following his Rhine crossing in mid-February 1703, Marshal Villars found the passes through the Black Forest to be still impassible because of snow. Therefore, he initially occupied Kehl Fortress on 12 March as his base east of the Rhine, united with the army of Marshal Tallard, and on 19 April 1703 began an attack on the Bühl-Stollhofen Line. He bombarded the line south of Kappelwindeck and tried to bypass the line to the east with 25 battalions under Blainville. Both attempts, on 19 and 24 April, failed because the French could not capture the fortifications at Obertal. On 25 April, Villars pulled back.
In summer 1703, however, Margrave Louis William could not stop Villars marching up the Kinzig valley and on into Bavaria. There, Villars was victorious in the First Battle of Höchstädt. Likewise in 1704, Tallard passed through the Black Forest unhindered along the Dreisam Valley.
After the death of Margrave Louis William (9 January 1707), Villars captured the Bühl-Stollhofen Line in May without a fight and had it destroyed.
Several months after the loss of the Bühl-Stollhofen Line, work began on the Ettlingen Line under the Rhine Army commander, George Louis of Brunswick-Lüneburg. The line was reinforced during the War of the Polish Succession (1733–1738), was destroyed by the French in 1734 and was rebuilt in 1735.
Today
As a result of the canalization of the Rhine by Tulla in the 19th century and the construction of roads and settlements in the last century, the remains of the line are now visible only in places, in the wooded areas east of Bühl. The 1703 map of the Bühl-Stollhofen Line drawn by Major Elster is held in the Bühl Municipal Museum.
See also
Baroque fortifications in the Black Forest
Eppingen lines
Johan Wijnand van Goor – defended the lines in 1703
Battle of Blenheim (August 1704) – the lines played an important blocking role in the weeks before the battle
Prince Eugene of Savoy – commanded the forces on the line immediately before the Battle of Blenheim
Marshal Villars (May 1707) – attacked the lines with a holding operation and then outflanked them, defeating Christian Ernst, Margrave of Brandenburg
Notes
References
Hauptstaatsarchiv Stuttgart, Bestand L 6, Bü 1696, 1707
Further reading
Eugen von Müller: Die Bühl-Stollhofener Linie im Jahr 1706, in Hrsg.: Badische Historische Kommission: Zeitschrift für die Geschichte des Oberrheins, Band 21 1906, Carl Winter's Universitätsbuchhandlung, Heidelberg, 1906
Hans Zelter: Die Stollhofener Linie, in Fortifikation No. 9, 1995, pp. 20–24
External links
Lines of Stollhofen in:
Bühl und Stollhofen (PDF file; 148 kB)
Forts in Germany
Margraviate of Baden
Black Forest
War of the Spanish Succession
Military history of Germany
Semipermanent fortifications
Fortification lines
History of Baden-Württemberg

Sphingomyelin

Sphingomyelin (SPH) is a type of sphingolipid found in animal cell membranes, especially in the membranous myelin sheath that surrounds some nerve cell axons. It usually consists of phosphocholine and ceramide, or a phosphoethanolamine head group; therefore, sphingomyelins can also be classified as sphingophospholipids. In humans, SPH represents ~85% of all sphingolipids, and typically makes up 10–20 mol % of plasma membrane lipids.
Sphingomyelin was first isolated by German chemist Johann L. W. Thudichum in the 1880s. The structure of sphingomyelin was first reported in 1927 as N-acyl-sphingosine-1-phosphorylcholine. Sphingomyelin content in mammals ranges from 2 to 15% in most tissues, with higher concentrations found in nerve tissue, red blood cells, and the ocular lens. Sphingomyelin has significant structural and functional roles in the cell: it is a plasma membrane component and participates in many signaling pathways. The metabolism of sphingomyelin creates many products that play significant roles in the cell.
Physical characteristics
Composition
Sphingomyelin consists of a phosphocholine head group, a sphingosine, and a fatty acid. It is one of the few membrane phospholipids not synthesized from glycerol. The sphingosine and fatty acid can collectively be categorized as a ceramide. This composition allows sphingomyelin to play significant roles in signaling pathways: the degradation and synthesis of sphingomyelin produce important second messengers for signal transduction.
Sphingomyelin obtained from natural sources, such as eggs or bovine brain, contains fatty acids of various chain lengths. Sphingomyelin with a defined chain length, such as palmitoylsphingomyelin with a saturated 16-carbon acyl chain, is available commercially.
Properties
Ideally, sphingomyelin molecules are cylindrical in shape; however, many sphingomyelin molecules have a significant chain mismatch (the lengths of the two hydrophobic chains differ significantly). The hydrophobic chains of sphingomyelin tend to be much more saturated than those of other phospholipids. The main phase transition temperature of sphingomyelins, near 37 °C, is also higher than that of similar phospholipids. This can introduce lateral heterogeneity in the membrane, generating domains in the membrane bilayer.
Sphingomyelin undergoes significant interactions with cholesterol. Cholesterol can eliminate the liquid-to-solid phase transition in phospholipids; because the transition temperature of sphingomyelin lies within the physiological range, cholesterol can play a significant role in the phase behavior of sphingomyelin. Sphingomyelin is also more prone to intermolecular hydrogen bonding than other phospholipids.
Location
Sphingomyelin is synthesized at the endoplasmic reticulum (ER), where it can be found in low amounts, and at the trans Golgi. It is enriched at the plasma membrane with a greater concentration on the outer than the inner leaflet. The Golgi complex represents an intermediate between the ER and plasma membrane, with slightly higher concentrations towards the trans side.
Metabolism
Synthesis
The synthesis of sphingomyelin involves the enzymatic transfer of a phosphocholine from phosphatidylcholine to a ceramide. The first committed step of sphingomyelin synthesis involves the condensation of L-serine and palmitoyl-CoA. This reaction is catalyzed by serine palmitoyltransferase. The product of this reaction is reduced, yielding dihydrosphingosine. The dihydrosphingosine undergoes N-acylation followed by desaturation to yield a ceramide. Each one of these reactions occurs at the cytosolic surface of the endoplasmic reticulum. The ceramide is transported to the Golgi apparatus where it can be converted to sphingomyelin. Sphingomyelin synthase is responsible for the production of sphingomyelin from ceramide. Diacylglycerol is produced as a byproduct when the phosphocholine is transferred.
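The route just described can be written as an ordered chain. The sketch below is purely a restatement of the steps named in the text (the product of the initial condensation is left generic, since the text does not name it):

```python
# The sphingomyelin synthesis route, as ordered (location, step, product)
# tuples taken from the text above.
PATHWAY = [
    ("ER, cytosolic surface",
     "condensation of L-serine + palmitoyl-CoA (serine palmitoyltransferase)",
     "condensation product"),
    ("ER, cytosolic surface", "reduction", "dihydrosphingosine"),
    ("ER, cytosolic surface", "N-acylation, then desaturation", "ceramide"),
    ("Golgi apparatus",
     "phosphocholine transfer from phosphatidylcholine "
     "(sphingomyelin synthase; diacylglycerol is the byproduct)",
     "sphingomyelin"),
]

for where, step, product in PATHWAY:
    print(f"[{where}] {step} -> {product}")
```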
Degradation
Sphingomyelin breakdown is responsible for initiating many universal signaling pathways. It is hydrolyzed by sphingomyelinases (sphingomyelin specific type-C phospholipases). The phosphocholine head group is released into the aqueous environment while the ceramide diffuses through the membrane.
Function
Membranes
The membranous myelin sheath that surrounds and electrically insulates many nerve cell axons is particularly rich in sphingomyelin, suggesting its role as an insulator of nerve fibers. The plasma membrane of other cells is also abundant in sphingomyelin, though it is largely to be found in the exoplasmic leaflet of the cell membrane. There is, however, some evidence that there may also be a sphingomyelin pool in the inner leaflet of the membrane. Moreover, neutral sphingomyelinase-2 – an enzyme that breaks down sphingomyelin into ceramide – has been found to localise exclusively to the inner leaflet, further suggesting that there may be sphingomyelin present there.
Signal transduction
The function of sphingomyelin remained unclear until it was found to have a role in signal transduction. It has been discovered that sphingomyelin plays a significant role in cell signaling pathways. The synthesis of sphingomyelin at the plasma membrane by sphingomyelin synthase 2 produces diacylglycerol, which is a lipid-soluble second messenger that can pass along a signal cascade. In addition, the degradation of sphingomyelin can produce ceramide which is involved in the apoptotic signaling pathway.
Apoptosis
Sphingomyelin has been found to have a role in cell apoptosis through its hydrolysis into ceramide. Studies in the late 1990s found that ceramide was produced in a variety of conditions leading to apoptosis. It was then hypothesized that sphingomyelin hydrolysis and ceramide signaling were essential in deciding whether a cell dies. In the early 2000s, new studies defined a further role for sphingomyelin hydrolysis in apoptosis: determining not only whether a cell dies but how. Further experiments have shown that if sphingomyelin hydrolysis happens at a sufficiently early point in the pathway, the ceramide produced may influence the rate and form of cell death or may release blocks on downstream events.
Lipid rafts
Sphingomyelin, like other sphingolipids, is associated with lipid microdomains in the plasma membrane known as lipid rafts. Lipid rafts are characterized by their lipid molecules being in the liquid-ordered phase, offering more structure and rigidity than the rest of the plasma membrane. In the rafts, the acyl chains have low chain motion but the molecules have high lateral mobility. This order is due in part to the higher transition temperature of sphingolipids as well as the interactions of these lipids with cholesterol. Cholesterol is a relatively small, nonpolar molecule that can fill the space between the sphingolipids that results from their large acyl chains. Lipid rafts are thought to be involved in many cell processes, such as membrane sorting and trafficking, signal transduction, and cell polarization. Excessive sphingomyelin in lipid rafts may lead to insulin resistance.
Because of the specific types of lipids in these microdomains, lipid rafts can accumulate certain types of proteins, thereby concentrating the specialized functions those proteins perform. Lipid rafts have been speculated to be involved in the cascade of cell apoptosis.
Abnormalities and associated diseases
Sphingomyelin can accumulate in a rare hereditary disease called Niemann–Pick disease, types A and B. It is a genetically inherited disease caused by a deficiency in the lysosomal enzyme acid sphingomyelinase, which causes the accumulation of sphingomyelin in the spleen, liver, lungs, bone marrow, and brain, causing irreversible neurological damage. Of the two types involving sphingomyelinase, type A occurs in infants. It is characterized by jaundice, an enlarged liver, and profound brain damage. Children with this type rarely live beyond 18 months. Type B involves an enlarged liver and spleen, which usually occurs in the pre-teen years; the brain is not affected. Most patients present with less than 1% of normal enzyme levels. A hemolytic protein, lysenin, may be a valuable probe for sphingomyelin detection in cells of Niemann–Pick A patients.
As a result of the autoimmune disease multiple sclerosis (MS), the myelin sheath of neuronal cells in the brain and spinal cord is degraded, resulting in loss of signal transduction capability. MS patients exhibit upregulation of certain cytokines in the cerebrospinal fluid, particularly tumor necrosis factor alpha. This activates sphingomyelinase, an enzyme that catalyzes the hydrolysis of sphingomyelin to ceramide; sphingomyelinase activity has been observed in conjunction with cellular apoptosis.
An excess of sphingomyelin in the red blood cell membrane (as in abetalipoproteinemia) causes excess lipid accumulation in the outer leaflet of the red blood cell plasma membrane. This results in abnormally shaped red cells called acanthocytes.
Additional images
References
External links
Phospholipids
Membrane biology | Sphingomyelin | Chemistry | 2,076 |
19,180,301 | https://en.wikipedia.org/wiki/Watches%20of%20the%20Night | The phrase 'watches of the night' has been used since at least the Mishnah. It denotes the night-time; a 'watch' was originally each of the three or four periods, during which a watch or guard was kept, into which the night was divided by the Jews and Romans. The phrase occurs in several places in the Old Testament (Psalm 63:6; 119:148; Lamentations 2:19) and is suggested in the New Testament (Gospel of Mark 13:35). It is also found in the Dhammapada, chapter 12 (Attavaggo).
This has also been used in several works of literature as a cliché for what is also called 'the wee small hours', or 'the early morning', often with connotations of blackness (both of the night and of the spirits) and depression (e.g., Longfellow wrote in The Cross of Snow (1879), "In the long, sleepless watches of the night"). Kipling uses this, along with a pun on the word 'watches': the story turns on two identical timepieces.
"Watches of the Night" is a short story by Rudyard Kipling. It was first published in the Civil and Military Gazette on March 25, 1887; in book form, first in the first Indian edition of Plain Tales from the Hills in 1888; and in the many subsequent editions of that collection. It is one of the "Tales" which deals with the tense, enclosed society of the British in India, and the levels of gossip and malice that could be engendered therein. "Watches of the Night," like many of Kipling's works, has a punning, allusive title.
Both the Colonel, commanding the regiment, and a Subaltern in the Regiment, Platte, a poor man, own Waterbury watches. (These are fob or pocket watches, not wrist watches: each usually hangs from a chain.) The Waterbury (from the town of Waterbury, Connecticut) is a mass-produced and not especially prestigious make. The Colonel, who affects to be "a horsey man" (but is not), wears his watch, not on a chain, but on a leather strap made from the lip-strap of a horse's harness; Platte wears his on a leather guard, presumably because he can afford no better. One night the two men change - in a hurry - at the Club, and, not unnaturally, take each other's watch. They go on their separate ways. Later that night, as Platte returns home, his horse rears and upsets his cart, throwing him to the ground outside Mrs Larkyn's house, where his watch falls loose. The Colonel loses his watch, which slips on to the floor - where a native bearer finds it (and keeps it). Going home in a hired carriage, the Colonel finds the driver drunk, and returns late. His wife, who is religious (and, we have been told, "manufactured the Station scandal"), is disinclined to believe him.
In the morning, Mrs Larkyn, who has been a victim of the Colonel's wife's scandal-mongering, finds the watch that Platte has dropped, and shows it to him. He affects to believe it is "...disgusting! Shocking old man!". They send the Colonel's watch (which is the one Platte had been wearing) to the Colonel's wife. She attacks the Colonel, being wholly convinced of Original Sin - and begins to realize the harm and pain that unfounded suspicion can cause - and has caused her victims.
That is really the moral of the story. "The mistrust and the tragedy of it," says Kipling, "are killing the Colonel's Wife, and are making the Colonel wretched."
All quotations in this article have been taken from the Uniform Edition of Plain Tales from the Hills published by Macmillan & Co., Limited in London in 1899. The text is that of the third edition (1890), and the author of the article has used his own copy of the 1923 reprint. Further comment, including page-by-page notes, can be found on the Kipling Society's website, at http://www.kipling.org.uk/rg_watches1.htm.
References
1887 short stories
Short stories by Rudyard Kipling
Rudyard Kipling stories about India
Works originally published in the Civil and Military Gazette | Watches of the Night | Physics,Astronomy,Mathematics,Technology | 937 |
2,478,755 | https://en.wikipedia.org/wiki/FLiNaK | FLiNaK is the name of the ternary eutectic alkali metal fluoride salt mixture LiF-NaF-KF (46.5-11.5-42 mol %). It has a melting point of 462 °C and a boiling point of 1570 °C. It is used as an electrolyte for the electroplating of refractory metals and compounds such as titanium, tantalum, hafnium, zirconium and their borides. FLiNaK could also see use as a coolant in the very high temperature reactor, a type of nuclear reactor.
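As a quick consistency check on the composition above, the eutectic mol % can be converted to weight %. This sketch uses standard molar masses (assumed values, not given in the text); the resulting roughly 29-12-59 wt % split shows the mixture is KF-rich by mass:

```python
# Standard molar masses in g/mol (assumed, not from the text).
MOLAR_MASS = {"LiF": 25.94, "NaF": 41.99, "KF": 58.10}
# Eutectic composition from the text, as mole fractions.
MOL_FRAC = {"LiF": 0.465, "NaF": 0.115, "KF": 0.420}

mass = {salt: MOL_FRAC[salt] * MOLAR_MASS[salt] for salt in MOLAR_MASS}
total = sum(mass.values())
wt_pct = {salt: 100.0 * m / total for salt, m in mass.items()}

for salt, w in wt_pct.items():
    print(f"{salt}: {w:.1f} wt %")   # LiF ~29.2, NaF ~11.7, KF ~59.1
```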
Coolant
FLiNaK salt was researched heavily during the late 1950s by Oak Ridge National Laboratory as a potential candidate for a coolant in the molten salt reactor because of its low melting point, its high heat capacity, and its chemical stability at high temperatures. Ultimately, its sister salt, FLiBe, was chosen as the solvent salt for the molten salt reactor due to a more desirable nuclear cross section. FLiNaK still gathers interest as an intermediate coolant for a high-temperature molten salt reactor, where it could transfer heat without being in the presence of the fuel.
Chemistry
Fluoride salts, like all salts, cause corrosion in most metals and alloys. FLiNaK differs from FLiBe in that it is a basic melt, i.e. it has an excess of free fluoride ions. When FLiNaK melts, all three components, being alkali fluorides, dissociate into positive and negative ions. The free fluoride ions in the melt are able to corrode metallic structures wherever it is energetically favorable. This is in contrast to FLiBe, which in a 66-34 mol% mixture is a chemically neutral melt, as fluoride ions from LiF are donated to BeF₂ to create the tetrafluoroberyllate ion BeF₄²⁻.
See also
Molten salt reactor
FLiBe
Fluoride volatility
References
Nuclear materials
Electrochemistry
Nuclear chemistry
Nuclear reactor coolants
Fluorides
Metal halides
Lithium compounds
Sodium compounds
Potassium compounds
Alkali metal fluorides | FLiNaK | Physics,Chemistry | 440 |
58,981 | https://en.wikipedia.org/wiki/Tobacco%20smoke | Tobacco smoke is a sooty aerosol produced by the incomplete combustion of tobacco during the smoking of cigarettes and other tobacco products. Temperatures in burning cigarettes range from about 400 °C between puffs to about 900 °C during a puff. During the burning of the cigarette tobacco (itself a complex mixture), thousands of chemical substances are generated by combustion, distillation, pyrolysis and pyrosynthesis. Tobacco smoke is used as a fumigant and inhalant.
Composition
The particles in tobacco smoke are liquid aerosol droplets (about 20% water), with a mass median aerodynamic diameter (MMAD) that is submicrometer (and thus, fairly "lung-respirable" by humans). The droplets are present in high concentrations (some estimates are as high as 10¹⁰ droplets per cm³).
Tobacco smoke may be grouped into a particulate phase (trapped on a glass-fiber pad, and termed "TPM" (total particulate matter)) and a gas/vapor phase (which passes through such a glass-fiber pad). "Tar" is mathematically determined by subtracting the weight of the nicotine and water from the TPM. However, several components of tobacco smoke (e.g., hydrogen cyanide, formaldehyde, phenanthrene, and pyrene) do not fit neatly into this rather arbitrary classification, because they are distributed among the solid, liquid and gaseous phases.
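Since "tar" here is defined purely by subtraction, the bookkeeping is simple arithmetic. A minimal sketch (the pad weights below are hypothetical, chosen only to land near the historical 16 mg figure quoted later in the article):

```python
def tar_yield_mg(tpm_mg, nicotine_mg, water_mg):
    """'Tar' as defined in the text: total particulate matter (TPM)
    minus the weights of nicotine and water trapped on the pad."""
    return tpm_mg - nicotine_mg - water_mg

# Hypothetical per-cigarette pad weights, for illustration only:
print(tar_yield_mg(tpm_mg=22.0, nicotine_mg=1.5, water_mg=4.5))  # -> 16.0 (mg)
```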
Tobacco smoke contains a number of toxicologically significant chemicals and groups of chemicals, including polycyclic aromatic hydrocarbons (benzopyrene), tobacco-specific nitrosamines (NNK, NNN), aldehydes (acrolein, formaldehyde), carbon monoxide, hydrogen cyanide, nitrogen oxides (nitrogen dioxide), benzene, toluene, phenols (phenol, cresol), aromatic amines (nicotine, ABP (4-aminobiphenyl)), and harmala alkaloids. The radioactive element polonium-210 is also known to occur in tobacco smoke. The chemical composition of smoke depends on puff frequency, intensity, volume, and duration at different stages of cigarette consumption.
Between 1933 and the late 1940s, the yields from an average cigarette varied from 33 to 49 mg "tar" and from less than 1 to 3 mg nicotine. In the 1960s and 1970s, the average yield from cigarettes in Western Europe and the USA was around 16 mg tar and 1.5 mg nicotine per cigarette. Current average levels are lower. This has been achieved in a variety of ways including use of selected strains of tobacco plant, changes in agricultural and curing procedures, use of reconstituted sheets (reprocessed tobacco leaf wastes), incorporation of tobacco stalks, reduction of the amount of tobacco needed to fill a cigarette by expanding it (like puffed wheat) to increase its "filling power", and by the use of filters and high-porosity wrapping papers. The development of lower "tar" and nicotine cigarettes has tended to yield products that lacked the taste components to which the smoker had become accustomed. In order to keep such products acceptable to the consumer, the manufacturers reconstitute aroma or flavor.
Tobacco polyphenols (e. g., caffeic acid, chlorogenic acid, scopoletin, rutin) determine the taste and quality of the smoke. Freshly cured tobacco leaf is unfit for use because of its pungent and irritating smoke. After fermentation and aging, the leaf delivers mild and aromatic smoke.
Tumorigenic agents
Safety
Tobacco smoke, besides being an irritant and significant indoor air pollutant, is known to cause lung cancer, heart disease, chronic obstructive pulmonary disease (COPD), emphysema, and other serious diseases in smokers (and in non-smokers as well). The actual mechanisms by which smoking can cause so many diseases remain largely unknown. Many attempts have been made to produce lung cancer in animals exposed to tobacco smoke by the inhalation route, without success. It is only by collecting the "tar" and repeatedly painting this on to mice that tumors are produced, and these tumors are very different from those exhibited by smokers. Tobacco smoke is associated with an increased risk of developing respiratory conditions such as bronchitis, pneumonia, and asthma. Tobacco smoke aerosols generated at temperatures below 400 °C did not test positive in the Ames assay.
In spite of all changes in cigarette design and manufacturing since the 1960s, the use of filters and "light" cigarettes has neither decreased the nicotine intake per cigarette, nor has it lowered the incidence of lung cancers (NCI, 2001; IARC 83, 2004; U.S. Surgeon General, 2004). The shift over the years from higher- to lower-yield cigarettes may explain the change in the pathology of lung cancer. That is, the percentage of lung cancers that are adenocarcinomas has increased, while the percentage of squamous cell cancers has decreased. The change in tumor type is believed to reflect the higher nitrosamine delivery of lower-yield cigarettes and the increased depth or volume of inhalation of lower-yield cigarettes to compensate for lower level concentrations of nicotine in the smoke.
In the United States, lung cancer incidence and mortality rates are particularly high among African American men. Lung cancer tends to be most common in developed countries, particularly in North America and Europe, and less common in developing countries, particularly in Africa and South America.
See also
Liquid smoke
Electronic cigarette aerosol
Tobacco smoke enema
References
Aerosols
Tobacco smoking
Tobacco
Toxicology
Smoke | Tobacco smoke | Chemistry,Environmental_science | 1,175 |
35,683,128 | https://en.wikipedia.org/wiki/Oglethorpe%20Plan | The Oglethorpe Plan is an urban planning idea that was most notably used in Savannah, Georgia, one of the Thirteen Colonies, in the 18th century. The plan uses a distinctive street network with repeating squares of residential blocks, commercial blocks, and small green parks to create integrated, walkable neighborhoods.
James Edward Oglethorpe founded the Georgia Colony, and the town of Savannah, in 1733. The new Georgia colony was authorized under a grant from George II to a group constituted by Oglethorpe as the Trustees for the Establishment of the Colony of Georgia in America, or simply the Georgia Trustees.
Oglethorpe's plan for settlement of the new colony had been in the works since 1730, three years before the founding of Savannah. The multifaceted plan sought to achieve several goals through interrelated policy and design elements, including the spacing of towns, the layout of towns and eventually their surrounding counties, equitable allocation of land, and limits to growth to preserve a sustainable agrarian economy.
Historical significance
The Oglethorpe Plan was an embodiment of all of the major themes of the Enlightenment, including science, humanism, and secular government. Georgia became the only American colony infused at its creation with Enlightenment ideals: the last of the Thirteen Colonies, it would become the first to embody the principles later embraced by the founders. Remnants of the Oglethorpe Plan exist today in Savannah, showcasing a town plan that retains the vibrancy of ideas behind its conception.
At the heart of Oglethorpe's comprehensive and multi-faceted plan there was a vision of social equity and civic virtue. The mechanisms supporting that vision, including yeoman governance, equitable land allocation, stable land tenure, prohibition of slavery, and secular administration, were among the ideas debated during the British Enlightenment. Many of those ideals have been carried forward, and are found today in Savannah's Tricentennial Plan and other policy documents.
Sources for the Oglethorpe Plan
The Grand Model for the Province of Carolina was cited by the Georgia Trustees as a source of their plan for Georgia, although with the major difference that it would have neither aristocracy nor slavery. Oglethorpe wrote that the plan was conceived with "toleration" and "wholesome regulations." Benjamin Martyn, the trustees' secretary, wrote, "We are indebted to the Lord Shaftsbury, and that truly wise man Mr. Locke, for the excellent laws which they drew up for the first settlement of Carolina." Other sources are speculative, since they were not cited by Oglethorpe or the trustees. Such possible inspirations include classical planning concepts dating to Vitruvius and Roman colonial planning (e.g., Timgad), Renaissance concepts of the ideal city, and later plans such as the Vauban plan of Neuf-Brisach.
Notable comments
Many prominent planners and urban theorists have commented on various attributes of the Oglethorpe Plan, particularly the layout of Savannah. The quotes cited below are only a few of many laudatory comments.
"The famous Oglethorpe plan for Savannah … made a unique use of the square in the design, nothing like it having appeared in a town plan before or since. Here, in Savannah, the square by frequent repetition becomes an integral part of the street pattern and creates a series of rhythmically placed openings which give a wonderful sense of space in a solidly built townscape." –Paul Zucker
"… a plan so exalted that it remains as one of the finest diagrams for city organization and growth in existence." –Edmund Bacon
"[T]he grid pattern of Savannah . . . is like no other we know in its fineness and its distinguishable squares. . . . [O]nce seen it is unforgettable, and it carries over into real life experience." –Allan Jacobs
"Savannah occupies a unique position in the history of city planning. No complete precedents exist for its pattern of multiple open spaces…." –John W. Reps
Such comments nearly always apply to the ward layout found in the Savannah historic district, where the city preserved and elaborated on the original town plan laid out by Oglethorpe. Though seldom mentioned, notable vestiges of the Oglethorpe Plan can be found in the land use pattern surrounding Savannah; in the cities of Darien, Georgia; Brunswick, Georgia; and at Fort Frederica National Monument on St. Simons Island, Georgia.
Implementation
Oglethorpe developed a town plan in which the basic design unit was the ward. Wards were composed of four tything (residential) blocks and four trust (civic) blocks, arrayed around a central square. The tything blocks contained ten houses, which was the basic organizational unit for administration, farming, and defense. Each tything was assigned a square mile tract outside town for farming, with each family farming a 45-acre plot within that tract. The tything trained together for militia duty, a necessity on the frontier. Families were also assigned five-acre kitchen gardens near town.
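The land allocation described above is easy to tally per tything (a sketch; the 640 acres in a square mile is a standard conversion, not stated in the text):

```python
ACRES_PER_SQ_MILE = 640   # standard conversion (assumed, not from the text)
HOUSES_PER_TYTHING = 10   # from the text
FARM_PLOT_ACRES = 45      # per-family plot within the tything's tract
GARDEN_ACRES = 5          # per-family kitchen garden near town

farmed = HOUSES_PER_TYTHING * FARM_PLOT_ACRES     # acres in family farm plots
remainder = ACRES_PER_SQ_MILE - farmed            # tract acreage left over
gardens = HOUSES_PER_TYTHING * GARDEN_ACRES       # total kitchen-garden acreage

print(farmed, remainder, gardens)  # -> 450 190 50
```

So family plots covered about 70% of each tything's square-mile tract, leaving 190 acres unallocated (the inference about the leftover land is ours, not a claim from the text).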
Oglethorpe's town plan was initially developed for Savannah, which grew largely in accordance with the original design. The same basic plan was intended for replication in towns throughout the colony; however, the original design survives in few towns. Recently, Brunswick, Georgia, adopted a version of the design modeled after the Trustee period.
The City of Savannah has preserved the ward design within its National Historic Landmark District. Oglethorpe originally laid out six wards in Savannah. The design proved remarkably adaptable as the city grew, and city officials perpetuated the same basic model for more than a century. Ultimately, twenty-four wards were laid out in general accordance with the original design, filling most of the original square-mile town common.
The city's modern street grid outside of the historic district follows much of the original system of rights-of-way established under the Oglethorpe Plan for the gardens, farms, and villages that made up the Savannah region.
Legacy
Relevance of the plan today
Many of the principles found in the Oglethorpe Plan are as relevant today as the democratic principles articulated at the dawn of the American Revolution. Urban theorists have duly acknowledged Oglethorpe's remarkable design legacy in Savannah, but most have said little about the plan's larger purpose of fostering social equity. Urban planners and designers in Savannah have rediscovered Oglethorpe's principles of integrated town planning, incorporating them in the city's comprehensive plan and various implementing ordinances. The city's success in doing so now stands as a model inviting wider application.
The plan is fundamentally different from modern town or community plans by allowing for growth in small, interlocking units, or wards, of approximately 10 acres (4 hectares). The exact size of a ward will vary depending on the width of streets that bound it. In Savannah, streets between wards vary from 45 feet to 120 feet, including sidewalks and landscaped medians. While the size of a ward may vary, it is important to keep it within about 15 percent of Oglethorpe's original layout. In following this standard, automobile traffic is naturally limited to speeds of about 20 mph (30 km/h), the threshold for pedestrian comfort in a mixed-modal, shared space environment.
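The size band implied by the "within about 15 percent" rule is narrow (a sketch; the acre-to-hectare factor is the standard definition of the international acre, not from the text):

```python
WARD_ACRES = 10.0             # nominal ward size from the text
TOLERANCE = 0.15              # "within about 15 percent"
ACRE_TO_HA = 0.40468564224    # international acre, exact by definition

low, high = WARD_ACRES * (1 - TOLERANCE), WARD_ACRES * (1 + TOLERANCE)
print(f"{low:.1f}-{high:.1f} acres "
      f"({low * ACRE_TO_HA:.1f}-{high * ACRE_TO_HA:.1f} ha)")
# -> 8.5-11.5 acres (3.4-4.7 ha)
```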
Another way in which the plan is fundamentally different from most designs today is in maximizing lot coverage on buildable lots while minimizing the open space requirement on those lots. Minimizing or eliminating these standards can be done because open space is provided in the public realm. A ward contains approximately 50% developable area and 50% public area (depending on the width of bounding streets), and because the public area is shared space streets contribute to open space both aesthetically and functionally.
Influence in contemporary planning
The Savannah town plan has been praised profusely, as mentioned earlier, but no recent or contemporary replicas of it exist either in infill or suburban developments, even in Savannah's own districts that lie beyond the original square mile commons. Its cellular ward system has been cited as a unique example of fractal, or "organic", city growth in which each ward cell is a microcosm of the entire city.
Its recognized and praised advantages have recently been incorporated in a planning model, which is also cellular, that shows the influence of the Oglethorpe plan – the Fused Grid. Diagrammatic and approved plans, based on this model reflect the Savannah plan principle of organizing buildable space around open space. In this reformulated expression of it, which accommodates contemporary planning, technological and cultural priorities, Oglethorpe's Town Plan could find a renewed appreciation and wider replication.
References
Bibliography
Bacon, Edmund N. Design of Cities. New York: Penguin Books, 1974.
Ettinger, Amos Aschbach. James Edward Oglethorpe: Imperial Idealist. Archon Books, 1968. Reprinted with permission of Oxford University Press.
Fries, Sylvia Doughty. The Urban Idea in Colonial America. Philadelphia: Temple University Press, 1977.
Harrington, James. The Commonwealth of Oceana. Tutis Digital Publishing Pvt Ltd. 2008. Originally published in 1656.
Jacobs, Allan B. Great Streets. Cambridge, MA: MIT Press, 1993.
Lane, Mills B., editor. General Oglethorpe’s Georgia: Colonial Letters, 1733–1743. Two volumes. Savannah: Beehive Press, 1990.
Manuscripts of the Earl of Egmont: Diary of Viscount Percival, Afterwards First Earl of Egmont. Three volumes. London: Historical Manuscripts Commission, 1920–23.
Martyn, Benjamin. "Some Account of the Trustees Design for the Establishment of the Colony of Georgia in America." London: 1732.
Oglethorpe, James. "A New and Accurate Account of the Provinces of South Carolina and Georgia," 1732
Oglethorpe, James. The Publications of James Edward Oglethorpe. Rodney M. Baine, editor. Athens: University of Georgia Press, 1994.
Pocock, J. G. A. Politics, Language and Time: Essays on Political Thought and History. New York: Athenum, 1971.
Reps, John W. "C2 + L2 S2? Another Look at the Origins of Savannah’s Town Plan." In Forty Years of Diversity: Essays on Colonial Georgia, edited by Harvey H. Jackson and Phinizy Spalding, 101–51. Athens: University of Georgia Press, 1984.
Reps, John. The Making of Urban America: A History of City Planning in the United States. Princeton, NJ: Princeton University Press, 1965.
Spalding, Phinizy. "James Edward Oglethorpe’s Quest for the American Zion." Jackson, Harvey H., and Phinizy Spalding, eds. Forty Years of Diversity. Essays on Colonial Georgia. Athens: University of Georgia Press, 1984. 60–79.
Spalding, Phinizy. Oglethorpe in America. Athens, Georgia: University of Georgia Press (Brown Thrasher Book), 1984.
Taylor, Paul S. Georgia Plan: 1732–1752. Berkeley: Institute of Business and Economic Research, 1972.
Chatham County-Savannah Tricentennial Plan. Chatham County-Savannah Metropolitan Planning Commission, 2006.
Wilson, Thomas D. The Oglethorpe Plan: Enlightenment Design in Savannah and Beyond. Charlottesville, VA: University of Virginia Press, 2012.
Zucker, Paul. Town and Square: From the Agora to the Village Green. New York: Columbia University Press, 1959.
History of Georgia (U.S. state)
Urban planning in Georgia (U.S. state)
Urban planning
Squares of Savannah, Georgia | Oglethorpe Plan | Engineering | 2,375 |
20,293,300 | https://en.wikipedia.org/wiki/Cross-platform%20support%20middleware | A cross-platform support middleware (CPSM) is a software abstraction layer that guarantees the existence, and correct implementation, of a set of services on top of a set of platforms.
Abstraction method
The abstraction method in the CPSM development is the method used to compile the concrete source code for a given platform without compromising the abstract interfaces provided.
The most commonly used abstraction methods in CPSM development are: conditional compilation and directory separation of sources.
The first method consists of embedding preprocessor instructions in the source code to conditionally select the source subtree compatible with a given platform.
The second method takes advantage of the filesystem organization to divide the source code into different folders, one for each incompatible platform, thus delegating the selection problem to the build system.
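As an illustration of the second method, a build script can map the detected platform to exactly one source subtree; the directory names and platform keys below are hypothetical, not taken from any particular project:

```python
import sys

# Hypothetical source layout: one folder per incompatible platform.
PLATFORM_DIRS = {
    "linux": "src/posix",
    "darwin": "src/posix",
    "win32": "src/win32",
}

def select_source_dir(platform: str = sys.platform) -> str:
    """Pick the single platform-specific source subtree to compile."""
    try:
        return PLATFORM_DIRS[platform]
    except KeyError:
        raise RuntimeError(f"unsupported platform: {platform}")
```

A build system such as GNU Make achieves the same effect by adding only the selected directory to its list of source paths.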
Some distributions like MSYS and Cygwin may help build the cross-platform code in a Unix-like environment even on Microsoft Windows. Both distributions provide a decent version of GNU Make that can direct the build process in a cross-platform fashion.
See also
Adaptive Communication Environment
Boost C++ libraries
GTK+
Netscape Portable Runtime
Simple DirectMedia Layer
wxWidgets
References
Middleware | Cross-platform support middleware | Technology,Engineering | 241 |
41,733,409 | https://en.wikipedia.org/wiki/Rigidity%20theory%20%28physics%29 | Rigidity theory, or topological constraint theory, is a tool for predicting properties of complex networks (such as glasses) based on their composition. It was introduced by James Charles Phillips in 1979 and 1981, and refined by Michael Thorpe in 1983. Inspired by the study of the stability of mechanical trusses as pioneered by James Clerk Maxwell, and by the seminal work on glass structure done by William Houlder Zachariasen, this theory reduces complex molecular networks to nodes (atoms, molecules, proteins, etc.) constrained by rods (chemical constraints), thus filtering out microscopic details that ultimately don't affect macroscopic properties. An equivalent theory was developed by P. K. Gupta and A. R. Cooper in 1990, where rather than nodes representing atoms, they represented unit polytopes. An example of this would be the SiO₄ tetrahedra in pure glassy silica. This style of analysis has applications in biology and chemistry, such as understanding adaptability in protein-protein interaction networks. Rigidity theory applied to the molecular networks arising from phenotypical expression of certain diseases may provide insights regarding their structure and function.
In molecular networks, atoms can be constrained by radial 2-body bond-stretching constraints, which keep interatomic distances fixed, and angular 3-body bond-bending constraints, which keep angles fixed around their average values. As stated by Maxwell's criterion, a mechanical truss is isostatic when the number of constraints equals the number of degrees of freedom of the nodes. In this case, the truss is optimally constrained, being rigid but free of stress. This criterion has been applied by Phillips to molecular networks, which are called flexible, stressed-rigid or isostatic when the number of constraints per atoms is respectively lower, higher or equal to 3, the number of degrees of freedom per atom in a three-dimensional system.
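The counting of constraints referenced above can be made explicit with the standard Phillips–Thorpe enumeration used in the chalcogenide-glass literature (an illustrative summary, valid for atoms with coordination number r ≥ 2, not a formula quoted from this article):

```latex
n_c \;=\; \underbrace{\frac{r}{2}}_{\text{bond-stretching}} \;+\; \underbrace{(2r-3)}_{\text{bond-bending}},
\qquad n_c = 3 \;\Longrightarrow\; \langle r \rangle = \tfrac{12}{5} = 2.4 .
```

Setting the total number of constraints equal to the 3 degrees of freedom per atom recovers the well-known isostatic mean coordination number of 2.4 for covalent networks.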
The same condition applies to random packing of spheres, which are isostatic at the jamming point.
Typically, the conditions for glass formation will be optimal if the network is isostatic, which is for example the case for pure silica. Flexible systems show internal degrees of freedom, called floppy modes, whereas stressed-rigid ones are locked by their high number of constraints and tend to crystallize instead of forming glass during a quick quenching.
Derivation of isostatic condition
The conditions for isostaticity can be derived by looking at the internal degrees of freedom of a general 3D network. For $N$ nodes, $N_c$ constraints, and $N_{eq}$ equations of equilibrium, the number of degrees of freedom is
$F = 3N - N_c - N_{eq}$
The node term picks up a factor of 3 due to there being translational degrees of freedom in the x, y, and z directions. By similar reasoning, $N_{eq} = 6$ in 3D, as there is one equation of equilibrium for translational and rotational modes in each dimension. This yields
$F = 3N - N_c - 6$
This can be applied to each node in the system by normalizing by the number of nodes:
$f = 3 - n_c - \frac{6}{N}$
where $f = F/N$, $n_c = N_c/N$, and the last term has been dropped since $6/N \to 0$ for atomistic systems ($N \to \infty$), giving $f = 3 - n_c$. Isostatic conditions are achieved when $f = 0$, yielding the number of constraints per atom in the isostatic condition of $n_c = 3$.
An alternative derivation is based on analyzing the shear modulus $G$ of the 3D network or solid structure. The isostatic condition, which represents the limit of mechanical stability, is equivalent to setting $G = 0$ in a microscopic theory of elasticity that provides $G$ as a function of the internal coordination number of nodes and of the number of degrees of freedom. The problem has been solved by Alessio Zaccone and E. Scossa-Romano in 2011, who derived the analytical formula for the shear modulus of a 3D network of central-force springs (bond-stretching constraints):
$G = \frac{1}{30}\,\frac{N}{V}\,\kappa\,R_0^{2}\,(z - 2d)$.
Here, $\kappa$ is the spring constant, $R_0$ is the distance between two nearest-neighbor nodes, $z$ the average coordination number of the network (note that each bond is shared between two nodes, so the number of bond constraints per node is $z/2$), and $2d = 6$ in 3D. A similar formula has been derived for 2D networks, with a different numerical prefactor.
Hence, based on the Zaccone–Scossa-Romano expression for $G$, upon setting $G = 0$ one obtains $z = 2d$, or equivalently in different notation $z = 6$, which defines the Maxwell isostatic condition.
A similar analysis can be done for 3D networks with bond-bending interactions (on top of bond-stretching), which leads to the isostatic condition $z = 2.4$, with a lower threshold due to the angular constraints imposed by bond-bending.
Developments in glass science
Rigidity theory allows the prediction of optimal isostatic compositions, as well as the composition dependence of glass properties, by a simple enumeration of constraints. These glass properties include, but are not limited to, elastic modulus, shear modulus, bulk modulus, density, Poisson's ratio, coefficient of thermal expansion, hardness, and toughness. In some systems, due to the difficulty of directly enumerating constraints by hand and knowing all system information a priori, the theory is often employed in conjunction with computational methods in materials science such as molecular dynamics (MD). Notably, the theory played a major role in the development of Gorilla Glass 3. Extended to glasses at finite temperature and finite pressure, rigidity theory has been used to predict glass transition temperature, viscosity and mechanical properties. It was also applied to granular materials and proteins.
In the context of soft glasses, rigidity theory has been used by Alessio Zaccone and Eugene Terentjev to predict the glass transition temperature of polymers and to provide a molecular-level derivation and interpretation of the Flory–Fox equation. The Zaccone–Terentjev theory also provides an expression for the shear modulus of glassy polymers as a function of temperature which is in quantitative agreement with experimental data, and is able to describe the many orders of magnitude drop of the shear modulus upon approaching the glass transition from below.
In 2001, Boolchand and coworkers found that the isostatic compositions in glassy alloys—predicted by rigidity theory—exist not just at a single threshold composition; rather, in many systems it spans a small, well-defined range of compositions intermediate to the flexible (under-constrained) and stressed-rigid (over-constrained) domains. This window of optimally constrained glasses is thus referred to as the intermediate phase or the reversibility window, as the glass formation is supposed to be reversible, with minimal hysteresis, inside the window. Its existence has been attributed to the glassy network consisting almost exclusively of a varying population of isostatic molecular structures. The existence of the intermediate phase remains a controversial, but stimulating topic in glass science.
See also
Rigidity Percolation
References
Materials science
Glass physics | Rigidity theory (physics) | Physics,Materials_science,Engineering | 1,334 |
25,495,851 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20March%2019%2C%202072 | A partial solar eclipse will occur at the Moon's descending node of orbit on Saturday, March 19, 2072, with a magnitude of 0.7199. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth.
The partial solar eclipse will be visible for parts of Antarctica and southern South America.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2072
A total lunar eclipse on March 4.
A partial solar eclipse on March 19.
A total lunar eclipse on August 28.
A total solar eclipse on September 12.
Metonic
Preceded by: Solar eclipse of May 31, 2068
Followed by: Solar eclipse of January 6, 2076
Tzolkinex
Preceded by: Solar eclipse of February 5, 2065
Followed by: Solar eclipse of May 1, 2079
Half-Saros
Preceded by: Lunar eclipse of March 14, 2063
Followed by: Lunar eclipse of March 25, 2081
Tritos
Preceded by: Solar eclipse of April 20, 2061
Followed by: Solar eclipse of February 16, 2083
Solar Saros 150
Preceded by: Solar eclipse of March 9, 2054
Followed by: Solar eclipse of March 31, 2090
Inex
Preceded by: Solar eclipse of April 9, 2043
Followed by: Solar eclipse of February 28, 2101
Triad
Preceded by: Solar eclipse of May 19, 1985
Followed by: Solar eclipse of January 19, 2159
Solar eclipses of 2069–2072
Saros 150
Metonic series
Tritos series
Inex series
References
External links
2072 3 19
2072 in science | Solar eclipse of March 19, 2072 | Astronomy | 505
78,422,234 | https://en.wikipedia.org/wiki/S5%202007%2B777 | S5 2007+777 is a classical BL Lacertae object located in the constellation of Draco. This object has a redshift of (z) 0.342 and was first discovered in 1981 as a flat-spectrum astronomical radio source. It has characteristics of different Fanaroff-Riley classes on both sides of its active nucleus making it a rare type of Hybrid morphology radio sources (HYMORs). It has an estimated V magnitude of 16.5.
Description
S5 2007+777 is classified as a blazar showing variability across the electromagnetic spectrum, with amplitudes rising steadily with frequency. It is also an Intraday Variable (IDV) source, exhibiting variations in total as well as polarized intensity on time scales ranging from 2 to 6 days at centimeter (cm) wavelengths. In dereddened B and I-band light curves from observations conducted in 2001, S5 2007+777 shows a smaller amplitude variation of 10%. Subsequent observations conducted in 2002 and 2004 show the object having minimum-to-maximum variations of order 30–40% on 2–4 day time scales.
Although mostly in a quiescent state, one outburst was detected in S5 2007+777 between 1991 and 1992 with the peak flux of this source reaching 3.69 Jansky at 14.5 GHz. A gamma ray flare was detected in February 2016 during an observation from the Foligno Observatory via a 30 cm telescope.
According to radio imaging of S5 2007+777 made with the Very Long Baseline Array, a one-sided core-jet structure is found, with one of the components exhibiting proper motion and greater flux density. Imaging by the Very Large Array and Very Long Baseline Interferometry instead shows the object as a core-dominated source, consisting of a bright radio lobe on the eastern side and a long jet on the western side of the nucleus, which terminates without a clear hot spot upon reaching 10 arcseconds from the nucleus. This jet is known to show superluminal motion, being aligned 24° to the line of sight, with its 4.9 GHz luminosity calculated to be 10³² erg s⁻¹ Hz⁻¹.
The radio emission of the jet in S5 2007+777 shows several unique radio knots, with the brightest one located midstream. The jet itself, imaged at 1.49 GHz, has an extended linear structure that bends west with a change in position angle of 20°. An extended X-ray jet was also found by Chandra on kiloparsec scales, making S5 2007+777 one of only four BL Lacertae objects known to have this feature.
References
External links
S5 2007+777 on SIMBAD
S5 2007+777 on NASA/IPAC Database
BL Lacertae objects
Draco (constellation)
Blazars
Quasars
Active galaxies
Astronomical objects discovered in 1981 | S5 2007+777 | Astronomy | 597 |
1,730,537 | https://en.wikipedia.org/wiki/March%20equinox | The March equinox or northward equinox is the equinox on the Earth when the subsolar point appears to leave the Southern Hemisphere and cross the celestial equator, heading northward as seen from Earth. The March equinox is known as the vernal equinox (spring equinox) in the Northern Hemisphere and as the autumnal equinox (autumn equinox or fall equinox) in the Southern Hemisphere.
On the Gregorian calendar at 0° longitude, the northward equinox can occur as early as 19 March (which happened most recently in 1796, and will happen next in 2044). And it can occur as late as 21 March (which happened most recently in 2007, and will happen next in 2102). For a common year the computed time slippage is about 5 hours 49 minutes later than the previous year, and for a leap year about 18 hours 11 minutes earlier than the previous year. Balancing the increases of the common years against the losses of the leap years keeps the calendar date of the March equinox from drifting more than one day from 20 March each year.
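The figures above can be checked with simple minute arithmetic: over a typical four-year cycle, three common-year delays and one leap-year correction nearly cancel, which is why the date stays close to 20 March (a sketch using only the values quoted in the text):

```python
# Year-to-year shift of the equinox instant, in minutes.
COMMON_YEAR_SHIFT_MIN = 5 * 60 + 49     # about 5 h 49 min later
LEAP_YEAR_SHIFT_MIN = -(18 * 60 + 11)   # about 18 h 11 min earlier

# The two shifts are complementary: together they span one full day.
assert COMMON_YEAR_SHIFT_MIN - LEAP_YEAR_SHIFT_MIN == 24 * 60

# Net drift over a 4-year cycle: three common years plus one leap year.
net_four_year_drift = 3 * COMMON_YEAR_SHIFT_MIN + LEAP_YEAR_SHIFT_MIN
print(net_four_year_drift, "minutes")  # a small residual drift
```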
The March equinox may be taken to mark the beginning of astronomical spring and the end of astronomical winter in the Northern Hemisphere but marks the beginning of astronomical autumn and the end of astronomical summer in the Southern Hemisphere.
In astronomy, the March equinox is the zero point of sidereal time and, consequently, the right ascension and ecliptic longitude. It also serves as a reference for calendars and celebrations in many cultures and religions.
Constellation
The point where the Sun crosses the celestial equator northwards is called the First Point of Aries. However, due to the precession of the equinoxes, this point is no longer in the constellation Aries, but rather in Pisces. By the year 2600 it will be in Aquarius. The precession of the Earth's axis causes the First Point of Aries to travel westwards across the sky at a rate of roughly one degree every 72 years. Based on the modern constellation boundaries, the northward equinox passed from Taurus into Aries in the year −1865 (1866 BC), passed into Pisces in the year −67 (68 BC), will pass into Aquarius in the year 2597, and will pass into Capricornus in the year 4312. It passed by (but not into) a 'corner' of Cetus at 0°10′ distance in the year 1489.
Apparent movement of the Sun
In its apparent motion on the day of an equinox, the Sun's disk crosses the Earth's horizon directly to the east at sunrise; and again, some 12 hours later, directly to the west at sunset. The March equinox, like all equinoxes, is characterized by having an almost exactly equal amount of daylight and night across most latitudes on Earth.
Culture
Calendars
The Babylonian calendar began with the first new moon after the March equinox, the day after the return of the Sumerian goddess Inanna (later known as Ishtar) from the underworld, in the Akitu ceremony, with parades through the Ishtar Gate to the Eanna temple and the ritual re-enactment of the marriage to Tammuz, or Sumerian Dummuzi.
The Persian calendar begins each year at the northward equinox, observationally determined at Tehran.
The Indian national calendar starts the year on the day next to the vernal equinox on 22 March (21 March in leap years) with a 30-day month (31 days in leap years), then has 5 months of 31 days followed by 6 months of 30 days.
Julian calendar
The Julian calendar reform lengthened seven months and replaced the intercalary month with an intercalary day to be added every four years to February. It was based on a length for the year of 365 days and 6 hours (365.25 d), while the mean tropical year is about 11 minutes and 15 seconds less than that. This had the effect of adding about three quarters of an hour every four years. The effect accumulated from inception in 45 BC until the 16th century, when the northern vernal equinox fell on 10 or 11 March.
The date in 1452 was 11 March, 11:52 (Julian).
In 2547 it will be 20 March, 21:18 (Gregorian) and 3 March, 21:18 (Julian).
Commemorations
Abrahamic tradition
The Jewish Passover usually falls on the first full moon after the Northern Hemisphere vernal equinox, although occasionally (currently three times every 19 years) it will occur on the second full moon.
The Christian Churches calculate Easter as the first Sunday after the first full moon on or after the March equinox. The official church definition for the equinox is 21 March. The Eastern Orthodox Churches use the older Julian calendar, while the western churches use the Gregorian calendar, and the western full moons currently fall four, five or 34 days before the eastern ones. The result is that the two Easters generally fall on different days but they sometimes coincide. The earliest possible western Easter date in any year is 22 March on each calendar. The latest possible western Easter date in any year is 25 April.
Iranian tradition
The northward equinox marks the first day of various calendars including the Iranian calendar. The ancient Iranian peoples' new year's festival of Nowruz can be celebrated 20 March or 21 March. According to the ancient Persian mythology Jamshid, the mythological king of Persia, ascended to the throne on this day and each year this is commemorated with festivities for two weeks. Along with Iranian peoples, it is also a holiday celebrated by Turkic people, the North Caucasus and in Albania. It is also a holiday for Zoroastrians, adherents of the Baháʼí Faith and Nizari Ismaili Muslims irrespective of ethnicity.
West Asia and North Africa
In many Arab countries, Mother's Day is celebrated on the northward equinox.
Sham el-Nessim is a modern celebration which is claimed by some to have been celebrated in ancient Egypt but with little evidence. It is one of the public holidays in Egypt. It is assumed by some that sometime during Egypt's Christian period (–639) the date moved to Easter Monday, but before then it coincided with the vernal equinox.
South and Southeast Asia
According to the sidereal solar calendar, celebrations which originally coincided with the March equinox now take place throughout South Asia and parts of Southeast Asia on the day when the Sun enters the sidereal Aries, generally around 14 April.
In Cambodia, the Angkor Wat Equinox is a solar phenomenon which dates back to the reign of Suryavarman II.
East Asia
The traditional East Asian calendars divide a year into 24 solar terms (节气, literally "climatic segments"), and the vernal equinox (Chūnfēn, ) marks the middle of the spring. In this context, the Chinese character 分 means "(equal) division" (within a season).
In Japan, Vernal Equinox Day (春分の日 Shunbun no hi) is an official national holiday, and is spent visiting family graves and holding family reunions. Higan (お彼岸) is a Buddhist holiday exclusively celebrated by Japanese sects during both the Spring and Autumnal Equinox.
Europe
Dita e Verës or Verëza is the Albanian pagan feast that celebrates the spring equinox: the beginning of the spring-summer period. It is traditionally celebrated throughout Albanian inhabited territories, also officially in Albania.
Hilaria was an ancient Roman festival commemorating the death and resurrection of Attis.
Lieldienas
In Norse paganism, a Dísablót was celebrated on the vernal equinox.
Drowning of Marzanna
The Americas
Spring equinox in Teotihuacán
The reconstructed Cahokia Woodhenge, a large timber circle located at the Mississippian culture Cahokia archaeological site near Collinsville, Illinois, is the site of annual equinox and solstice sunrise observances. Out of respect for Native American beliefs these events do not feature ceremonies or rituals of any kind.
Modern culture
World Storytelling Day is a global celebration of the art of oral storytelling, celebrated every year on the day of the northward equinox.
World Citizen Day occurs on the northward equinox.
The Baháʼí calendar year starts at the sunset preceding the March equinox calculated for Tehran.
In Annapolis, Maryland, United States, boatyard employees and sailboat owners celebrate the spring equinox with the "Burning of the Socks" festival. Traditionally, the boating community wears socks only during the winter. These are burned at the approach of warmer weather, which brings more customers and work to the area. Officially, nobody then wears socks until the next equinox.
Neopagans observe the March equinox (referred to as Ostara) as a cardinal point on the Wheel of the Year. In the northern hemisphere some varieties of paganism adapt vernal equinox celebrations, while in the southern hemisphere pagans adapt autumnal traditions.
International Astrology Day
On 20 March 2014 and 20 March 2018, the March equinox was commemorated by an animated Google Doodle.
See also
June solstice
September equinox
December solstice
References
External links
Spring Starts Today All Over America, Which Is Weird (19 March 2020)
Spherical astronomy
Dynamics of the Solar System
Equinox
Astronomical events of the Solar System
Spring equinox
Autumn equinox | March equinox | Astronomy | 1,972 |
15,442,567 | https://en.wikipedia.org/wiki/Handy%20billy | Handy billy (also known as handy-billie) is an emergency portable pump that for decades was commonly placed aboard most U.S. Navy ships from World War I on, and that later saw use on civilian craft.
Purpose of the pump
The handy billy, formally designated "P50" because it pumped 50 gallons per minute, was gasoline-powered and could be used, during flooding conditions, in conjunction with other pumps on the ship. However, it was especially valuable when the ship lost electrical power and, with it, normal pumping ability.
On smaller ships, it was a critical piece of equipment.
The pump gained its name because it was very “handy” and dependable. It was especially handy because it could be easily transported from place to place by two strong crew members, one at each end, as it weighed 160 pounds during World War II.
Versatility
The handy billy could be used for fire-fighting and/or pumping water from flooded spaces aboard ship.
Example of use
See
See also
Pump
References
External links
USS ATLANTA CL-51 - Battle damage during evening of 12 November 1942
Abandonment of the "Duncan" and Rescue of Her Survivors by the "McCalla"
Fire pump aboard ship to pump sea water.
Nautical terminology
Pumps | Handy billy | Physics,Chemistry | 249 |
15,442,567 | https://en.wikipedia.org/wiki/Ouvrage%20Bersillies | Ouvrage Bersillies is a petit ouvrage of the Maginot Line, built as part of the "New Fronts" program to address shortcomings in the Line's coverage of the border with Belgium. Like the other three ouvrages near Maubeuge, it is built on an old Séré de Rivières-system fortification, near the town of Bersillies. The preserved Ouvrage La Salmagne is nearby to the southeast. Bersillies is not open to the public.
Séré de Rivières
The original Ouvrage de Bersillies was built to the north of Bersillies in 1884-1895 as part of the Séré de Rivières fortifications of Maubeuge. The trapezoidal fort is surrounded by a ditch defended by counterscarps. It was armed with four 95 mm guns and several smaller pieces. The small infantry shelters or abris housed the troops. The position was planned to cover the D228 road. Bersillies was attacked by German forces in 1914 during the Siege of Maubeuge. Isolated far behind the front lines, it surrendered to the Germans with the other Maubeuge fortifications in early September 1914.
Design and construction
The Maginot-era site was approved in 1934. Work cost 6.04 million francs.
Description
The Maginot-era improvements to Bersillies comprise two combat blocks. The ouvrage was built within the walls of the old Ouvrage de Bersillies. An underground gallery connects the two blocks, with underground service and barracks spaces along the short gallery.
Block 1: infantry/entry block with one automatic rifle cloche (GFM-B), one mixed-arms cloche (AM), one grenade launcher cloche (LG), one automatic rifle embrasure and one machine gun/47mm anti-tank gun (JM/AC47) embrasure.
Block 2: infantry/entrance block with two GFM cloches, two AM cloches and two retractable twin machine gun turrets.
A number of small blockhouses are associated with Bersillies, as well as a casemate:
Casemate de Crèvecoeur: Double block with two JM/AC47 embrasures, two JM embrasures, one AM cloche and two GFM-B cloches. It is not connected to the ouvrage.
Manning
The 1940 manning of the ouvrage under the command of Captain Pujade comprised 97 men and 3 officers of the 84th Fortress Infantry Regiment. The units were under the umbrella of the 101st Fortress Infantry Division, 1st Army, Army Group 1.
History of the Maginot ouvrage
See Fortified Sector of Maubeuge for a broader discussion of the events of 1940 in the Maubeuge sector of the Maginot Line.
During the Battle of France in 1940, the invading German forces approached Maubeuge from the south and east, to the rear of the defensive line. The German 28th Infantry Division moved along the line of fortifications 19–22 May, rolling up blockhouses and larger fortifications. Bersillies was under attack on the morning of 22 May. Heavy close-range artillery fire on 23 May destroyed cloches and air intakes. With German troops on top of the ouvrage, Captain Pujade ceased firing and surrendered his garrison.
Current condition
Bersillies is owned by a hunting society. It is secured, but is not open for sightseeing. It features an unusual number of murals painted by its garrison, which remain in a good state of preservation.
See also
List of all works on Maginot Line
Siegfried Line
Atlantic Wall
Czechoslovak border fortifications
Notes
References
Bibliography
Allcorn, William. The Maginot Line 1928-45. Oxford: Osprey Publishing, 2003.
Kaufmann, J.E. and Kaufmann, H.W. Fortress France: The Maginot Line and French Defenses in World War II, Stackpole Books, 2006.
Kaufmann, J.E., Kaufmann, H.W., Jancovič-Potočnik, A. and Lang, P. The Maginot Line: History and Guide, Pen and Sword, 2011.
Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 1. Paris, Histoire & Collections, 2001.
Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 2. Paris, Histoire & Collections, 2003.
Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 3. Paris, Histoire & Collections, 2003.
Mary, Jean-Yves; Hohnadel, Alain; Sicard, Jacques. Hommes et Ouvrages de la Ligne Maginot, Tome 5. Paris, Histoire & Collections, 2009.
External links
Bersillies (petit ouvrage de) at fortiff.be .
Petit ouvrage de Bersillies at lignemaginot.com
Petit Ouvrage de Bersillies at tracesofwar.com
Le PO de Bersillies at wikimaginot.eu
BERS
Maginot Line
Séré de Rivières system
Fortifications of Maubeuge | Ouvrage Bersillies | Engineering | 1,147 |
4,686,837 | https://en.wikipedia.org/wiki/European%20Home%20Systems%20Protocol | European Home Systems (EHS) Protocol was a communication protocol aimed at home appliance control and communication using power line communication (PLC), developed by the European Home Systems Association (EHSA).
After merging with two other protocols, it became part of the KNX standard, which complies with the European Committee for Electrotechnical Standardization (CENELEC) norm EN 50090 and may serve as the basis for the first open standard for home and building control.
See also
Building automation
Home automation
External links
Home Automation with EHS: Cheap But Slow - Nikkei Electronics Asia
www.cenelec.eu - European Committee for Electrotechnical Standardization
www.konnex.org - association aimed at development of home and building control systems.
Home automation
Network protocols | European Home Systems Protocol | Technology | 159 |
31,467,742 | https://en.wikipedia.org/wiki/GlycomeDB | GlycomeDB is a database of carbohydrates including structural and taxonomic data.
See also
Glycomics
References
External links
http://www.glycome-db.org
Biological databases
Omics
Glycomics
Carbohydrate chemistry
Carbohydrates | GlycomeDB | Chemistry,Biology | 64 |
17,971,467 | https://en.wikipedia.org/wiki/Box%20blade | A box blade is a type of implement used on tractors for smoothing and contouring land. It is almost always unpowered, though can have auxiliary hydraulics attached in order for adjustments to be made without leaving the seat of the tractor. It is attached to the tractor via the three point hitch.
Box blades are a variation of the rear blade that has developed into its own implement, with uses that parallel those of a rear blade but are distinct. They consist of a heavy metal three-sided box, with the front, top, and bottom open. The front has retractable scarifiers, which can be used to break up hard ground. The rear has a forward and reverse cutting edge mounted at the bottom, with the reverse cutting edge often gated or floating on more expensive models. Box blades are usually very heavy due to the forces placed on them; even a lightweight one will weigh 500 pounds. The implement works going forward by scraping soil, carrying it forward as it rolls inside the box, and allowing some to work out as the blade passes over low spots. It works going back much like a dozer blade, pushing dirt, which can spill out the bottom or sides. Unlike a dozer blade, box blades are not adjustable relative to the tractor more than a few degrees in any direction, and generally only when the operator dismounts the tractor.
Box blades are primarily used for moving dirt, smoothing land, and contouring land. Most major tractor implement manufacturers make box blades, including Cammond, Woods, Gannon, and BushHog. Commercial laser-guided models are coming into use, which automatically level the blade by reference to an external laser.
Three point road graders can be viewed as a specific type of box blade, and are used to grade and maintain dirt and gravel roads. They usually have 2 parallel angled cutting edges, and long, low sidepieces. Manufacturers include DoMor Equipment Dura-Grader, Road Boss, and RoadRunner.
Construction equipment | Box blade | Engineering | 400 |
62,623,184 | https://en.wikipedia.org/wiki/Ethernet%20VPN | Ethernet VPN (EVPN) is a technology for carrying layer 2 Ethernet traffic as a virtual private network using wide area network protocols. EVPN technologies include Ethernet over MPLS and Ethernet over VXLAN.
EVPN uses encapsulation to ensure efficient and scalable transmission of Ethernet traffic over MPLS or IP-based networks: Ethernet frames are wrapped within MPLS or VXLAN headers for transport.
MPLS Encapsulation
In MPLS-based EVPN, Ethernet frames are encapsulated with:
MPLS Label Stack: Each EVPN instance is associated with a unique label that helps in identifying the destination bridge domain.
Control Word (Optional): Provides additional information for synchronization and alignment in certain scenarios.
The encapsulated packet flow includes:
Original Ethernet Frame
MPLS Labels
Outer IP Header (in case of IP/MPLS networks)
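The VXLAN case can be illustrated in the same way. The sketch below (Python; the function name and sample values are hypothetical, not from any particular implementation) prepends the 8-byte VXLAN header defined in RFC 7348 to an inner Ethernet frame; in a real network the result would then be carried in an outer UDP datagram (destination port 4789) and IP header.

```python
import struct

def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an Ethernet frame.

    Layout: 1 flags byte (0x08 = "VNI present"), 24 reserved bits,
    a 24-bit VNI, then 8 reserved bits.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Two 32-bit words, network (big-endian) byte order.
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_ethernet_frame

frame = b"\x00" * 14            # placeholder inner Ethernet header
packet = vxlan_encapsulate(frame, vni=5000)
assert len(packet) == 8 + len(frame)
assert packet[0] == 0x08        # flags byte: VNI-present bit set
assert int.from_bytes(packet[4:7], "big") == 5000
```

The 24-bit VNI plays the role the MPLS label plays above: it identifies which bridge domain the inner frame belongs to.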
EVPNs are covered by a number of Internet RFCs, including:
"Requirements for Ethernet VPN (EVPN)",
"BGP MPLS-Based Ethernet VPN",
"A Network Virtualization Overlay Solution Using Ethernet VPN (EVPN)",
"Ethernet-Tree (E-Tree) Support in Ethernet VPN (EVPN) and Provider Backbone Bridging EVPN (PBB-EVPN)",
"Operational Aspects of Proxy ARP/ND in Ethernet Virtual Private Networks".
References
See also
Virtual Private LAN Service
Ethernet
Tunneling protocols | Ethernet VPN | Technology,Engineering | 319 |
67,454,425 | https://en.wikipedia.org/wiki/NGC%205384 | NGC 5384 is a lenticular galaxy in the constellation Virgo. It was discovered on May 8, 1864, by the astronomer Albert Marth. It is located about 250 million light-years (79.21 megaparsecs) away.
References
External links
Virgo (constellation)
5384
Lenticular galaxies | NGC 5384 | Astronomy | 67 |
60,119,704 | https://en.wikipedia.org/wiki/Foldable%20smartphone | A foldable smartphone (also known as a foldable phone or simply foldable) is a smartphone with a folding form factor. It is reminiscent of the clamshell (or "flip phone") design of many earlier feature phones. Some variants of the concept use multiple touchscreen panels on a hinge, while other designs utilise a flexible display. Concepts of such devices date back as early as Nokia's "Morph" concept in 2008, and a concept presented by Samsung Electronics in 2013 (as part of a larger set of concepts utilizing flexible OLED displays), while the first commercially available folding smartphones with OLED displays began to emerge in November 2018.
Some devices may fold out on a vertical axis into a wider, tablet-like form, but are still usable in a smaller, folded state; the display may either wrap around to the back of the device when folded (as with the Royole FlexPai and Huawei Mate X), or use a booklet-like design where the larger, folded screen is located on the interior, and a screen on its "cover" allows the user to interact with the device without opening it (such as the Samsung Galaxy Fold series). Horizontally-folding smartphones have also been produced, typically using a clamshell form factor.
The first generation of commercially released foldable smartphones faced concerns over their durability, as well as their high prices. In 2023, around 1% of worldwide smartphone ownership was foldable smartphones.
History
In 2006, Polymer Vision showed a rollable display concept and a foldable smartphone, the Readius, which also served as an e-reader, at the Mobile World Congress (MWC).
In 2008, Nokia presented animated concepts of a flexible device it dubbed "Morph", which had a tri-fold design that could be bent into various forms, such as a large unfolded device, a feature phone-sized unit, and a smart wristband. In a 2019 retrospective on the concept, CNET noted that Morph could be considered a forerunner to the first wave of commercially produced folding phones, as well as a showcase of future possibilities.
In 2011, Kyocera released a dual-touchscreen Android smartphone known as the Echo, which featured a pair of 3.5-inch touchscreens. When folded, the top screen continued to face the user while covering the secondary screen. Two individual apps could be shown on the displays, a single app could span across them, while specific apps also featured "optimized" two-pane layouts. Two years later, NEC released the Medias W in Japan. Unlike the Echo, the secondary screen could be folded behind the phone. The camera rotated with the screen so that the same sensor could face both forward and rearward. In 2017, ZTE released the Axon M with a similar hinge to the Medias W. ZTE stated that the more powerful hardware of modern smartphones, and improvements to multitasking and tablet support on Android, helped to improve this experience.
The development of thin, flexible OLED displays enabled the possibility for new designs and form factors. During its Consumer Electronics Show keynote in 2013, Samsung presented several concepts—codenamed Youm—for smartphones incorporating flexible displays. One such concept was a smartphone that could fold outward into a single, uninterrupted tablet-sized display. The first Youm concept to make it to a production model was the Galaxy Note Edge—a phablet with a portion of the screen that sloped over the right-hand bezel.
Speculation surrounding the development of folding phones using OLED displays began to emerge more rapidly in 2018. In January 2018, it was reported that LG Electronics had obtained a design patent for a folding smartphone. Later in June, it was reported that Microsoft had been developing a similar device as part of its Surface line, codenamed "Andromeda" (itself a spiritual successor to a dual-screen booklet tablet prototype Microsoft had been exploring in the late-2000s known as Courier), while Samsung was also said to be developing such a device.
In November 2018, the Chinese startup Royole released the first commercially available foldable smartphone with an OLED display, the Royole Flexpai. It featured a single 7.8-inch display that folds outwards, leaving the display exposed when folded. Later that month at its developers' conference, Samsung officially teased a prototype of its folding smartphone, which would be produced "in the coming months". The prototype used a booklet-style layout, with an "InfinityFlex" display located on the inside of the device, and a smaller "cover" screen on the front of the device to allow access when the screen is closed. At a concurrent developers' summit, Android VP of engineering Dave Burke stated that the next version of the platform would provide enhancements and guidance relevant to folding devices, leveraging existing features.
In January 2019, Xiaomi CEO Lin Bin published a video on Sina Weibo, featuring him demonstrating a prototype smartphone with two flaps capable of being folded inward. Samsung officially unveiled the Galaxy Fold during its media event at Mobile World Congress in February 2019. Alongside the Galaxy Fold, the convention also saw other foldable phones being unveiled or teased, such as the Huawei Mate X, and TCL presenting various prototype concepts featuring its "DragonHinge" technology (including a bracelet-styled device). LG did not unveil a folding device, citing a desire to focus more on re-gaining market share in the smartphone market. It did, however, unveil a "Dual Screen" case accessory for its LG V50 smartphone—a folio-styled case containing a secondary display panel inside.
Other companies expressed interest in the concept, or have received patents on designs (such as hinge implementations and overall designs) relating to foldable phones. Motorola Mobility had received patents for a horizontal folding smartphone reminiscent of clamshell feature phones.
In April 2019, the impending launch of the Galaxy Fold was met with quality concerns from critics, after widespread reports of review units experiencing varying forms of display failure (in some instances caused by accidental removal of a plastic layer meant to protect the screen in lieu of glass, along with other failures). Samsung indefinitely postponed the device's release, stating that it needed time to investigate the failures and improve the device's durability. Huawei also delayed its Huawei Mate X, with the company citing its desire to take a "cautious" approach due to the Samsung Galaxy Fold.
In November 2019, Motorola unveiled its horizontal-folding Razr—inspired by its former Razr feature phone line—which was released on 6 February 2020.
Samsung also announced a similar device known as the Galaxy Z Flip.
Huawei announced the Mate Xs on 24 February 2020 as a hardware revision of the original Mate X; it was released in "global markets" outside China in March 2020. The device features a more durable display, improved hinge function and a redesigned cooling system, as well as the newer Kirin 990 5G SoC and Android 10 with EMUI 10. Samsung later revealed the Samsung Galaxy Z Fold 2 in September 2020.
On 25 February 2021, Huawei released the Huawei Mate X2.
In March 2021, Xiaomi Technology announced the Xiaomi Mi MIX Fold.
In August 2021, the Samsung Galaxy Z Fold 3 and Samsung Galaxy Z Flip 3 were released.
On 15 December 2021, OPPO announced the OPPO Find N.
On 11 April 2022, Vivo introduced the Vivo X Fold. On 11 August 2022, Xiaomi released the Xiaomi MIX Fold 2.
The Samsung Galaxy Z Fold 4 and Samsung Galaxy Z Flip 4 were announced at the August 2022 edition of Galaxy Unpacked. The Galaxy Z Fold 4 was released on 25 August 2022, and the Galaxy Z Flip 4 was released on 26 August 2022.
Motorola Mobility launched the Moto Razr 2022 on 11 August 2022; it is currently only available in the Chinese market, but there is speculation that it could become available in other world markets at a later date. In June 2023, Motorola Mobility announced the new Razr (2023) and Razr+ (2023) for the U.S. market.
The Huawei Mate XT is the world's first double-folding, or tri-fold, foldable smartphone, released in September 2024. The device can be used with a case that has a kickstand, and with a foldable keyboard with a built-in trackpad to provide a desktop PC-like experience.
Components
Display materials
Foldable smartphones typically use flexible, plastic OLED displays rather than glass (such as Corning's Gorilla Glass product, which is used in the majority of mid and high-end smartphones). Plastic displays are naturally capable of sustaining the required bend radius for a foldable smartphone, but they are more susceptible to blemishes and scratches than traditional glass smartphone displays. Although Corning does produce a flexible glass product known as Willow Glass, the company states that its manufacturing process requires use of a salt solution—thus making it unsuitable for electronic displays because the salt can damage the transistors used in OLED panels (which are built directly on the panel). Nonetheless, the company stated in March 2019 that it was in the process of developing a flexible glass suitable for smartphones, which would be thick and have a bend radius.
Samsung marketed its Galaxy Z Flip as featuring -thick "ultra-thin glass" with a plastic layer similar to the Galaxy Fold, manufactured by Samsung with materials from Schott AG, which is "produced using an intensifying process to enhance its flexibility and durability", and injected with a "special material up to an undisclosed depth to achieve a consistent hardness". A stress test by YouTube channel JerryRigEverything showed the screen was scratched when rubbed with a pick with a Mohs rating of 2 (in comparison, most smartphones tested by the channel begin to experience scratches with 6 and 7-rated picks), placing its durability in line with other folding phones. However, The Verge did note Samsung's statement that the device contained a protective polymer layer similar to that of the Galaxy Fold.
List of foldable smartphone manufacturers
Google
Honor
Huawei
Motorola
OnePlus
OPPO
Samsung
Vivo
Xiaomi
ZTE
See also
Clamshell design
Flexible display
Form factor (mobile phones)
Laptop
References
Mobile phones
Tablet computers
Crossover devices | Foldable smartphone | Technology | 2,139 |
14,835,049 | https://en.wikipedia.org/wiki/Covering%20relation | In mathematics, especially order theory, the covering relation of a partially ordered set is the binary relation which holds between comparable elements that are immediate neighbours. The covering relation is commonly used to graphically express the partial order by means of the Hasse diagram.
Definition
Let X be a set with a partial order ≤.
As usual, let < be the relation on X such that x < y if and only if x ≤ y and x ≠ y.
Let x and y be elements of X.
Then y covers x, written x ⋖ y,
if x < y and there is no element z such that x < z < y. Equivalently, y covers x if the interval [x, y] is the two-element set {x, y}.
When x ⋖ y, it is said that y is a cover of x. Some authors also use the term cover to denote any such pair (x, y) in the covering relation.
Examples
In a finite linearly ordered set {1, 2, ..., n}, i + 1 covers i for all i between 1 and n − 1 (and there are no other covering relations).
In the Boolean algebra of the power set of a set S, a subset B of S covers a subset A of S if and only if B is obtained from A by adding one element not in A.
In Young's lattice, formed by the partitions of all nonnegative integers, a partition λ covers a partition μ if and only if the Young diagram of λ is obtained from the Young diagram of μ by adding an extra cell.
The Hasse diagram depicting the covering relation of a Tamari lattice is the skeleton of an associahedron.
The covering relation of any finite distributive lattice forms a median graph.
On the real numbers with the usual total order ≤, the cover set is empty: no number covers another.
Properties
If a partially ordered set is finite, its covering relation is the transitive reduction of the partial order relation. Such partially ordered sets are therefore completely described by their Hasse diagrams. On the other hand, in a dense order, such as the rational numbers with the standard order, no element covers another.
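For a finite poset, the covering relation can be computed directly from the definition. The following Python sketch (illustrative, not from the article) does so by brute force, checking for each comparable pair that no element lies strictly between them:

```python
from itertools import product

def covers(elements, leq):
    """Covering relation of a finite poset: pairs (x, y) with x < y
    and no z strictly between them.  leq(a, b) decides a <= b."""
    def lt(a, b):
        return leq(a, b) and a != b
    return {
        (x, y)
        for x, y in product(elements, repeat=2)
        if lt(x, y) and not any(lt(x, z) and lt(z, y) for z in elements)
    }

# Divisibility order on {1, 2, 3, 4, 6, 12}: y covers x exactly when
# y / x is prime.
divisors = [1, 2, 3, 4, 6, 12]
print(sorted(covers(divisors, lambda a, b: b % a == 0)))
# → [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

The output is exactly the edge set of the Hasse diagram of this poset, illustrating that for finite posets the covering relation is the transitive reduction of the order.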
References
Binary relations
Order theory | Covering relation | Mathematics | 407 |
906,722 | https://en.wikipedia.org/wiki/Bare-metal%20restore | Bare-metal restore is a technique in the field of data recovery and restoration where the backed up data is available in a form that allows one to restore a computer system from "bare metal", i.e. without any requirements as to previously installed software or operating system.
Typically, the backed up data includes the necessary operating system, applications and data components to rebuild or restore the backed up system to an entirely separate piece of hardware. In some configurations, the hardware receiving the restore needs to have an identical configuration to the hardware that was the source of the backup, although virtualization techniques and careful planning can enable a bare-metal restore to a hardware configuration different from the original.
Disk imaging applications enable bare-metal restores by storing copies (images) of the entire contents of hard disks to networked or other external storage, and then writing those images to other physical disks. The disk image application itself can include an entire operating system, bootable from a live CD or network file server, which contains all the required application code to create and restore the disk images.
Examples of software used for bare-metal recovery
The dd utility on a Linux boot CD can be used to copy file systems between disk images and disk partitions to effect a bare-metal backup and recovery. These disk images can then be written to a new partition of the same type and of equal or larger size, or used with a variety of virtualization technologies, as they often represent a more accessible but less efficient representation of the data on the original partition.
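As a rough sketch of the dd workflow, the session below images a "disk" and restores it. The /tmp paths are illustrative, and a small file stands in for a block device so the example is safe to run; in real use the input would be a device node such as /dev/sda.

```shell
# File-backed stand-in for a block device (real use: /dev/sda or similar).
dd if=/dev/urandom of=/tmp/fake_disk.bin bs=1M count=4 status=none

# Back up: copy the whole "disk" bit-for-bit into an image file.
dd if=/tmp/fake_disk.bin of=/tmp/disk.img bs=4M status=none

# Bare-metal restore: write the image onto a fresh "disk".
dd if=/tmp/disk.img of=/tmp/restored_disk.bin bs=4M status=none

# Verify the restore is byte-identical to the original.
cmp -s /tmp/fake_disk.bin /tmp/restored_disk.bin && echo "restore verified"
```

Because the image is an exact byte copy, partition tables, boot loaders and file systems are all restored together, which is what makes the target bootable from bare metal.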
The IBM VM/370 operating system provides a command named "ddr", for disk dump and restore. It performs a bit-by-bit backup of a hard drive to a specified medium, typically tape, though many choices exist.
Microsoft introduced a new backup utility (Wbadmin) into the Windows Server 2008 family of operating systems in 2008, which has built-in support for bare-metal recovery. Users of this software can also recover their system to a Hyper-V virtual machine.
Microsoft updated the Windows Recovery Environment in the Windows 8 family of operating systems to provide built-in support for bare-metal recovery.
Microsoft Windows Server 2012 (R2) offers built-in bare-metal recovery.
Comparison with other data backup and restoration techniques
Bare-metal restore differs from local disk image restore where a copy of the disk image, and the restoration software, are stored on the computer that is backed up.
Bare-metal restore differs from simple data backups, in which application data, but neither the applications nor the operating system, is backed up or restored as a unit.
See also
Comparison of disk cloning software
References
Backup
Backup software | Bare-metal restore | Engineering | 540 |
4,086,864 | https://en.wikipedia.org/wiki/NGC%204567%20and%20NGC%204568 | NGC 4567 and NGC 4568 (nicknamed the Butterfly Galaxies or Siamese Twins) are a set of unbarred spiral galaxies about 60 million light-years away in the constellation Virgo. They were both discovered by William Herschel in 1784. They are part of the Virgo Cluster of galaxies.
These galaxies are in the process of colliding and merging with each other, as studies of their distributions of neutral and molecular hydrogen show, with the highest star-formation activity in the part where they overlap. However, the system is still in an early phase of interaction. In about 500 million years the galaxies will coalesce into a single elliptical galaxy.
Supernovae
Four supernovae have been observed in the Butterfly Galaxies:
SN 1990B (type Ib, mag. 16) was discovered by Saul Perlmutter and Carlton Pennypacker on 20 January 1990.
SN 2004cc (type Ic, mag. 17.5) was discovered by the Lick Observatory Supernova Search (LOSS) on 10 June 2004.
SN 2020fqv (type IIb, mag. 19) was discovered by the Automatic Learning for the Rapid Classification of Events (ALeRCE) on 31 March 2020.
SN 2023ijd (type II, mag. 16.8) was discovered by ASAS-SN on 14 May 2023.
Naming controversy
The two galaxies were nicknamed "Siamese Twins" because they appear to be connected. On August 5, 2020, NASA announced that they would not use that nickname in an effort to avoid systemic discrimination in their terminology.
See also
Antennae Galaxies
Eyes Galaxies
Notes
References
External links
Kopernik Space Images, Spiral Galaxies NGC 4568 and NGC 4567 aka "The Siamese Twins" : Supernova 2004cc, George Normandin (29 June 2004)
Skyhound, The Siamese Twins
SIMBAD, VCC 1673 : NGC 4567 -- Galaxy in Pair of Galaxies
SIMBAD, VCC 1676 : NGC 4568 -- Galaxy in Pair of Galaxies
NED, VV 219
NED, NGC 4567
NED, NGC 4568
Virgo (constellation)
Virgo Cluster
4567
42064
Interacting galaxies
Unbarred spiral galaxies
Overlapping galaxies | NGC 4567 and NGC 4568 | Astronomy | 456 |
599,009 | https://en.wikipedia.org/wiki/Brass%20knuckles | Brass knuckles (also referred to as brass knucks, knuckledusters, iron fist and paperweight, among other names) are a melee weapon used primarily in hand-to-hand combat. They are fitted and designed to be worn around the knuckles of the human hand. Despite their name, they are often made from other metals, plastics or carbon fibers and not necessarily brass.
Designed to preserve and concentrate a punch's force by directing it toward a harder and smaller contact area, they result in increased tissue disruption, including an increased likelihood of fracturing the intended target's bones on impact. The extended and rounded palm grip also spreads the counter-force across the attacker's palm, which would otherwise have been absorbed primarily by the attacker's fingers. This reduces the likelihood of damage to the attacker's fingers.
The weapon has been controversial for its easy concealability and is illegal to own and use in a number of countries.
History and variations
Cast iron, brass, lead, and wood knuckles were made in the United States during the American Civil War (1861–1865). Soldiers would often buy cast iron or brass knuckles. If they could not buy them, they would carve their own from wood, or cast them at camp by melting lead bullets and using a mold in the dirt.
Some brass knuckles have rounded rings, which increase the impact of blows from moderate to severe damage. Other instruments (not generally considered to be "brass knuckles" or "metal knuckles" per se) may have spikes, sharp points and cutting edges. These devices come in many variations and are called by a variety of names, including "knuckle knives."
By the late 19th century, knuckledusters were incorporated into various kinds of pistols such as the Apache revolver used by criminals in France in the late 19th to early 20th centuries. During World War I the US Army issued two different knuckle knives, the US model 1917 and US model 1918 Mark I trench knives. Knuckles and knuckle knives were also being made in England at the time and purchased privately by British soldiers. It was advised not to polish brass knuckles as allowing the brass to darken would act as camouflage on the battlefield.
By World War II, knuckles and knuckle knives were quite popular with both American and British soldiers. The Model 1918 trench knives were reissued to American paratroopers. A notable knuckle knife still in use is the Cuchillo de Paracaidista, issued to Argentinian paratroopers. Current-issue models have an emergency blade in the crossguard.
Legality and distribution
Brass knuckles are illegal in several countries, including: Hong Kong, Austria, Belgium, Canada, Denmark, Bosnia, Croatia, Estonia, Cyprus, Finland, France, Germany, Greece, Hungary, Israel, Ireland, Malaysia, the Netherlands, Norway, Poland, Portugal, Russia, Spain, Turkey, Sweden, Singapore, Taiwan, Ukraine, the United Arab Emirates and the United Kingdom.
Import of brass knuckles into Australia is illegal unless a government permit is obtained; permits are available for only limited purposes, such as police and government use, or use in film productions. They are prohibited weapons in the state of New South Wales.
In Brazil, brass knuckles are legal and freely sold. They are called , which means 'English punch', or , which means 'puncher'.
In Canada, brass knuckles (Canadian French , which literally means 'American fist'), or any similar devices made of metal, are listed as prohibited weapons; possession of such weapon is a criminal offence under the Criminal Code. Plastic knuckles have been determined to be legal in Canada.
In France, brass knuckles are illegal. They can be bought as a "collectable" (provided one is over 18), but it is forbidden to carry or use one, whatever the circumstance, including self-defense. The French term is , which literally means 'American punch'.
In Russia, brass knuckles were illegal to purchase or own during Imperial times and are still forbidden according to Article 6 of the 1996 Federal Law on Weapons. They are called (from French , literally 'head breaker').
In Serbia, brass knuckles are legal to purchase and own (for people over 16 years old) but are not legal to carry in public. They are called , literally 'boxer'.
In Taiwan, according to the Law of the Republic of China, possession and sales of brass knuckles are illegal. Under the regulation, brass knuckles are considered weapons. Without the permission of the central regulatory agency, it is against the law to manufacture, sell, transport, transfer, rent, or have them in any collection or on display.
In China, brass knuckles are completely legal under the law of the People's Republic of China. According to Article 32 of the "Public Security Administration Punishment Law of the People's Republic of China", citizens can legally own them for self-defense, but they are prohibited items in certain places. For example, brass knuckles are not allowed to be carried when travelling on the subway, buses, trains, or other public transport. In ancient China, brass knuckles were popular, and were used regularly as a concealed weapon or self-defense tool.
In the United States, brass knuckles are not prohibited at the federal level, but various state, county and city laws, and the District of Columbia, regulate or prohibit their purchase and/or possession. Brass knuckles are prohibited in 21 states. Some state laws require purchasers to be 18 or older. Most states have statutes regulating the carrying of weapons, and some specifically prohibit brass knuckles or "metal knuckles". Brass knuckles can readily be purchased online or, where legal, at flea markets, swap meets, gun shows, and at specialty stores. Some companies manufacture belt buckles or novelty paper weights that function as brass knuckles. Brass knuckles made of plastic, rather than metal, have been marketed as "undetectable by airport metal detectors". Some states that ban metal knuckles also ban plastic knuckles. For example, New York's criminal statutes list both "metal knuckles" and "plastic knuckles" as prohibited weapons, but do not define either.
See also
Bagh nakh
Cestus (boxing)
Gauntlet (glove)
Mark I trench knife
Tekkō
Vajra-mushti
Weighted-knuckle glove
References
Brass
Metallic objects | Brass knuckles | Physics | 1,290 |
12,230,646 | https://en.wikipedia.org/wiki/Comparative%20Biochemistry%20and%20Physiology%20B | Comparative Biochemistry and Physiology Part B: Biochemistry & Molecular Biology is a peer-reviewed scientific journal that covers research in biochemistry, physiology, and molecular biology.
External links
Biochemistry journals
Physiology journals
Elsevier academic journals
Academic journals established in 1971 | Comparative Biochemistry and Physiology B | Chemistry | 47 |
10,708,254 | https://en.wikipedia.org/wiki/System%20safety | The system safety concept calls for a risk management strategy based on identification, analysis of hazards and application of remedial controls using a systems-based approach. This is different from traditional safety strategies which rely on control of conditions and causes of an accident based either on the epidemiological analysis or as a result of investigation of individual past accidents. The concept of system safety is useful in demonstrating adequacy of technologies when difficulties are faced with probabilistic risk analysis. The underlying principle is one of synergy: a whole is more than sum of its parts. Systems-based approach to safety requires the application of scientific, technical and managerial skills to hazard identification, hazard analysis, and elimination, control, or management of hazards throughout the life-cycle of a system, program, project or an activity or a product. "Hazop" is one of several techniques available for identification of hazards.
System approach
A system is defined as a set or group of interacting, interrelated or interdependent elements or parts that are organized and integrated to form a collective unity or a unified whole, to achieve a common objective. This definition lays emphasis on the interactions between the parts of a system and the external environment to perform a specific task or function in the context of an operational environment. This focus on interactions is to take a view on the expected or unexpected demands (inputs) that will be placed on the system and see whether necessary and sufficient resources are available to process the demands. These might take the form of stresses. These stresses can be either expected, as part of normal operations, or unexpected, as part of unforeseen acts or conditions that produce beyond-normal (i.e., abnormal) stresses. This definition of a system, therefore, includes not only the product or the process but also the influences that the surrounding environment (including human interactions) may have on the product's or process's safety performance. Conversely, system safety also takes into account the effects of the system on its surrounding environment. Thus, a correct definition and management of interfaces becomes very important. Broader definitions of a system include the hardware, software, human systems integration, procedures and training. Therefore, system safety, as part of the systems engineering process, should systematically address all of these domains and areas in engineering and operations in a concerted fashion to prevent, eliminate and control hazards.
A “system", therefore, has implicit as well as explicit definition of boundaries to which the systematic process of hazard identification, hazard analysis and control is applied. The system can range in complexity from a crewed spacecraft to an autonomous machine tool. The system safety concept helps the system designer(s) to model, analyse, gain awareness about, understand and eliminate the hazards, and apply controls to achieve an acceptable level of safety. Ineffective decision making in safety matters is regarded as the first step in the sequence of hazardous flow of events in the "Swiss cheese" model of accident causation. Communications regarding system risk have an important role to play in correcting risk perceptions by creating, analysing and understanding information model to show what factors create and control the hazardous process. For almost any system, product, or service, the most effective means of limiting product liability and accident risks is to implement an organized system safety function, beginning in the conceptual design phase and continuing through to its development, fabrication, testing, production, use and ultimate disposal. The aim of the system safety concept is to gain assurance that a system and associated functionality behaves safely and is safe to operate. This assurance is necessary. Technological advances in the past have produced positive as well as negative effects.
Root cause analysis
A root cause analysis identifies the set of multiple causes that together might create a potential accident. Root cause techniques have been successfully borrowed from other disciplines and adapted to meet the needs of the system safety concept, most notably the tree structure from fault tree analysis, which was originally an engineering technique. The root cause analysis techniques can be categorised into two groups: a) tree techniques, and b) checklist methods. There are several root cause analysis techniques, e.g. Management Oversight and Risk Tree (MORT) analysis. Others are Event and Causal Factor Analysis (ECFA), Multilinear Events Sequencing, Sequentially Timed Events Plotting Procedure, and the Savannah River Plant Root Cause Analysis System.
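To make the tree techniques concrete, the sketch below (Python; the event names and probabilities are illustrative, not from the article) combines basic-event probabilities through the AND and OR gates of a simple fault tree, assuming the basic events are independent:

```python
def and_gate(probs):
    """AND gate: the output event occurs only if all independent inputs occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    """OR gate: the output event occurs if any independent input occurs."""
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

# Hypothetical tree: pumping fails if (power loss OR controller fault)
# AND the backup pump also fails.
top = and_gate([or_gate([0.01, 0.02]), 0.05])
print(round(top, 6))  # → 0.00149
```

Working from the leaves of the tree up to the top event in this way yields the overall probability that the multiple causes combine into a potential accident.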
Use in other fields
Safety engineering
Safety engineering describes some methods used in nuclear and other industries. Traditional safety engineering techniques focus on the consequences of human error and do not investigate the causes or reasons for its occurrence. The system safety concept can be applied to this traditional field to help identify the set of conditions for safe operation of the system. Modern, more complex systems in the military and at NASA, with computer applications and controls, require functional hazard analyses and a set of detailed specifications at all levels that address safety attributes inherent in the design. The process, following a system safety program plan, preliminary hazard analyses, functional hazard assessments and system safety assessments, is to produce evidence-based documentation that will drive safety systems that are certifiable and that will hold up in litigation. The primary focus of any system safety plan, hazard analysis and safety assessment is to implement a comprehensive process to systematically predict or identify the operational behavior of any safety-critical failure condition, fault condition or human error that could lead to a hazard and potential mishap. This is used to influence requirements, driving control strategies and safety attributes in the form of safety design features or safety devices that prevent, eliminate and control (mitigate) safety risk. In the distant past, hazards were the focus for very simple systems, but as technology and complexity advanced in the 1970s and 1980s, more modern and effective methods and techniques were invented using holistic approaches. Modern system safety is comprehensive: risk-based, requirements-based, function-based and criteria-based, with goal-structured objectives to yield engineering evidence verifying that safety functionality is deterministic and that risk is acceptable in the intended operating environment.
Software-intensive systems that command, control and monitor safety-critical functions require extensive software safety analyses to influence detailed design requirements, especially in more autonomous or robotic systems with little or no operator intervention. Systems of systems, such as a modern military aircraft or fighting ship with many integrated subsystems, sensor fusion, networking and interoperable systems, require much partnering and coordination with the multiple suppliers and vendors responsible for ensuring that safety is a vital attribute planned into the overall system.
Weapon system safety
Weapon system safety is an important application of the system safety field, due to the potentially destructive effects of a system failure or malfunction. Adopting a healthily skeptical attitude towards the system while it is at the requirements-definition and drawing-board stage, and conducting functional hazard analyses, helps in learning about the factors that create hazards and the mitigations that control them. A rigorous process is usually formally implemented as part of systems engineering to influence the design and improve the situation before errors and faults weaken the system defences and cause accidents.
Typically, weapon systems pertaining to ships, land vehicles, guided missiles and aircraft differ in their hazards and effects; some are inherent, such as explosives, and some are created by the specific operating environment (as in, for example, aircraft sustaining flight). In the military aircraft industry, safety-critical functions are identified, the overall design architecture of hardware, software and human systems integration is thoroughly analyzed, and explicit safety requirements are derived and specified during a proven hazard analysis process to establish safeguards that ensure essential functions are not lost or that they function correctly in a predictable manner. Conducting comprehensive hazard analyses and determining the credible faults, failure conditions, contributing influences and causal factors that can contribute to or cause hazards is an essential part of the systems engineering process. Explicit safety requirements must be derived, developed, implemented, and verified with objective safety evidence and ample safety documentation showing due diligence. Highly complex software-intensive systems with many interactions affecting safety-critical functions require extensive planning, special know-how, the use of analytical tools, accurate models, and modern, proven methods and techniques. Prevention of mishaps is the objective.
References
External links
Organisations
System Safety Society
Naval Safety Center
Naval Ordnance Safety & Security Activity
System safety guidance
FAA System Safety Handbook
The Victoria and Albert Museum (abbreviated V&A) in London is the world's largest museum of applied arts, decorative arts and design, housing a permanent collection of over 2.8 million objects. It was founded in 1852 and named after Queen Victoria and Prince Albert.
The V&A is in the Royal Borough of Kensington and Chelsea, in an area known as "Albertopolis" because of its association with Prince Albert, the Albert Memorial, and the major cultural institutions with which he was associated. These include the Natural History Museum, the Science Museum, the Royal Albert Hall and Imperial College London. The museum is a non-departmental public body sponsored by the Department for Digital, Culture, Media and Sport. As with other national British museums, entrance is free.
The V&A comprises 145 galleries. Its collection spans 5,000 years of art, from ancient history to the present day, from the cultures of Europe, North America, Asia and North Africa. However, the art of antiquity in most areas is not collected. The holdings of ceramics, glass, textiles, costumes, silver, ironwork, jewellery, furniture, medieval objects, sculpture, prints and printmaking, drawings and photographs are among the largest and most comprehensive in the world.
The museum owns the world's largest collection of post-classical sculpture, with the holdings of Italian Renaissance sculpture being the largest outside Italy. The departments of Asia include art from South Asia, China, Japan, Korea and the Islamic world. The East Asian collections are among the best in Europe, with particular strengths in ceramics and metalwork, while the Islamic collection is amongst the largest in the Western world. Overall, it is one of the largest museums in the world.
Since 2001 the museum has embarked on a major £150m renovation programme. The new European galleries for the 17th century and the 18th century were opened on 9 December 2015. These restored the original Aston Webb interiors and host the European collections 1600–1815. The Young V&A in east London is a branch of the museum, and a new branch in London – V&A East – is being planned. The first V&A museum outside London, V&A Dundee opened on 15 September 2018.
History
Foundation
The Victoria and Albert Museum has its origins in the Great Exhibition of 1851. Henry Cole, who was also involved in the planning, was the museum's first director. Initially the V&A was known as the Museum of Manufactures. The first opening to the general public was in May 1852 at Marlborough House. By September the collection had been transferred to Somerset House. At this stage, the collections covered both applied art and science. Several of the exhibits from the Exhibition were purchased by the museum to form the nucleus of the V&A collection.
By February 1854 discussions were underway to transfer the museum to the current site, and the museum was renamed the South Kensington Museum. In 1855 the German architect Gottfried Semper, at the request of Cole, produced a design for the museum, but it was rejected by the Board of Trade as too expensive. The current site was occupied by Brompton Park House, which was extended in 1857 to include the first refreshment rooms. The V&A was the first museum in the world to provide researchers and guests with a catering service.
The official opening by Queen Victoria was on 20 June 1857. In the following year, late-night openings were introduced, made possible by the use of gas lighting. In the words of museum director Cole, gas lighting was introduced "to ascertain practically what hours are most convenient to the working classes". To raise interest in the museum among the target audience, the museum exhibited its collections of both applied art and science. The museum aimed to provide educational resources and thus to boost productive industry.
In these early years the practical use of the collection was very much emphasised, as opposed to the "High Art" of the National Gallery and the scholarship of the British Museum. George Wallis (1811–1891), the first Keeper of the Fine Art Collection, passionately promoted the idea of wide art education through the museum collections. This led to the transfer to the museum of the School of Design that had been founded in 1837 at Somerset House; after the transfer, it was referred to as the Art School or Art Training School, later to become the Royal College of Art, which finally achieved full independence in 1949. From the 1860s to the 1880s the scientific collections were moved from the main museum site to various improvised galleries to the west of Exhibition Road. In 1893 the "Science Museum" effectively came into existence when a separate director was appointed.
Queen Victoria returned to lay the foundation stone of the Aston Webb building (to the left of the main entrance) on 17 May 1899. It was during this ceremony that the change of name from 'South Kensington Museum' to 'Victoria and Albert Museum' was made public. Queen Victoria's address during the ceremony, as recorded in The London Gazette, ended: "I trust that it will remain for ages a Monument of discerning Liberality and a Source of Refinement and Progress."
The exhibition which the museum organised to celebrate the centennial of the 1899 renaming, A Grand Design, first toured in North America from 1997 (Baltimore Museum of Art, Museum of Fine Arts, Boston, Royal Ontario Museum, Toronto, Museum of Fine Arts, Houston and the Fine Arts Museums of San Francisco), returning to London in 1999. To accompany and support the exhibition, the museum published a book, Grand Design, which it has made available for reading online on its website.
1900–1950
The opening ceremony for the Aston Webb building, by King Edward VII and Queen Alexandra, took place on 26 June 1909. In 1914 construction commenced on the Science Museum, signalling the final split of the science and art collections.
In 1939 on the outbreak of the Second World War, most of the collection was sent to a quarry in Wiltshire, to Montacute House in Somerset, or to a tunnel near Aldwych tube station, with larger objects remaining in situ, sand-bagged and bricked in. Between 1941 and 1944 some galleries were used as a school for children evacuated from Gibraltar. The South Court became a canteen, first for the Royal Air Force and later for Bomb Damage Repair Squads.
Before the return of the collections after the war, the Britain Can Make It exhibition was held between September and November 1946, attracting nearly a million-and-a-half visitors. This was organised by the Council of Industrial Design, established by the British government in 1944 "to promote by all practicable means the improvement of design in the products of British industry". The success of this exhibition led to the planning of the Festival of Britain to be held in 1951. By 1948 most of the collections had been returned to the museum.
Since 1950
In July 1973 as part of its outreach programme to young people, the V&A became the first museum in Britain to present a rock concert. The V&A presented a combined concert/lecture by the British progressive folk-rock band Gryphon, who explored the lineage of medieval music and instrumentation and related how those contributed to contemporary music 500 years later. This innovative approach to bringing young people to museums was a hallmark of the directorship of Sir Roy Strong and was subsequently emulated by some other British museums.
In the 1980s Strong renamed the museum as "The Victoria and Albert Museum, the National Museum of Art and Design". Strong's successor Elizabeth Esteve-Coll oversaw a turbulent period for the institution in which the museum's curatorial departments were re-structured, leading to public criticism from some staff. Esteve-Coll's attempts to make the V&A more accessible included a criticised marketing campaign emphasising the café over the collection.
In 2001 the museum embarked on a major £150m renovation programme, called the "FuturePlan". The plan involves redesigning all the galleries and public facilities in the museum that have yet to be remodelled. This is to ensure that the exhibits are better displayed, more information is available, access for visitors is improved, and the museum can meet modern expectations for museum facilities. A planned Spiral building was abandoned; in its place a new Exhibition Road Quarter designed by Amanda Levete's AL_A was created. It features a new entrance on Exhibition Road, a porcelain-tiled courtyard (inaugurated in 2017 as the Sackler Courtyard and renamed the Exhibition Road Courtyard in 2022) and a new 1,100-square-metre underground gallery space (the Sainsbury Gallery) accessed through the Blavatnik Hall. The Exhibition Road Quarter project provided 6,400 square metres of extra space, which is the largest expansion at the museum in over 100 years. It opened on 29 June 2017.
In March 2018, it was announced that the Duchess of Cambridge would become the first royal patron of the museum. On 15 September 2018, the first V&A museum outside London, V&A Dundee, opened. The museum, built at a cost of £80.11m, is located on Dundee's waterfront, and is focused on Scottish design, furniture, textiles, fashion, architecture, engineering and digital design. Although it uses the V&A name, its operation and funding are independent of the V&A.
The museum also runs the Young V&A at Bethnal Green, which reopened on 1 July 2023; it formerly ran Apsley House and the Theatre Museum in Covent Garden. The Theatre Museum is now closed, and the V&A Theatre Collections are displayed within the South Kensington building.
Architecture
The Victorian parts of the building have a complex history, with piecemeal additions by different architects. Although the museum was founded in May 1852, it did not move to its present site until 1857. This area of London, previously known as Brompton, had been renamed 'South Kensington'. The land was occupied by Brompton Park House, which was extended, most notably by the "Brompton Boilers", starkly utilitarian iron galleries with a temporary look, which were later dismantled and used to build the V&A Museum of Childhood. The first building to be erected that still forms part of the museum was the Sheepshanks Gallery in 1857 on the eastern side of the garden. Its architect was the civil engineer Captain Francis Fowke, Royal Engineers, who was appointed by Cole. The next major expansions, by the same architect, were the Turner and Vernon galleries, built in 1858–1859 to house the eponymous collections (later transferred to the Tate Gallery) and now used as the picture galleries and tapestry gallery respectively. The North and South Courts were then built, both of which opened by June 1862. They now form the galleries for temporary exhibitions and are directly behind the Sheepshanks Gallery. On the very northern edge of the site is situated the Secretariat Wing; also built in 1862, this houses the offices and boardroom and is not open to the public.
An ambitious scheme of decoration was developed for these new areas: a series of mosaic figures depicting famous European artists of the Medieval and Renaissance period. These have now been removed to other areas of the museum. Also started were a series of frescoes by Lord Leighton: Industrial Arts as Applied to War 1878–1880 and Industrial Arts Applied to Peace, which was started but never finished. To the east of this were additional galleries, the decoration of which was the work of another designer, Owen Jones; these were the Oriental Courts (covering India, China and Japan), completed in 1863. None of this decoration survives.
Part of these galleries became the new galleries covering the 19th century, opened in December 2006. The last work by Fowke was the design for the range of buildings on the north and west sides of the garden. This includes the refreshment rooms, reinstated as the Museum Café in 2006, with the silver gallery above (at the time the ceramics gallery); the top floor has a splendid lecture theatre, although this is seldom open to the general public. The ceramic staircase in the northwest corner of this range of buildings was designed by F. W. Moody and has architectural details of moulded and coloured pottery. All the work on the north range was designed and built in 1864–69. The style adopted for this part of the museum was Italian Renaissance; much use was made of terracotta, brick and mosaic. This north façade was intended as the main entrance to the museum, with its bronze doors, designed by James Gamble, having six panels depicting Humphry Davy (chemistry); Isaac Newton (astronomy); James Watt (mechanics); Bramante (architecture); Michelangelo (sculpture); and Titian (painting). The panels thus represent the range of the museum's collections. Godfrey Sykes designed the terracotta embellishments and the mosaic in the pediment of the North Façade commemorating the Great Exhibition, the profits from which helped to fund the museum. This is flanked by terracotta statue groups by Percival Ball. This building replaced Brompton Park House, which could then be demolished to make way for the south range.
The interiors of the three refreshment rooms were assigned to different designers. The Green Dining Room (1866–68) was the work of Philip Webb and William Morris, and displays Elizabethan influences. The lower part of the walls is paneled in wood with a band of paintings depicting fruit and the occasional figure, with moulded plaster foliage on the main part of the wall and a plaster frieze around the decorated ceiling and stained-glass windows by Edward Burne-Jones. The Centre Refreshment Room (1865–77) was designed in a Renaissance style by James Gamble. The walls and even the Ionic columns in this room are covered in decorative and moulded ceramic tile, the ceiling consists of elaborate designs on enamelled metal sheets and matching stained-glass windows, and the marble fireplace was designed and sculpted by Alfred Stevens and was removed from Dorchester House prior to that building's demolition in 1929. The Grill Room (1876–81) was designed by Sir Edward Poynter; the lower part of its walls consist of blue and white tiles with various figures and foliage enclosed by wood panelling, while above there are large tiled scenes with figures depicting the four seasons and the twelve months, painted by ladies from the Art School then based in the museum. The windows are also stained glass; there is an elaborate cast-iron grill still in place.
After the death of Captain Francis Fowke of the Royal Engineers, the next architect to work at the museum was Colonel (later Major General) Henry Young Darracott Scott, also of the Royal Engineers. He designed, to the northwest of the garden, the five-storey School for Naval Architects (also known as the science schools), now the Henry Cole Wing, in 1867–72. Scott's assistant J. W. Wild designed the impressive staircase that rises the full height of the building. The steps are made from Cadeby stone, while the balustrades and columns are of Portland stone. The building is now used to jointly house the prints and architectural drawings of the V&A (prints, drawings, paintings and photographs) and the Royal Institute of British Architects (RIBA Drawings and Archives Collections), and the Sackler Centre for arts education, which opened in 2008.
Continuing the style of the earlier buildings, various designers were responsible for the decoration. The terracotta embellishments were again the work of Godfrey Sykes, although the sgraffito used to decorate the east side of the building was designed by F. W. Moody. A final embellishment was the wrought-iron gates, made as late as 1885 and designed by Starkie Gardner. These lead to a passage through the building. Scott also designed the two Cast Courts (1870–73) to the southeast of the garden (the site of the "Brompton Boilers"); these vast spaces have ceilings high enough to accommodate the plaster casts of parts of famous buildings, including Trajan's Column (in two separate pieces). The final part of the museum designed by Scott was the Art Library and what is now the sculpture gallery on the south side of the garden, built in 1877–1883. The exterior mosaic panels in the parapet were designed by Reuben Townroe, who also designed the plaster work in the library. Sir John Taylor designed the bookshelves and cases. This was the first part of the museum to have electric lighting. This completed the northern half of the site, creating a quadrangle with the garden at its centre, but left the museum without a proper façade. In 1890 the government launched a competition to design new buildings for the museum, with the architect Alfred Waterhouse as one of the judges; this would give the museum a new imposing front entrance.
Edwardian period
The main façade, built from red brick and Portland stone, stretches along Cromwell Gardens and was designed by Aston Webb after winning a competition in 1891 to extend the museum. Construction took place between 1899 and 1909. Stylistically it is a strange hybrid: although much of the detail belongs to the Renaissance, there are medieval influences at work. The main entrance, consisting of a series of shallow arches supported by slender columns and niches, with twin doors separated by a pier, is Romanesque in form but Classical in detail. Likewise, the tower above the main entrance has an openwork crown surmounted by a statue of Fame, a feature of late Gothic architecture common in Scotland, but the detail is Classical. The main windows to the galleries are also mullioned and transomed, again a Gothic feature; the top row of windows is interspersed with statues of many of the British artists whose work is displayed in the museum.
Prince Albert appears within the main arch above the twin entrances, and Queen Victoria above the frame around the arches and entrance, sculpted by Alfred Drury. These façades surround four levels of galleries. Other areas designed by Webb include the Entrance Hall and Rotunda, the East and West Halls, the areas occupied by the shop and Asian Galleries, and the Costume Gallery. The interior makes much use of marble in the entrance hall and flanking staircases, although the galleries as originally designed were white with restrained classical detail and mouldings, very much in contrast to the elaborate decoration of the Victorian galleries, although much of this decoration was removed in the early 20th century.
Post-war period
The museum survived the Second World War with only minor bomb damage. The worst loss was the Victorian stained glass on the Ceramics Staircase, which was blown in when bombs fell nearby; pockmarks still visible on the façade of the museum were caused by fragments from the bombs.
In the immediate post-war years, there was little money available for anything other than essential repairs. The 1950s and early 1960s saw little in the way of building work; the first major work was the creation of new storage space for books in the Art Library in 1966 and 1967. This involved flooring over Aston Webb's main hall to form the book stacks, with a new medieval gallery on the ground floor (now the shop, opened in 2006). Then the lower ground-floor galleries in the south-west part of the museum were redesigned, opening in 1978 to form the new galleries covering Continental art 1600–1800 (late Renaissance, Baroque through Rococo and neo-Classical). In 1974 the museum had acquired what is now the Henry Cole Wing from the Royal College of Science. To adapt the building as galleries, all the Victorian interiors except for the staircase were recast during the remodelling. To link this to the rest of the museum, a new entrance building was constructed on the site of the former boiler house, the intended site of the Spiral, between 1978 and 1982. This building is of concrete and very functional, the only embellishment being the iron gates by Christopher Hay and Douglas Coyne of the Royal College of Art. These are set in the columned screen wall designed by Aston Webb that forms the façade.
Recent years
A few galleries were redesigned in the 1990s, including the Indian, Japanese, Chinese, ironwork, main glass, and main silverware galleries; the last was further enhanced in 2002, when some of the Victorian decoration was recreated. This included two of the ten columns having their ceramic decoration replaced and the elaborate painted designs restored on the ceiling. As part of the 2006 renovation the mosaic floors in the sculpture gallery were restored—most of the Victorian floors were covered in linoleum after the Second World War. After the success of the British Galleries, opened in 2001, it was decided to embark on a major redesign of all the galleries in the museum; this is known as "FuturePlan", and was created in consultation with the exhibition designers and masterplanners Metaphor. The plan is expected to take about ten years and was started in 2002. To date several galleries have been redesigned, notably, in 2002: the main Silver Gallery, Contemporary; in 2003: Photography, the main entrance, The Painting Galleries; in 2004: the tunnel to the subway leading to South Kensington tube station, new signage throughout the museum, architecture, V&A and RIBA reading rooms and stores, metalware, Members' Room, contemporary glass, and the Gilbert Bayes sculpture gallery; in 2005: portrait miniatures, prints and drawings, displays in Room 117, the garden, sacred silver and stained glass; in 2006: Central Hall Shop, Islamic Middle East, the new café, and sculpture galleries. Several designers and architects have been involved in this work. Eva Jiřičná designed the enhancements to the main entrance and rotunda, the new shop, the tunnel and the sculpture galleries; Gareth Hoskins was responsible for contemporary and architecture; Softroom for Islamic Middle East and the Members' Room; and McInnes Usher McKnight Architects (MUMA) were responsible for the new café and designed the new Medieval and Renaissance galleries, which opened in 2009.
Garden
The central garden was redesigned by Kim Wilkie and opened as the John Madejski Garden on 5 July 2005. The design is a subtle blend of the traditional and modern: the layout is formal; there is an elliptical water feature lined in stone with steps around the edge, which may be drained to use the area for receptions, gatherings or exhibition purposes. This is in front of the bronze doors leading to the refreshment rooms. A central path flanked by lawns leads to the sculpture gallery. The north, east and west sides have herbaceous borders along the museum walls, with paths in front which continue along the south façade. An American sweetgum tree is planted in each of the two corners by the north façade. The southern, eastern and western edges of the lawns have glass planters which contain orange and lemon trees in summer, replaced by bay trees in winter.
At night both the planters and the water feature may be illuminated, and the surrounding façades lit to reveal details normally in shadow. Especially noticeable are the mosaics in the loggia of the north façade. In summer a café is set up in the southwest corner. The garden is also used for temporary exhibits of sculpture; for example, a sculpture by Jeff Koons was shown in 2006. It has also played host to the museum's annual contemporary design showcase, the V&A Village Fete, since 2005.
Exhibition Road Quarter
In 2011 the V&A announced that the London-based practice AL_A had won an international competition to construct a gallery beneath a new entrance courtyard on Exhibition Road. Planning permission for the scheme was granted in 2012. It replaced a proposed extension designed by Daniel Libeskind with Cecil Balmond, which was abandoned in 2004 after failing to receive funding from the Heritage Lottery Fund.
The Exhibition Road Quarter opened in 2017, with a new entrance providing access for visitors from Exhibition Road. A new courtyard, the Sackler Courtyard, was created behind the Aston Webb Screen, a colonnade built in 1909 to hide the museum's boilers. The colonnade was kept, but the wall in its lower part was removed during construction to allow public access to the courtyard. The new 1,200-square-metre courtyard is the world's first all-porcelain courtyard, covered with 11,000 handmade porcelain tiles in fifteen different linear patterns glazed in different tones. A pavilion of Modernist design, with glass walls and an angular roof covered with 4,300 tiles, is located at the corner and contains a café. Skylights in the courtyard provide natural light for the stairwell and the exhibition space below the courtyard, created by digging 15 m into the ground. The Sainsbury Gallery's column-less space, at 1,100 square metres, is one of the largest in the country, providing space for temporary exhibitions. The gallery can be accessed through the existing Western Range building, where a new entrance to the Blavatnik Hall and the museum has been created, and visitors can descend into the gallery via stairs with lacquered tulipwood balustrades.
Collections
The collecting areas of the museum are not easy to summarise, having evolved partly through attempts to avoid too much overlap with other national museums in London. Generally, the classical world of the West and the Ancient Near East is left to the British Museum, and Western paintings to the National Gallery, though there are all sorts of exceptions: for example, painted portrait miniatures, where the V&A has the main national collection.
The Victoria and Albert Museum is split into four curatorial departments: Decorative Art and Sculpture; Performance, Furniture, Textiles and Fashion; Art, Architecture, Photography and Design; and Asia. The museum's curators care for the objects in the collection and provide the public and scholars with access to objects that are not currently on display.
The collection departments are further divided into sixteen display areas, whose combined collection numbers over 6.5 million objects; not all of these are displayed or stored at the V&A. There is a repository at Blythe House, West Kensington, as well as annex institutions managed by the V&A, and the museum also lends exhibits to other institutions. The following lists each of the collections on display and the number of objects within each collection.
The museum has 145 galleries, but given the vast extent of the collections, only a small percentage is ever on display. Many acquisitions have been made possible only with the assistance of the National Art Collections Fund.
Architecture
In 2004, the V&A, alongside the Royal Institute of British Architects, opened the first permanent gallery in the UK covering the history of architecture, with displays using models, photographs, elements from buildings and original drawings. With the opening of the new gallery, the RIBA Drawings and Archives Collection was transferred to the museum, joining the already extensive collection held by the V&A. Together, with over 600,000 drawings, over 750,000 papers and items of paraphernalia, and over 700,000 photographs from around the world, they form the world's most comprehensive architectural resource.
Not only are all the major British architects of the last four hundred years represented, but drawings by many European (especially Italian) and American architects are also held in the collection. The RIBA's holdings of over 330 drawings by Andrea Palladio are the largest in the world; other Europeans well represented are Jacques Gentilhatre and Antonio Visentini. British architects whose drawings, and in some cases models of their buildings, are in the collection include: Inigo Jones, Sir Christopher Wren, Sir John Vanbrugh, Nicholas Hawksmoor, William Kent, James Gibbs, Robert Adam, Sir William Chambers, James Wyatt, Henry Holland, John Nash, Sir John Soane, Sir Charles Barry, Charles Robert Cockerell, Augustus Welby Northmore Pugin, Sir George Gilbert Scott, John Loughborough Pearson, George Edmund Street, Richard Norman Shaw, Alfred Waterhouse, Sir Edwin Lutyens, Charles Rennie Mackintosh, Charles Holden, Frank Hoar, Lord Richard Rogers, Lord Norman Foster, Sir Nicholas Grimshaw, Zaha Hadid and Alick Horsnell.
As well as period rooms, the collection includes parts of buildings: for example, the two top storeys of the façade of Sir Paul Pindar's house, dated 1600, from Bishopsgate, with elaborately carved woodwork and leaded windows, a rare survivor of the Great Fire of London. There is also a brick portal from a London house of the English Restoration period and a fireplace from the gallery of Northumberland House. European examples include a dormer window dated 1523–1535 from the château of Montal. There are several examples from Italian Renaissance buildings, including portals, fireplaces, balconies and a stone buffet that used to have a built-in fountain. The main architecture gallery has a series of pillars from various buildings and different periods, for example a column from the Alhambra. Examples covering Asia are in the galleries concerned with those countries, as well as models and photographs in the main architecture gallery.
In June 2022, the RIBA announced it would be terminating its 20-year partnership with the V&A in 2027, "by mutual agreement", ending the permanent architecture gallery at the museum. Artefacts will be transferred back to the RIBA's existing collections, with some rehoused at the institute's headquarters at 66 Portland Place, set to become a new House of Architecture following a £20 million refurbishment.
Asia
The V&A's collection of art from Asia numbers more than 160,000 objects, one of the largest in existence. It has one of the world's most comprehensive and important collections of Chinese art, whilst the collection of South Asian art is the most important in the West. The museum's coverage includes pieces from South and South-East Asia, the Himalayan kingdoms, China, the Far East and the Islamic world.
Islamic art
The V&A holds over 19,000 objects from the Islamic world, ranging from the early Islamic period (the 7th century) to the early 20th century. The Jameel Gallery of Islamic Art, opened in 2006, houses a representative display of 400 objects, with the highlight being the Ardabil Carpet, the centrepiece of the gallery. The displays in this gallery cover objects from Spain, North Africa, the Middle East, Central Asia and Afghanistan. A masterpiece of Islamic art is a 10th-century rock crystal ewer. Many examples of Qur'āns with exquisite calligraphy dating from various periods are on display. A 15th-century minbar from a Cairo mosque, with ivory forming complex geometrical patterns inlaid in wood, is one of the larger objects on display. Extensive examples of ceramics, especially Iznik pottery, glasswork (including 14th-century mosque lamps) and metalwork are on display. The collection of Middle Eastern and Persian rugs and carpets is amongst the finest in the world; many were part of the Salting Bequest of 1909. Examples of tile work from various buildings, including a fireplace dated 1731 from Istanbul made of intricately decorated blue and white tiles, and turquoise tiles from the exterior of buildings from Samarkand, are also displayed.
South Asia
The museum's collections of South and South-East Asian art are the most comprehensive and important in the West. Comprising nearly 60,000 objects, including about 10,000 textiles and 6,000 paintings, the range of the collection is immense. The Jawaharlal Nehru gallery of Indian art, opened in 1991, contains art from about 500 BC to the 19th century. There is an extensive collection of sculpture, mainly of a religious nature: Hindu, Buddhist and Jain. The gallery is richly endowed with the art of the Mughal Empire and the Maratha Empire, including fine portraits of the emperors, other paintings and drawings, jade wine cups and gold spoons inset with emeralds, diamonds and rubies; also from this period are parts of buildings such as a jaali and pillars. India was a large producer of textiles, and works ranging from dyed cotton chintz and muslin to rich embroidery using gold and silver thread, coloured sequins and beads are displayed, as are carpets from Agra and Lahore. Examples of clothing are also displayed. In 1879–80, the collections of the defunct East India Company's India Museum were transferred to the V&A and the British Museum. Items in the collection include Tipu's Tiger, an 18th-century automaton created for Tipu Sultan, the ruler of the Kingdom of Mysore. The personal wine cup of Mughal Emperor Shah Jahan is also on display.
East Asia
The Far Eastern collections include more than 70,000 works of art from the countries of East Asia: China, Japan and Korea. The T. T. Tsui Gallery of Chinese art opened in 1991, displaying a representative collection of the V&A's approximately 16,000 objects from China, dating from the 4th millennium BC to the present day. Though the majority of artworks on display date from the Ming and Qing dynasties, there are objects dating from the Tang dynasty and earlier periods, among them a metre-high bronze head of the Buddha dated to about 750 AD, and one of the oldest works, a 2000-year-old jade horse head from a burial. Other sculptures include life-size tomb guardians. Classic examples of Chinese decorative arts on display include Chinese lacquer, silk, Chinese porcelain, jade and cloisonné enamel. Two large ancestor portraits of a husband and wife, painted in watercolour on silk, date from the 18th century. There is a unique Chinese lacquerware table, made in the imperial workshops during the reign of the Xuande Emperor in the Ming dynasty. Examples of clothing are also displayed. One of the largest objects is a bed from the mid-17th century. The work of contemporary Chinese designers is also displayed.
The Toshiba gallery of Japanese art opened in December 1986. The majority of exhibits date from 1550 to 1900, but one of the oldest pieces displayed is the 13th-century sculpture of Amida Nyorai. Objects on display include classic Japanese armour from the mid-19th century, steel sword blades (katana), inrō, lacquerware (including the Mazarin Chest of c. 1640, one of the finest surviving pieces from Kyoto), porcelain including Imari, netsuke, woodblock prints including the work of Andō Hiroshige, graphic works including printed books, as well as a few paintings, scrolls and screens, and textiles and dress including kimono. One of the finest objects displayed is Suzuki Chokichi's bronze incense burner (koro), dated 1875; standing over 2.25 metres high and 1.25 metres in diameter, it is also one of the largest examples made. The museum also holds some cloisonné pieces from the Japanese art production company Ando Cloisonné.
The smaller galleries cover Korea, the Himalayan kingdoms and South-East Asia. Korean displays include green-glazed ceramics, silk embroideries from officials' robes and gleaming boxes inlaid with mother-of-pearl made between 500 AD and 2000. Himalayan works include important early Nepalese bronze sculptures, repoussé work and embroidery. Tibetan art from the 14th to the 19th century is represented by 14th- and 15th-century religious images in wood and bronze, scroll paintings and ritual objects. Art from Thailand, Burma, Cambodia, Indonesia and Sri Lanka in gold, silver, bronze, stone, terracotta and ivory represents these rich and complex cultures; the displays span the 6th to 19th centuries. Refined Hindu and Buddhist sculptures reflect the influence of India; items on show include betel-nut cutters, ivory combs and bronze palanquin hooks.
Books
The museum houses the National Art Library, a public library containing over 750,000 books, photographs, drawings, paintings, and prints. It is one of the world's largest libraries dedicated to the study of fine and decorative arts. The library covers all areas and periods of the museum's collections with special collections covering illuminated manuscripts, rare books and artists' letters and archives.
The library consists of three large public rooms, with around a hundred individual study desks. These are the West Room, Centre Room and Reading Room. The centre room contains 'special collection material'.
One of the great treasures in the library is the Codex Forster, one of Leonardo da Vinci's notebooks. The Codex consists of three parchment-bound manuscripts, Forster I, Forster II, and Forster III, quite small in size, dated between 1490 and 1505. Their contents include a large collection of sketches and references to the equestrian sculpture commissioned by the Duke of Milan, Ludovico Sforza, to commemorate his father Francesco Sforza. These were bequeathed with over 18,000 books to the museum in 1876 by John Forster. The Reverend Alexander Dyce was another benefactor of the library, leaving over 14,000 books to the museum in 1869. Amongst the books he collected are early editions in Greek and Latin of the poets and playwrights Aeschylus, Aristotle, Homer, Livy, Ovid, Pindar, Sophocles and Virgil. More recent authors include Giovanni Boccaccio, Dante, Racine, Rabelais and Molière.
Writers whose papers are in the library are as diverse as Charles Dickens (including the manuscripts of most of his novels) and Beatrix Potter (the greatest collection of her original manuscripts in the world). Illuminated manuscripts in the library dating from the 12th to 16th centuries include: a leaf from the Eadwine Psalter, Canterbury; a Pocket Book of Hours, Reims; a Missal from the Royal Abbey of Saint Denis, Paris; the Simon Marmion Book of Hours, Bruges; a 1524 charter illuminated by Lucas Horenbout, London; and the Armagnac manuscript of the trial and rehabilitation of Joan of Arc, Rouen. The Victorian period is represented by William Morris.
The National Art Library's collection catalogue was formerly kept in several formats, including printed exhibition catalogues and card catalogues. A computer cataloguing system called MODES was used from the 1980s to the 1990s, but those electronic files were not available to library users. The library's archival material is catalogued using Encoded Archival Description (EAD). The museum now has a computerised catalogue, but many items in the collection, unless newly accessioned, do not yet appear in it. The "Search the Collections" feature on the Victoria and Albert Museum website likewise does not list everything.
The National Art Library also includes a collection of comics and comic art. Notable parts of the collection include the Krazy Kat Arkive, comprising 4,200 comics, and the Rakoff Collection, comprising 17,000 pieces collected by the writer and editor Ian Rakoff.
The Victoria and Albert Museum's Word and Image Department was under the same pressure felt by archives around the world to digitise its collection, and a large-scale digitisation project began in the department in 2007. It was entitled the Factory Project, in reference to Andy Warhol's Factory and the ambition of digitising the collection in full. The first step was to photograph the collection with digital cameras; the department held older photographs, but these were black and white and in varying condition, so new photographs were taken. These images are made accessible to researchers via the Victoria and Albert Museum website. 15,000 images were taken during the first year of the project, including drawings, watercolours, computer-generated art, photographs, posters and woodcuts. The second step was to catalogue everything, and the third to audit the collection: every item photographed and catalogued had to be checked to ensure that everything listed as being in the collection was physically found. The fourth step was conservation, performing basic preventive procedures on items in the department. The main impetus behind the project was to list more of the collections in the computer databases behind the website's "Search the Collections" feature.
British galleries
These fifteen galleries—which opened in November 2001—contain around 4,000 objects. The displays in these galleries are based around three major themes: "Style", "Who Led Taste" and "What Was New". The period covered is 1500 to 1900, with the galleries divided into three major subdivisions:
Tudor and Stuart Britain, 1500–1714, covering the Renaissance, Elizabethan, Jacobean, Restoration and Baroque styles
Georgian Britain, 1714–1837, covering Palladianism, Rococo, Chinoiserie, Neoclassicism, the Regency, the influence of Chinese, Indian and Egyptian styles, and the early Gothic Revival
Victorian Britain, 1837–1901, covering the later phases of the Gothic Revival, French influences, Classical and Renaissance revivals, Aestheticism, Japanese style, the continuing influence of China, India, and the Islamic world, the Arts and Crafts movement and the Scottish School.
Not only is the work of British artists and craftspeople on display, but also work produced by European artists that was purchased or commissioned by British patrons, as well as imports from Asia, including porcelain, cloth and wallpaper. Designers and artists whose work is on display in the galleries include Gian Lorenzo Bernini, Grinling Gibbons, Daniel Marot, Louis Laguerre, Antonio Verrio, Sir James Thornhill, William Kent, Robert Adam, Josiah Wedgwood, Matthew Boulton, Canova, Thomas Chippendale, Pugin and William Morris. Patrons who have influenced taste are also represented by works of art from their collections; these include Horace Walpole (a major influence on the Gothic Revival), William Thomas Beckford and Thomas Hope.
The galleries showcase a number of complete and partial reconstructions of period rooms, from demolished buildings, including:
The parlour from 2 Henrietta Street, London, dated 1727–1728, designed by James Gibbs
The Norfolk House Music Room, St James Square, London, dated 1756, designed by Matthew Brettingham and Giovanni Battista Borra
A section of a wall from the Glass Drawing-Room of Northumberland House, dated 1773–1775, designed by Robert Adam
Some of the more notable works displayed in the galleries include:
Pietro Torrigiani's coloured terracotta bust of Henry VII, dated 1509–1511
Henry VIII's writing desk, dated 1525, made from walnut and oak, lined with leather and painted and gilded with the king's coat of arms
A spinet dated 1570–1580, made for Elizabeth I
The Great Bed of Ware, dated 1590–1600, a large, elaborately carved four-poster bed with marquetry headboard
Gianlorenzo Bernini's bust of Thomas Baker, from the 1630s
17th-century tapestries from the Sheldon and Mortlake Tapestry Works
The wood relief of The Stoning of St Stephen, by Grinling Gibbons
The Macclesfield Wine Set, dated 1719–1720, made by Anthony Nelme, the only complete set known to survive.
The life-size sculpture of George Frederick Handel, dated 1738, by Louis-François Roubiliac
Furniture by Thomas Chippendale and Robert Adam
The sculpture of Bashaw, dated 1831–1834, by Matthew Cotes Wyatt
Aesthetic and Arts & Crafts furniture by Edward William Godwin and Charles Rennie Mackintosh; and carpets and interior textiles by William Morris.
The galleries also link design to wider trends in British culture. For instance, design in the Tudor period was influenced by the spread of printed books and the work of European artists and craftsmen employed in Britain. In the Stuart period, increasing trade, especially with Asia, enabled wider access to luxuries like carpets, lacquered furniture, silks and porcelain. In the Georgian age there was an increasing emphasis on entertainment and leisure. For example, the increase in tea drinking led to the production of tea paraphernalia such as china and caddies. European styles seen on the Grand Tour also influenced taste. As the Industrial Revolution took hold, the growth of mass production produced entrepreneurs such as Josiah Wedgwood, Matthew Boulton and Eleanor Coade. In the Victorian era new technology and machinery had a significant effect on manufacturing, and for the first time since the Reformation, the Anglican and Roman Catholic Churches had a major effect on art and design, as in the Gothic Revival. There is a large display on the Great Exhibition which, among other things, led to the founding of the V&A. In the later 19th century, the increasing backlash against industrialisation, led by John Ruskin, contributed to the Arts and Crafts movement.
Cast courts
One of the most dramatic parts of the museum is the Cast Courts, comprising two large, skylighted rooms two storeys high housing hundreds of plaster casts of sculptures, friezes and tombs. One of these is dominated by a full-scale replica of Trajan's Column, cut in half to fit under the ceiling. The other includes reproductions of various works of Italian Renaissance sculpture and architecture, including a full-size replica of Michelangelo's David. Replicas of two earlier Davids by Donatello and Verrocchio, are also included, although for conservation reasons the Verrocchio replica is displayed in a glass case.
The two courts are divided by corridors on both storeys, and the partitions that used to line the upper corridor (the Gilbert Bayes sculpture gallery) were removed in 2004 to allow the courts to be viewed from above.
Ceramics and glass
This is the largest and most comprehensive ceramics and glass collection in the world, with over 80,000 objects; every populated continent is represented. Apart from the many pieces in the Primary Galleries on the ground floor, much of the top floor is devoted to galleries of ceramics of all periods, which include display cases with a representative selection as well as massed "visible storage" displays of the reserve collection.
Well represented in the collection is Meissen porcelain, from the first factory in Europe to discover the Chinese method of making porcelain. Among the finest examples are the Meissen Vulture from 1731 and the Möllendorff Dinner Service, designed in 1762 by Frederick II the Great. Ceramics from the Manufacture nationale de Sèvres are extensive, especially from the 18th and 19th centuries. The collection of 18th-century British porcelain is the largest and finest in the world. Examples from every factory are represented, the collections of Chelsea porcelain and Worcester porcelain being especially fine. All the major 19th-century British factories are also represented. A major boost to the collections was the Salting Bequest made in 1909, which enriched the museum's stock of Chinese and Japanese ceramics. This bequest forms part of the finest collection of East Asian pottery and porcelain in the world, including Kakiemon ware.
Many famous potters, such as Josiah Wedgwood, William De Morgan and Bernard Leach as well as Mintons & Royal Doulton are represented in the collection. There is an extensive collection of Delftware produced in both Britain and Holland, which includes a circa 1695 flower pyramid over a metre in height. Bernard Palissy has several examples of his work in the collection including dishes, jugs and candlesticks. The largest objects in the collection are a series of elaborately ornamented ceramic stoves from the 16th and 17th centuries, made in Germany and Switzerland. There is an unrivalled collection of Italian maiolica and lustreware from Spain. The collection of Iznik pottery from Turkey is the largest in the world.
The glass collection covers 4,000 years of glassmaking, and has over 6,000 pieces from Africa, Britain, Europe, America and Asia. The earliest glassware on display comes from Ancient Egypt, and the collection continues through the ancient Roman, medieval and Renaissance periods, covering areas such as Venetian glass and Bohemian glass, to more recent times, including Art Nouveau glass by Louis Comfort Tiffany and Émile Gallé; the Art Deco style is represented by several examples by René Lalique. There are many examples of crystal chandeliers, both English, displayed in the British Galleries, and foreign – for example, a Venetian one attributed to Giuseppe Briati and dated to about 1750. The stained glass collection is possibly the finest in the world, covering the medieval to modern periods, and covering Europe as well as Britain. Several examples of English 16th-century heraldic glass are displayed in the British Galleries. Many well-known designers of stained glass are represented in the collection, including, from the 19th century, Dante Gabriel Rossetti, Edward Burne-Jones and William Morris. There is also an example of Frank Lloyd Wright's work in the collection. 20th-century designers include Harry Clarke, John Piper, Patrick Reyntiens, Veronica Whall and Brian Clarke.
The main gallery was redesigned in 1994; the glass balustrade on the staircase and mezzanine is the work of Danny Lane. The gallery covering contemporary glass opened in 2004, and the sacred silver and stained-glass gallery in 2005. In the latter gallery, stained glass is displayed alongside silverware from the 12th century to the present. Some of the most outstanding stained glass, dated 1243–1248, comes from the Sainte-Chapelle and is displayed along with other examples in the new Medieval & Renaissance galleries. The important 13th-century glass beaker known as the Luck of Edenhall is also displayed in these galleries. Examples of British stained glass are displayed in the British Galleries. One of the most spectacular works in the collection is the chandelier by Dale Chihuly in the rotunda at the museum's main entrance.
Contemporary
These galleries are dedicated to temporary exhibits showcasing both trends from recent decades and the latest in design and fashion.
Prints and drawings
Prints and drawings from the over 750,000 works in the collection can be seen on request at the print room, the "Prints and Drawings Study Room"; booking an appointment is necessary. The collection of drawings includes over 10,000 British and 2,000 old master works, including works by: Dürer, Giovanni Benedetto Castiglione, Bernardo Buontalenti, Rembrandt, Antonio Verrio, Paul Sandby, John Russell, Angelica Kauffman, John Flaxman, Hugh Douglas Hamilton, Thomas Rowlandson, William Kilburn, Thomas Girtin, Jean-Auguste-Dominique Ingres, David Wilkie, John Martin, Samuel Palmer, Sir Edwin Henry Landseer, Lord Leighton, Sir Samuel Luke Fildes and Aubrey Beardsley. Modern British artists represented in the collection include: Paul Nash, Percy Wyndham Lewis, Eric Gill, Stanley Spencer, John Piper, Robert Priseman, Graham Sutherland, Lucian Freud and David Hockney.
The print collection has more than 500,000 objects, covering: posters, greetings cards, bookplates, as well as a comprehensive collection of old master prints from the Renaissance to the present, including works by Rembrandt, William Hogarth, Giovanni Battista Piranesi, Canaletto, Karl Friedrich Schinkel, Henri Matisse and Sir William Nicholson.
Fashion
The costume collection is the most comprehensive in Britain, containing over 14,000 outfits plus accessories, mainly dating from 1600 to the present. Costume sketches, design notebooks, and other works on paper are typically held by the Word and Image department. Because everyday clothing from previous eras has not generally survived, the collection is dominated by fashionable clothes made for special occasions. One of the first significant gifts of costume came in 1913, when the V&A received the Talbot Hughes collection of 1,442 costumes and items as a gift from Harrods, following its display at the nearby department store.
Some of the oldest works in the collection are medieval vestments, especially Opus Anglicanum. One of the most important pieces in the collection is the wedding suit of James II of England, which is displayed in the British Galleries.
In 1971, Cecil Beaton curated an exhibition of 1,200 20th-century high-fashion garments and accessories, including gowns worn by leading socialites such as Patricia Lopez-Willshaw, Gloria Guinness and Lee Radziwill, and actresses such as Audrey Hepburn and Ruth Ford. After the exhibition, Beaton donated most of the exhibits to the museum in the names of their former owners.
In 1999, V&A began a series of live catwalk events at the museum titled Fashion in Motion featuring pieces from historically significant fashion collections. The first show featured Alexander McQueen in June 1999. Since then, the museum has hosted recreations of various designer shows every year including Anna Sui, Tristan Webber, Elspeth Gibson, Chunghie Lee, Jean Paul Gaultier, Missoni, Gianfranco Ferré, Christian Lacroix, Kenzo and Kansai Yamamoto amongst others.
In 2002, the museum acquired the Costiff collection of 178 Vivienne Westwood costumes. Other famous designers with work in the collection include Coco Chanel, Hubert de Givenchy, Christian Dior, Cristóbal Balenciaga, Yves Saint Laurent, Guy Laroche, Irene Galitzine, Mila Schön, Valentino Garavani, Norman Norell, Norman Hartnell, Zandra Rhodes, Hardy Amies, Mary Quant, Christian Lacroix, Jean Muir and Pierre Cardin. The museum continues to acquire examples of modern fashion to add to the collection.
The V&A runs an ongoing textile and dress conservation programme. For example, in 2008, an important but heavily soiled, distorted and water-damaged 1954 Dior outfit called 'Zemire' was restored to displayable condition for the Golden Age of Couture exhibition.
The V&A Museum has a large collection of shoes, around 2,000 pairs, from different cultures around the world. The collection shows the chronological progression of shoe height, heel shape and materials, revealing that many styles we consider modern have been in and out of fashion across the centuries.
Furniture
In November 2012, the museum opened its first gallery to be exclusively dedicated to furniture. Prior to this date furniture had been exhibited as part of a greater period context, rather than in isolation to showcase its design and construction merits. Among the designers showcased in the new gallery are Ron Arad, John Henry Belter, Joe Colombo, Eileen Gray, Verner Panton, Thonet, and Frank Lloyd Wright.
The furniture collection, while covering Europe and America from the Middle Ages to the present, is predominantly British, dating between 1700 and 1900. Many of the finest examples are displayed in the British Galleries, including pieces by Chippendale, Adam, Morris, and Mackintosh. One of the oldest objects is a chair leg from Middle Egypt, dated to AD 200–395.
The Furniture and Woodwork collection also includes complete rooms, musical instruments, and clocks. Among the rooms owned by the museum are the Boudoir of Madame de Sévilly (Paris, 1781–82) by Claude Nicolas Ledoux, with painted panelling by Jean Simeon Rousseau de la Rottière; and Frank Lloyd Wright's Kaufmann Office, designed and constructed between 1934 and 1937 for the owner of a Pittsburgh department store.
The collection includes pieces by William Kent, Henry Flitcroft, Matthias Lock, James Stuart, William Chambers, John Gillow, James Wyatt, Thomas Hopper, Charles Heathcote Tatham, Pugin, William Burges, Charles Voysey, Charles Robert Ashbee, Baillie Scott, Edwin Lutyens, Edward Maufe, Wells Coates and Robin Day. The museum also hosts the national collection of wallpaper, which is looked after by the Prints, Drawings and Paintings department.
The Soulages collection of Italian and French Renaissance objects was acquired between 1859 and 1865, and includes several cassone. The John Jones Collection of French 18th-century art and furnishings was left to the museum in 1882, then valued at £250,000. One of the most important pieces in this collection is a marquetry commode by the ébéniste Jean Henri Riesener dated c1780. Other signed pieces of furniture in the collection include a bureau by Jean-François Oeben, a pair of pedestals with inlaid brass work by André Charles Boulle, a commode by Bernard Vanrisamburgh and a work-table by Martin Carlin. Other 18th-century ébénistes represented in the museum collection include Adam Weisweiler, David Roentgen, Gilles Joubert and Pierre Langlois. In 1901, Sir George Donaldson donated several pieces of art Nouveau furniture to the museum, which he had acquired the previous year at the Paris Exposition Universelle. This was criticised at the time, with the result that the museum ceased to collect contemporary pieces and did not do so again until the 1960s. In 1986 the Lady Abingdon collection of French Empire furniture was bequeathed by Mrs T. R. P. Hole.
There is a set of beautiful inlaid doors, dated 1580, from Antwerp City Hall, attributed to Hans Vredeman de Vries. One of the finest pieces of continental furniture in the collection is the Rococo Augustus Rex Bureau Cabinet of c. 1750 from Germany, with especially fine marquetry and ormolu mounts. One of the grandest pieces of 19th-century furniture is the highly elaborate French cabinet dated 1861–1867 by M. Fourdinois, made from ebony inlaid with box, lime, holly, pear, walnut and mahogany woods as well as marble, with gilded carvings. Furniture designed by Ernest Gimson, Edward William Godwin, Charles Voysey, Adolf Loos and Otto Wagner is among the late 19th-century and early 20th-century examples in the collection. Modernists whose work is in the collection include Le Corbusier, Marcel Breuer, Charles and Ray Eames, and Giò Ponti.
One of the oldest clocks in the collection is an astronomical clock of 1588 by Francis Nowe. One of the largest is James Markwick the younger's longcase clock of 1725, nearly 3 metres in height and japanned. Other clockmakers with work in the collection include: Thomas Tompion, Benjamin Lewis Vulliamy, John Ellicott and William Carpenter.
Jewellery
The museum's jewellery collection, containing over 6,000 pieces, is one of the finest and most comprehensive collections of jewellery in the world and includes works dating from Ancient Egypt to the present day, as well as jewellery designs on paper. The museum owns pieces by renowned jewellers Cartier, Jean Schlumberger, Peter Carl Fabergé, Andrew Grima, Hemmerle and Lalique. Other items in the collection include diamond dress ornaments made for Catherine the Great, bracelet clasps once belonging to Marie Antoinette, and the Beauharnais emerald necklace presented by Napoleon to his adopted daughter Hortense de Beauharnais in 1806. The museum also collects international modern jewellery by designers such as Gijs Bakker, Onno Boekhoudt, Peter Chang, Gerda Flockinger, Lucy Sarneel, Dorothea Prühl and Wendy Ramshaw, and African and Asian traditional jewellery. Major bequests include Reverend Chauncy Hare Townshend's collection of 154 gems bequeathed in 1869, Lady Cory's 1951 gift of major diamond jewellery from the 18th and 19th centuries, and jewellery scholar Dame Joan Evans' 1977 gift of more than 800 jewels dating from the Middle Ages to the early 19th century. A new jewellery gallery, funded by William and Judith Bollinger, opened on 24 May 2008.
Metalwork
This collection of more than 45,000 objects covers decorative ironwork, both wrought and cast, bronze, silverware, arms and armour, pewter, brassware and enamels (including many examples of Limoges enamel). The main ironwork gallery was redesigned in 1995.
There are over 10,000 objects made from silver or gold in the collection; the display (about 15 percent of the collection) is divided into secular and sacred, covering Christian (Roman Catholic, Anglican and Greek Orthodox) and Jewish liturgical vessels and other works. The main silver gallery is divided into these areas: British silver pre-1800; British silver 1800 to 1900; modernist to contemporary silver; and European silver. The collection includes the earliest known piece of English silver with a dated hallmark, a silver-gilt beaker dated 1496–1497.
Silversmiths whose work is represented in the collection include Paul Storr (whose Castlereagh Inkstand, dated 1817–1819, is one of his finest works) and Paul de Lamerie.
The main ironwork gallery covers European wrought and cast iron from the medieval period to the early 20th century. The master of wrought ironwork Jean Tijou is represented by both examples of his work and designs on paper. One of the largest objects is the Hereford Screen, weighing nearly 8 tonnes, 10.5 metres high and 11 metres wide, designed by Sir George Gilbert Scott in 1862 for the chancel in Hereford Cathedral, from which it was removed in 1967. It was made by Skidmore & Company. Its structure of timber and cast iron is embellished with wrought iron, burnished brass and copper. Much of the copper and ironwork is painted in a wide range of colours. The arches and columns are decorated with polished quartz and panels of mosaic.
One of the rarest works in the collection is the 58 cm-high Gloucester Candlestick, dated to c. 1110 and made from gilt bronze; with highly elaborate and intricate intertwining branches containing small figures and inscriptions, it is a tour de force of bronze casting. Also of importance is the Becket Casket, made c. 1180 to contain relics of St Thomas Becket, of gilt copper with enamelled scenes of the saint's martyrdom. Another highlight is the 1351 Reichenau Crozier. The Burghley Nef, a French salt-cellar dated 1527–1528, uses a nautilus shell to form the hull of a vessel, which rests on the tail of a parcel-gilt mermaid, who in turn rests on a hexagonal gilt plinth on six claw-and-ball feet. Both masts have main- and top-sails, and the battlemented fighting-tops are made from gold. These items are displayed in the new Medieval & Renaissance galleries.
Musical instruments
Musical instruments are classified as furniture by the museum, although Asian instruments are held by their relevant departments.
Among the more important instruments owned by the museum are a violin by Antonio Stradivari dated 1699, an oboe that belonged to Gioachino Rossini, and a jewelled spinet of 1571 made by Annibale Rossi. The collection also includes a 1570 virginal said to have belonged to Elizabeth I, and late 19th-century pianos designed by Edward Burne-Jones and Baillie Scott.
The Musical Instruments gallery closed on 25 February 2010, a decision that was highly controversial. An online petition of over 5,100 names on the Parliamentary website led to Chris Smith asking in Parliament about the future of the collection. The answer, from Bryan Davies, was that the museum intended to preserve and care for the collection and keep it available to the public, with objects being redistributed to the British Galleries, the Medieval & Renaissance Galleries, and the planned new galleries for Furniture and for Europe 1600–1800; the Horniman Museum and other institutions were named as possible candidates for loans of material, to ensure that the instruments remained publicly viewable. The Horniman went on to host a joint exhibition of musical instruments with the V&A, and has 35 instruments on loan from the museum.
Paintings (and miniatures)
The collection includes about 1,130 British and 650 European oil paintings, 6,800 British watercolours and pastels, and 2,000 miniatures, for which the museum holds the national collection. Also on loan to the museum, from Queen Elizabeth II, are the Raphael Cartoons: the seven surviving (of an original ten) full-scale designs for tapestries in the Sistine Chapel, depicting the lives of Peter and Paul from the Gospels and the Acts of the Apostles. There is also on display a fresco by Pietro Perugino, dated 1522, from the church of Castello at Fontignano (Perugia), which is among the painter's last works. One of the largest objects in the collection is the Spanish retable of St George, 670 × 486 cm, in tempera on wood, consisting of numerous scenes and painted by Andrés Marzal de Sax in Valencia.
19th-century British artists are well represented. John Constable and J. M. W. Turner are represented by oil paintings, watercolours and drawings. One of the most unusual objects on display is Thomas Gainsborough's experimental showbox with its back-lit landscapes, painted on glass, which allowed them to be changed like slides. Other landscape painters with works on display include Philip James de Loutherbourg, Peter De Wint and John Ward.
In 1857 John Sheepshanks donated 233 paintings, mainly by contemporary British artists, and a similar number of drawings to the museum, with the intention of forming 'A National Gallery of British Art', a role since taken on by Tate Britain; artists represented include William Blake, James Barry, Henry Fuseli, Sir Edwin Henry Landseer, Sir David Wilkie, William Mulready, William Powell Frith, Millais and Hippolyte Delaroche. Although some of Constable's works came to the museum with the Sheepshanks bequest, the majority of the artist's works were donated by his daughter Isabel in 1888, including a large number of oil sketches, the most significant being the 1821 full-size oil sketch for The Hay Wain. Other artists with works in the collection include Bernardino Fungai, Marcus Gheeraerts the Younger, Domenico di Pace Beccafumi, Fioravante Ferramola, Jan Brueghel the Elder, Anthony van Dyck, Ludovico Carracci, Antonio Verrio, Giovanni Battista Tiepolo, Domenico Tiepolo, Canaletto, Francis Hayman, Pompeo Batoni, Benjamin West, Paul Sandby, Richard Wilson, William Etty, Henry Fuseli, Sir Thomas Lawrence, James Barry, Francis Danby, Richard Parkes Bonington and Alphonse Legros.
Richard Ellison's collection of 100 British watercolours was given by his widow in 1860 and 1873 'to promote the foundation of the National Collection of Water-Color Paintings'. Over 500 British and European oil paintings, watercolours and miniatures, and 3,000 drawings and prints, were bequeathed in 1868–1869 by the clergymen Chauncy Hare Townshend and Alexander Dyce.
Several French paintings entered the collection as part of the Jones bequest of 1882, which comprised 260 paintings and miniatures (not all of them French; it included, for example, Carlo Crivelli's Virgin and Child). These are displayed in the galleries of continental art 1600–1800 and include the portrait of François, Duc d'Alençon by François Clouet, works by Gaspard Dughet, works by François Boucher including his portrait of Madame de Pompadour dated 1758, and paintings by Jean François de Troy, Jean-Baptiste Pater and their contemporaries.
Another major Victorian benefactor was Constantine Alexander Ionides, who left 82 oil paintings to the museum in 1901, including works by Botticelli, Tintoretto, Adriaen Brouwer, Jean-Baptiste-Camille Corot, Gustave Courbet, Eugène Delacroix, Théodore Rousseau, Edgar Degas, Jean-François Millet, Dante Gabriel Rossetti and Edward Burne-Jones, as well as watercolours and over a thousand drawings and prints.
The Salting Bequest of 1909 included, among other works, watercolours by J. M. W. Turner. Other watercolourists include: William Gilpin, Thomas Rowlandson, William Blake, John Sell Cotman, Paul Sandby, William Mulready, Edward Lear, James Abbott McNeill Whistler and Paul Cézanne.
There is a copy of Raphael's The School of Athens, over 4 metres by 8 metres in size, painted in 1755 by Anton Raphael Mengs, on display in the eastern Cast Court.
Miniaturists represented in the collection include Jean Bourdichon, Hans Holbein the Younger, Nicholas Hilliard, Isaac Oliver, Peter Oliver, Jean Petitot, Alexander Cooper, Samuel Cooper, Thomas Flatman, Rosalba Carriera, Christian Friedrich Zincke, George Engleheart, John Smart, Richard Cosway and William Charles Ross.
Photography
The collection contains more than 500,000 images dating from the advent of photography, the oldest from 1839. The gallery displays a series of changing exhibits and closes between exhibitions to allow full re-display to take place. As early as 1858, when it was still called the South Kensington Museum, the museum staged the world's first international photographic exhibition.
The collection includes the work of many photographers from Fox Talbot, Julia Margaret Cameron, Viscountess Clementina Hawarden, Gustave Le Gray, Benjamin Brecknell Turner, Frederick Hollyer, Samuel Bourne, Roger Fenton, Man Ray, Henri Cartier-Bresson, Ilse Bing, Bill Brandt, Cecil Beaton (there are more than 8000 of his negatives), Don McCullin, David Bailey, Jim Lee and Helen Chadwick to the present day.
One of the more unusual collections is Eadweard Muybridge's Animal Locomotion photographs of 1887, consisting of 781 plates. These sequences of photographs, taken a fraction of a second apart, capture images of different animals and humans performing various actions. There are several of John Thomson's 1876–77 images of Street Life in London in the collection. The museum also holds James Lafayette's society portraits, a collection of more than 600 photographs dating from the late 19th to early 20th centuries and portraying a wide range of society figures of the period, including bishops, generals, society ladies, Indian maharajas, Ethiopian rulers and other foreign leaders, actresses, people posing in their motor cars, and a sequence of photographs recording the guests at the famous fancy-dress ball held at Devonshire House in 1897 to celebrate Queen Victoria's diamond jubilee.
In 2003 and 2007 Penelope Smail and Kathleen Moffat donated Curtis Moffat's extensive archive to the museum. Moffat created dynamic abstract photographs, innovative colour still lifes and glamorous society portraits during the 1920s and 1930s, and was also a pivotal figure in Modernist interior design. In Paris during the 1920s he collaborated with Man Ray, producing portraits and abstract photograms or "rayographs".
Sculpture
The sculpture collection at the V&A is the most comprehensive holding of post-classical European sculpture in the world. There are approximately 22,000 objects in the collection, covering the period from about 400 AD to 1914. This encompasses, among other periods, Byzantine and Anglo-Saxon ivory sculptures; British, French and Spanish medieval statues and carvings; and the Renaissance, Baroque, Neoclassical, Victorian and Art Nouveau periods. All uses of sculpture are represented, from tomb and memorial to portrait, allegorical, religious and mythical works, statues for gardens including fountains, and architectural decoration. Materials used include marble, alabaster, stone, terracotta, wood, ivory, gesso, plaster, bronze, lead and ceramics.
The collection of Italian Medieval, Renaissance, Baroque and Neoclassical sculpture (both original and in cast form) is unequalled outside Italy. It includes Canova's The Three Graces, which the museum owns jointly with the National Galleries of Scotland. Italian sculptors whose work is held by the museum include Bartolomeo Bon, Bartolomeo Bellano, Luca della Robbia, Giovanni Pisano, Donatello, Agostino di Duccio, Andrea Riccio, Antonio Rossellino, Andrea del Verrocchio, Antonio Lombardo, Pier Jacopo Alari Bonacolsi, Andrea della Robbia, Michelozzo di Bartolomeo, Michelangelo (represented by a freehand wax model and casts of his most famous sculptures), Jacopo Sansovino, Alessandro Algardi, Antonio Calcagni, Benvenuto Cellini (Medusa's head), Agostino Busti, Bartolomeo Ammannati, Giacomo della Porta, Giambologna (Samson Slaying a Philistine, his finest work outside Italy), Bernini (Neptune and Triton), Giovanni Battista Foggini, Vincenzo Foggini (Samson and the Philistines), Massimiliano Soldani Benzi, Antonio Corradini, Andrea Brustolon, Giovanni Battista Piranesi, Innocenzo Spinazzi, Canova, Carlo Marochetti and Raffaelle Monti.
An unusual sculpture is the ancient Roman statue of Narcissus, restored in plaster by Valerio Cioli. The collection includes several small-scale bronzes by Donatello, such as The Ascension with Christ giving the Keys to St Peter and the Lamentation of Christ, as well as works by Alessandro Vittoria, Tiziano Aspetti and Francesco Fanelli. The largest work from Italy is the Chancel Chapel from Santa Chiara, Florence, dated 1493–1500 and designed by Giuliano da Sangallo; 11.1 metres high by 5.4 metres square, it includes a grand sculpted tabernacle by Antonio Rossellino and coloured terracotta decoration.
Rodin is represented by more than 20 works in the museum collection, making it one of the largest collections of the sculptor's work outside France; these were given to the museum by the sculptor in 1914, as acknowledgement of Britain's support of France in the First World War, although the statue of St John the Baptist had been purchased in 1902 by public subscription. Other French sculptors with work in the collection are Hubert Le Sueur, François Girardon, Michel Clodion, Jean-Antoine Houdon, Jean-Baptiste Carpeaux and Jules Dalou.
There are also several Renaissance works by Northern European sculptors in the collection, including work by Veit Stoss, Tilman Riemenschneider, Hendrick de Keyser, Hans Daucher and Peter Flötner. Baroque works from the same region include those of Adriaen de Vries and Sébastien Slodtz. Spanish sculptors with work in the collection include Alonso Berruguete and Luisa Roldán, represented by her Virgin and Child with St Diego of Alcala.
Sculptors, both British and European, who were based in Britain and whose work is in the collection include Nicholas Stone, Caius Gabriel Cibber, Grinling Gibbons, John Michael Rysbrack, Louis-François Roubiliac, Peter Scheemakers, Sir Henry Cheere, Agostino Carlini, Thomas Banks, Joseph Nollekens, Joseph Wilton, John Flaxman, Sir Francis Chantrey, John Gibson, Edward Hodges Baily, Lord Leighton, Alfred Stevens, Thomas Brock, Alfred Gilbert, George Frampton, and Eric Gill. A sample of some of these sculptors' work is on display in the British Galleries.
With the opening of the Dorothy and Michael Hintze sculpture galleries in 2006 it was decided to extend the chronology of the works on display up to 1950; this has involved loans from other museums, including Tate Britain, so works by Henry Moore and Jacob Epstein, along with some of their contemporaries, are now on view. These galleries concentrate on works dated between 1600 and 1950 by British sculptors, works by continental sculptors who worked in Britain, and works bought by British patrons from continental sculptors, such as Canova's Theseus and the Minotaur. The galleries overlooking the garden are arranged by theme: tomb sculpture, portraiture, garden sculpture and mythology. There is also a section covering late 19th- and early 20th-century sculpture, including work by Rodin and other French sculptors such as Dalou, who spent several years in Britain, where he taught sculpture.
Smaller-scale works are displayed in the Gilbert Bayes gallery, which covers medieval sculpture (especially English alabaster work), bronzes and wooden sculptures, and includes demonstrations of techniques such as lost-wax bronze casting.
The majority of the Medieval and Renaissance sculpture is displayed in the new Medieval and Renaissance galleries (opened December 2009).
One of the largest objects in the collection is the rood loft from St. John's Cathedral ('s-Hertogenbosch) in the Netherlands, dated 1610–1613. As much a work of architecture as of sculpture, it is 10.4 metres wide and 7.8 metres high; the architectural framework is of various coloured marbles, including columns, arches and a balustrade, against which are set statues, bas-reliefs and other carvings in alabaster, the work of the sculptor Conrad van Norenberch.
Textiles
The collection of textiles consists of more than 53,000 examples, mainly western European though all populated continents are represented, dating from the 1st century AD to the present; it is the largest such collection in the world. Techniques represented include weaving, printing, quilting, embroidery, lace, tapestry and carpets. The holdings are classified by technique, country of origin and date of production, and are particularly strong in early silks from the Near East, lace, European tapestries and English medieval church embroidery.
The tapestry collection includes a fragment of the Cloth of St Gereon, the oldest known surviving European tapestry. A highlight of the collection is the set of four Devonshire Hunting Tapestries, very rare 15th-century tapestries woven in the Netherlands depicting the hunting of various animals; not just their age but their size makes them unique. The two major English centres of tapestry weaving of the 16th and 17th centuries, Sheldon and Mortlake respectively, are each represented in the collection by several examples. Also included are tapestries from John Vanderbank's workshop, the leading English tapestry manufactory of the late 17th and early 18th centuries. Some of the finest tapestries are examples from the Gobelins workshop, including a set of 'Jason and the Argonauts' dating from the 1750s. Other continental centres of tapestry weaving with work in the collection include Brussels, Tournai, Beauvais, Strasbourg and Florence.
One of the earliest surviving examples of European quilting, the late 14th-century Sicilian Tristan Quilt, is also held by the collection. There are numerous examples of textiles designed by William Morris, including embroidery, woven fabrics, tapestries (among them The Forest tapestry of 1887), rugs and carpets, as well as pattern books and paper designs. The Art Deco period is covered by rugs and fabrics designed by Marion Dorn; from the same period there is a rug designed by Serge Chermayeff.
The collection also includes the Oxburgh Hangings, which were made by Mary, Queen of Scots and Bess of Hardwick. However, the Oxburgh Hangings are on permanent long-term loan at Oxburgh Hall.
Theatre and performance
The V&A holds the national collection of performing arts in the UK, including drama, dance, opera, circus, puppetry, comedy, musical theatre, costume, set design, pantomime, popular music and other forms of live entertainment.
The Theatre & Performance collections were founded in the 1920s when the private collector Gabrielle Enthoven donated her collection of theatrical memorabilia to the V&A. In 1974 two further independent collections were combined with it to form a comprehensive performing arts collection. The collections were displayed at the Theatre Museum, which operated in Covent Garden until closing in 2007. The Theatre & Performance galleries opened at South Kensington in March 2009, tracing the production process of performance, and include a temporary exhibition space. Objects displayed include costumes, set models, wigs, prompt books and posters.
The department holds significant archives documenting current practice and the history of the performing arts. These include the archives of the English Stage Company at the Royal Court Theatre, D'Oyly Carte, and the design collection of the Arts Council. Notable personal archives include those of Vivien Leigh, Peter Brook, Henry Irving and Ivor Novello.
Rock and pop are well represented with the Glastonbury Festival archive, Harry Hammond photographic collection and Jamie Reid archive documenting punk. Costumes include those worn by John Lennon, Mick Jagger, Elton John, Adam Ant, Chris Martin, Iggy Pop, Prince, Shirley Bassey and the stage outfit worn by Roger Daltrey at Woodstock.
In 2024, the museum displayed costumes worn by Taylor Swift over the course of her career. The exhibition, titled Taylor Swift Songbook Trail, was conceived by theatre designer Tom Piper as an "approximately 1 mile long" "journey through V&A South Kensington's galleries" with "13 stops". Most of the displays were dispersed throughout the museum, and the "rarely-seen Prince Consort Gallery" was entirely devoted to the exhibition.
Departments
Education
The education department has wide-ranging responsibilities. It provides information for the casual visitor as well as for school groups, including integrating learning in the museum with the National Curriculum; it provides research facilities for students at degree level and beyond, with information and access to the collections. It also oversees the content of the museum's website in addition to publishing books and papers on the collections, research and other aspects of the museum.
Several areas of the collection have dedicated study rooms, which allow access to objects not currently on display, though in some cases an appointment is required.
The new Sackler education suite, occupying the two lower floors of the Henry Cole Wing, opened in September 2008. It includes lecture rooms and areas for use by schools, which are available during school holidays for use by families and allow direct handling of items from the collection.
V&A Publishing
V&A Publishing, within the education department, works to raise funds for the museum by publishing around 30 books and digital items each year. The company has around 180 books in print.
Activities for children
Activity backpacks are available for children. These are free to borrow and include hands-on activities such as puzzles, construction games and stories related to themes of the museum.
Activities for adults
The Learning Academy offers adult courses as well as training for professionals in the culture and heritage sector, both nationally and internationally, with dedicated facilities for teaching, study and close examination of the collections.
Research and conservation
Research is an important area of the museum's work. It includes the identification and interpretation of individual objects; systematic studies that develop public understanding of the art and artefacts of many of the world's great cultures; and visitor research and evaluation to discover the needs of visitors and their experiences of the museum. Since 1990 the museum has published research reports covering all areas of the collections.
Conservation is responsible for the long-term preservation of all the collections held by the V&A and the V&A Museum of Childhood, with conservators specialising in particular areas. "Preventive" conservation includes performing surveys and assessments, and advising on correct packaging, mounting and handling procedures during movement and display to reduce the risk of damage; it also covers controlling the museum environment (for example, temperature and light) and preventing pests (primarily insects) from damaging artefacts. The other major category, "interventive" conservation, includes cleaning and reintegration to strengthen fragile objects, reveal original surface decoration and restore shape. Interventive treatment makes an object more stable, but also more attractive and comprehensible to the viewer, and is usually undertaken on items that are to go on public display.
National Art Library
In the early 2000s the National Art Library merged with the Prints, Paintings and Drawings department to form the Word and Image Department. The library and its reading rooms are located on the second floor of the V&A, though some collections, particularly Archives, are held off-site.
Partnerships
The V&A works with a small number of partner organisations in Sheffield, Dundee and Blackpool to provide a regional presence.
In 2007 the V&A entered discussions with the University of Dundee, the University of Abertay, Dundee City Council and the Scottish Government with a view to opening a new £43 million gallery in Dundee, which would use the V&A brand but be funded and operated independently. The cost was estimated at £76 million in 2015, making it the most expensive gallery project ever undertaken in Scotland. V&A Dundee opened on 15 September 2018. Dundee City Council pays for a major part of the running costs; the V&A does not contribute financially, but provides expertise, loans and exhibitions.
Plans for a new gallery in Blackpool are also under consideration. This follows earlier plans to move the theatre collection to a new £60m museum in Blackpool, which failed due to lack of funding. The V&A exhibits twice a year at the Millennium Galleries in partnership with Museums Sheffield.
The V&A is one of 17 museums across Europe and the Mediterranean participating in a project called Discover Islamic Art. Developed by the Brussels-based consortium Museum With No Frontiers, this online "virtual museum" brings together more than 1,200 works of Islamic art and architecture into a single database. In 2009, the V&A established an art award, the Jameel Prize, for "contemporary art and design inspired by Islamic tradition", in partnership with Art Jameel.
The V&A is an important hub for the London Design Festival and hosts many festival-related exhibitions and events.
The museum is a non-departmental public body sponsored by the Department for Digital, Culture, Media and Sport. As with other national British museums, entrance is free.
Temporary exhibitions
The V&A has large galleries devoted to temporary exhibitions. A typical year sees more than a dozen different exhibitions staged, covering all areas of the collections. Notable exhibitions have included:
Britain Can Make It, 1946
Hats: An Anthology, 2009
Power of Making, 2011
David Bowie Is, 2013
Food: Bigger Than the Plate, 2019
Concealed Histories: Uncovering the Story of Nazi Looting, 2019–2021
The V&A came second among London's top paid exhibitions in 2015 with the record-breaking Alexander McQueen show, which attracted 3,472 visitors a day.
Controversies
In November 2019 the art photographer Nan Goldin led a "die-in" in the Sackler courtyard entrance of the museum, in protest against the V&A's acceptance of donations from the Sackler family, which owned Purdue Pharma, makers of the addictive opioid painkiller OxyContin. The museum's director, Tristram Hunt, defended the museum's relationship with the Sacklers, saying it was proud to have received support from the family over a number of years.
Also in 2019 the V&A received sponsorship for an exhibition on cars from Bosch, which had been fined 90 million euro over its part in the diesel emissions scandal. A V&A spokeswoman said: "Bosch is at the forefront of innovation, with a focus on delivering sustainable solutions for the mobility of the future."
Extinction Rebellion staged a dirty protest outside the V&A Dundee, in protest against Barclays Bank's sponsorship of the museum's Mary Quant exhibition.
In 2021 plans to cut the museum's costs by reorganising its collections by date rather than by material were abandoned after critics said it would lead to staff cuts and thereby a loss of expertise.
Media
Starting in March 2020, BBC Two transmitted a series of six programmes depicting the behind-the-scenes work of the museum's curators and restorers, entitled Secrets of the Museum.
The Sculpture Gallery featured in the 2023 romantic comedy Red, White & Royal Blue.
See also
List of most visited art museums
Director of the Victoria and Albert Museum
Philippa Glanville
V&A Digital Futures events on digital art
List of design museums
Patric Prince
References
External links
V&A websites:
A list of past exhibitions held at the V&A
Historical images of V&A
Construction of V&A Museum
The V&A Museum prior to opening
Victoria and Albert Museum at the Survey of London online:
Architectural history (to 1975) and description
Plans
Architecture of the V&A
Albertopolis: Victoria and Albert Museum
Victoria and Albert Museum within Google Arts & Culture
H. A. Rey (born Hans Augusto Reyersbach; September 16, 1898 – August 26, 1977) was a German-born American illustrator and author, known best for the series of children's picture books that he and his wife Margret Rey created about Curious George.
Early life
Hans Augusto Reyersbach was born in Hamburg, German Empire, on September 16, 1898. He and his wife, Margret, were both German Jews. They first met in Hamburg at the 16th birthday party of Margret's sister, and met again in Brazil, where Rey was working as a salesman of bathtubs and Margret had gone to escape the rise of Nazism in Germany. They married in 1935 and moved to Paris in August of that year, living in Montmartre until June 1940, when they fled the city on bicycles, carrying the Curious George manuscript with them.
Curious George
While in Paris, Rey's animal drawings came to the attention of a French publisher, who commissioned him to write a children's book. The characters in Cecily G. and the Nine Monkeys included an impish monkey named Curious George, and the couple then decided to write a book focused entirely on him. The outbreak of World War II interrupted their work. Being Jews, the Reys decided to flee Paris before the Nazis invaded the city. Hans assembled two bicycles, and they left the city just a few hours before it fell. Among the meager possessions they brought with them was the illustrated manuscript of Curious George.
The Reys' odyssey took them to Bayonne, France, where they were issued life-saving visas signed by Portuguese Vice-Consul Manuel Vieira Braga (following instructions from Aristides de Sousa Mendes) on June 20, 1940. They crossed the Spanish border, where they bought train tickets to Lisbon. From there, they returned to Brazil, where they had met five years earlier, but this time they continued on to New York. The Reys escaped Europe carrying the manuscript to the first Curious George book, which was published in New York by Houghton Mifflin in 1941. They originally planned to use watercolor illustrations, but since they were responsible for the color separation, Rey changed these to the cartoon-like images that continue to be featured in each of the books. A collector's edition with the original watercolors has since been released.
Curious George was an instant success, and the Reys were commissioned to write more adventures of the mischievous monkey and his friend, the Man with the Yellow Hat. They wrote seven stories in all, with Hans mainly doing the illustrations and Margret working mostly on the stories, though they both admitted to sharing the work and cooperating fully in every stage of development. At first, however, covers omitted Margret's name. In later editions, this was changed, and Margret now receives full credit for her role in developing the stories.
Curious George Takes a Job was named to the Lewis Carroll Shelf Award list in 1960.
In 1963, the Reys relocated to Cambridge, Massachusetts, to a house near Harvard Square, and lived there until Rey died on August 26, 1977.
In the 1990s, friends of the Reys founded a children's bookstore named Curious George & Friends (formerly Curious George Goes to Wordsworth), which operated in Harvard Square until 2011. A new Curious George themed store opened in 2012, The World's Only Curious George Store, which moved to Central Square in 2019.
Star charts
Rey's interest in astronomy began during World War I and led to his desire to redraw constellation diagrams, which he found difficult to remember, so that they were more intuitive. This led to the 1952 publication of The Stars: A New Way to See Them. His constellation diagrams were adopted widely and now appear in many astronomy guides, such as Donald H. Menzel's A Field Guide to the Stars and Planets. As of 2008, The Stars: A New Way to See Them and a simplified presentation for children, Find the Constellations, are still in print. A new edition of Find the Constellations was released in 2008, updated with modern fonts, the new status of Pluto, and more current measurements of planetary sizes and orbital radii.
Collected papers
The University of Oregon holds H. A. Rey papers dated 1940 to 1961, dominated by correspondence, primarily between Rey and his American and British publishers.
The de Grummond Children's Literature Collection in Hattiesburg, Mississippi, holds more than 300 boxes of Rey papers dated 1973 to 2002.
Dr. Lena Y. de Grummond, a professor in the field of library science at the University of Southern Mississippi, contacted the Reys in 1966 about USM's new children's literature collection. H. A. and Margret donated a pair of sketches at the time. When Margret Rey died in 1996, her will designated that the entire literary estate of the Reys be donated to the de Grummond Collection.
Books written by H. A. Rey
Cecily G. and the Nine Monkeys
Curious George
Curious George Takes a Job
Curious George Rides a Bike
Curious George Gets a Medal
Curious George Learns the Alphabet
Curious George Goes to the Hospital
Feed the Animals
Find the Constellations
Elizabite - Adventures of a Carnivorous Plant
How Do You Get There?
Pretzel
The Stars: A New Way to See Them
Where's My Baby?
See the Circus
Tit for Tat
Billy's Picture
Whiteblack the Penguin Sees the World
Au Clair de la Lune and other French Nursery Songs (1941)
Spotty (1945)
Mary had a Little Lamb and other Nursery Songs (1951)
Humpty Dumpty and other Mother Goose Songs (©1943 Harper & Brothers)
Books illustrated by H. A. Rey
Dem Andenken Christian Morgensterns 12 Lithographien zu seinem Werk, von Hans Reyersbach (= H. A. Rey), signiert und mit Text in Bleistift HR 22 (1922)
Die Sommerfrische: 10 Idyllen in Linol-Schnitt, von Hans Reyersbach (= H. A. Rey), Berlin (1923)
Grotesken - 12 Lithographien zu Christian Morgensterns Grotesken von Hans Reyersbach (= H. A. Rey). Neue Folge. 400 Exemplare, Hamburg Kurt Enoch Verlag (1923)
Curious George, written by Margret Rey (1941)
Elizabite - The Adventures of a Carnivorous Plant (1942)
Don't Frighten the Lion (1942)
Katy No-Pocket (1944)
Pretzel, written by Margret Rey (1944)
We Three Kings and other Christmas Carols (1944)
Curious George Takes a Job, written by Margret Rey (1947)
Curious George Rides a Bike, written by Margret Rey (1952)
Curious George Gets a Medal, written by Margret Rey (1957)
Curious George Flies a Kite, written by Margret Rey (1958)
Curious George Learns the Alphabet, written by Margret Rey (1963)
Curious George Goes to the Hospital, written by Margret Rey (1966)
Wordless Novel
Zebrology. Chatto and Windus; London, England; (1937)
1898 births
1977 deaths
Artists from Cambridge, Massachusetts
American academics of English literature
American children's book illustrators
German children's book illustrators
American children's writers
German children's writers
German illustrators
German male writers
Curious George
Jewish American artists
Jewish American children's writers
Jewish emigrants from Nazi Germany to the United States
People associated with astronomy
Writers from Cambridge, Massachusetts
Writers from Hamburg
Writers who illustrated their own writing
Wordless novels
Wake-equalising duct
A wake-equalising duct is a ship hull appendage mounted in the inflow region of a screw propeller, intended to realign and adjust the velocity of the wake at the inflow to the propeller disc. The wake velocity may be straightened out, given contra-rotational swirl, accelerated, or some combination of these effects, all of which can improve propeller efficiency, giving either higher thrust or a reduced power requirement for the same thrust.
Structure
The wake-equalising duct is a static flow modifier attached to the hull upstream of the propeller within the wake. It may be made of a single circular section duct directly ahead of the propeller, with fixed internal and/or external fins to impart rotational changes to the wake flow, or as two semicircular ducts mounted further forwards, one on each side of the hull, aligned to provide the desired flow modification. The duct(s) may be eccentrically mounted to include mainly the thickest part of the wake. This offset would usually be above the centre of the shaft.
Function
There are four components to the wake modification.
The wake is accelerated by the shape of the duct to be closer to the free-stream velocity of water over the rest of the propeller disc.
The direction of flow is aligned more closely with the free stream inflow.
A contra-rotational swirl may be induced, which will reduce outflow vorticity.
Flow separation at the afterbody may be reduced, which can reduce the thrust-deduction factor.
These have the effect of increasing the efficiency of the propeller, as it works in a more uniform flow and less energy is lost to the hub vortex. The more uniform inflow conditions can also significantly reduce propeller blade vibration.
Fuel savings of up to 12% have been claimed. The system has the greatest benefit on hulls where the inflow conditions are inherently more disturbed.
Versions
The Becker Mewis Duct is a version based on an eccentric annular nozzle fitted between the hull and the propeller, supported by a number of flow directing radial vanes, each of which is angled to optimise inflow direction. The design is patented and has been in use since 2009.
The Schneekluth Wake Equalising Duct is a version using two semicircular nozzles, one on each side of the hull, centred above the shaft centreline, and angled to provide contra-rotational outflow swirl.
Naval architecture
Shipbuilding
Marine propulsion
Department of Biochemistry, University of Oxford
The Department of Biochemistry of Oxford University is located in the Science Area in Oxford, England. It is one of the largest biochemistry departments in Europe. The Biochemistry Department is part of the University of Oxford's Medical Sciences Division, the largest of the university's four academic divisions, which has been ranked first in the world for biomedicine.
History
The Department of Biochemistry at Oxford University began as the physiological chemistry section of the Physiology Department, and acquired its own separate department and building in the 1920s. In 1920, Benjamin Moore was elected to the position of the Whitley Professor of Biochemistry, the newly established Chair of Biochemistry at Oxford University. He was followed by Rudolph Peters in 1923, and an endowment of £75,000 was soon granted by the Rockefeller Foundation for the construction of a new departmental building, purchase of equipment, and its maintenance. The Biochemistry Department building opened in 1927.
In 1954, Hans Krebs was appointed the Whitley Chair of Biochemistry, and his appointment brought greater prominence to the department. He brought with him the Medical Research Council unit established to conduct research on cell metabolism. In 1955, a second professorship in the department, the Iveagh Chair of Microbiology, was established with funding from Guinness and the sub-department of Microbiology created, with Donald Woods its first holder. The eight-storey Hans Krebs Building was constructed in 1964 with funds from the Rockefeller Foundation. Krebs was succeeded by Rodney Porter in 1967. Genetics was brought into the Biochemistry Department when Walter Bodmer was appointed the first Professor of Genetics in 1970. The Laboratory of Molecular Biophysics, first established in the Zoology Department with support from Krebs and also linked to the Physical Chemistry Laboratory of the Chemistry Department, became part of the Biochemistry Department. It moved into the Rex Richards building built in 1984, with David Phillips the Professor in Molecular Biophysics. The Oxford Glycobiology Institute, headed by Raymond Dwek and housed in the Rodney Porter Building, opened in 1991.
The department is now part of the Medical Sciences Division of Oxford University, under the Divisional Boards formed in 2000. In 2006, two older biochemistry buildings were demolished, followed by two more, including the Hans Krebs Tower, in 2014, to make way for the two-phase construction of the New Biochemistry Building. Francis Barr, the EP Abraham Professor of Mechanistic Cell Biology, became head of the Biochemistry Department in January 2019, replacing Mark Sansom, the David Phillips Professor in Molecular Biophysics.
Research
The department is sub-divided into the following research areas:
Cell Biology, Development and Genetics
Chromosomal and RNA Biology
Infection and Disease Processes
Microbiology and Systems Biology
Structural Biology and Molecular Biophysics
Academic staff
There are around 400 research staff, with about 50 independent principal investigators who lead research groups that may range from a few people to forty or more. Members of other departments also contribute to teaching, including lecturers in physiology, pathology, pharmacology, clinical biochemistry and zoology. The department hosts the Oxford University Biochemical Society, a graduate student association that invites speakers to the University of Oxford. The head of department is Professor Francis Barr. Other members of the academic staff include Judy Armitage, Elspeth Garman, Jonathan Hodgkin, Kim Nasmyth, Neil Brockdorff, Rob Klose and Alison Woollard.
Buildings
The department currently has two main buildings:
The Dorothy Crowfoot Hodgkin building
The Rex Richards building (housing the NMR facility in the basement)
Until 2006, two older buildings housing genetics (the Walter Bodmer building) and biochemistry (the Rudolph Peters building) were also part of the department. However, these were demolished in 2006 to make way for the first phase of the construction of the New Biochemistry building, completed in October 2008. Until 2008 biochemistry also occupied the Donald Woods building and the Hans Krebs Tower, which were demolished in 2014 for the second phase of the construction. The New Biochemistry building was renamed Dorothy Crowfoot Hodgkin building in 2022. Until 2022 biochemistry also occupied the Rodney Porter building (Oxford Glycobiology Institute).
The New Biochemistry building houses interdisciplinary research in the biosciences, including physiology, chemistry, biochemistry, and clinical neurosciences. The department moved into the purpose-built new biochemistry building during the autumn of 2008 which was designed to promote interaction and collaboration as well as provide facilities for all staff. The New Biochemistry building houses a substantial amount of contemporary art.
Former departmental buildings
Biochemistry research institutes
Biological research institutes in the United Kingdom
Biology education in the United Kingdom
Biochemistry
Research institutes in Oxford
ImmunoGen
ImmunoGen, Inc. was a biotechnology company focused on the development of antibody-drug conjugate (ADC) therapeutics for the treatment of cancer. ImmunoGen was founded in 1981 and was headquartered in Waltham, Massachusetts.
An ImmunoGen ADC contains a manufactured antibody that binds to a target found on cancer cells, with one of the company's potent cell-killing agents attached as a "payload". The antibody serves to deliver the cell-killing agent specifically to cancer cells bearing its target and the payload serves to kill these cells. In some cases, the antibody also has anticancer activity.
In November 2023, AbbVie, an American pharmaceutical company, announced it was buying ImmunoGen for $10.1 billion.
Linkage technology
Currently approved ADCs with ImmunoGen technology employ one of the company's maytansinoid cell-killing agents, either DM1 or DM4, or one of the company's DNA-acting IGN payloads.
DM1 attached to an antibody with ImmunoGen's thioether linker is designated "emtansine" in the resulting INN name
trastuzumab emtansine, marketed as Kadcyla
DM1 attached to an antibody with ImmunoGen's SPP linker is called "mertansine".
DM4 attached with ImmunoGen's SPDB linker is called "ravtansine"
indatuximab ravtansine (BT062) targeting multiple myeloma,
anetumab ravtansine (BAY94-9343) targeting mesothelin (to treat mesothelioma), starting phase II trial in 2016
coltuximab ravtansine (SAR3419) targeting CD19 to treat acute lymphoblastic leukemia (ALL).
DM4 attached with ImmunoGen's sSPDB linker is called "soravtansine"
mirvetuximab soravtansine
ImmunoGen also developed isatuximab, a monoclonal antibody without linkage to a toxin.
Pipeline
ImmunoGen uses its ADC technology to develop its own product candidates. Mirvetuximab soravtansine, which has been submitted for FDA approval, is being developed as a monotherapy (standalone treatment) for ovarian cancer. Other products currently in clinical-stage development include:
IMGN853, (mirvetuximab soravtansine) targeting FRα, and using DM4.
IMGN529, targeting CD37 for Non-Hodgkin lymphoma (NHL).
IMGN289, targeting EGFR
IMGN779, a CD33-Targeted Antibody-Drug Conjugate (ADC) with a new DNA-Alkylating agent acts on AML Cells.
IMGN632, anti-CD123
SAR408701 in collaboration with Sanofi
SAR428926 in collaboration with Sanofi
SAR566658 in collaboration with Sanofi
SAR650984 in collaboration with Sanofi
LY3076226 in collaboration with Eli Lilly
PCA062 in collaboration with Novartis
an undisclosed antibody in collaboration with Amgen
Collaborations & licensing
The company also selectively out licenses limited use of its technology to other companies. Companies licensing ImmunoGen's technology include Amgen, Bayer HealthCare, Biotest, Genentech/Roche, Eli Lilly, Novartis, Sanofi, and Takeda. Roche's Kadcyla (ado-trastuzumab emtansine) utilizes ImmunoGen's ADC technology. It has been approved and launched in a number of countries, including the US, where it is marketed by Genentech, a member of the Roche Group. In October 2015, the company disclosed that Kadcyla had failed to meet its primary endpoint in the Phase II/III GATSBY trial investigating the second line treatment of HER2-positive advanced gastric cancer.
Companies established in 1981
Companies formerly listed on the Nasdaq
Defunct pharmaceutical companies of the United States
Health care companies based in Massachusetts
2024 mergers and acquisitions
Diaporthe eres
Diaporthe eres is a fungal plant pathogen and the type species of the genus Diaporthe. It causes canker disease in a wide variety of hosts. The species has a long history, having been described many times under various synonyms; for instance, the fungus was illustrated by James Sowerby in 1803 under the name Sphaeria ciliaris, attributed to Bulliard. The name D. eres has been proposed for conservation in order to avoid bothersome name changes due to priority.
Fungal plant pathogens and diseases
eres
Fungi described in 1801
Fungus species
XBRL
XBRL (eXtensible Business Reporting Language) is a freely available and global framework for exchanging business information. XBRL allows the expression of semantics commonly required in business reporting. The standard was originally based on XML, but now additionally supports reports in JSON and CSV formats, as well as the original XML-based syntax. XBRL is also increasingly used in its Inline XBRL variant, which embeds XBRL tags into an HTML document. One common use of XBRL is the exchange of financial information, such as in a company's annual financial report. The XBRL standard is developed and published by XBRL International, Inc. (XII).
XBRL is a standards-based way to communicate and exchange business information between business systems. These communications are defined by metadata set out in taxonomies, which capture the definition of individual reporting concepts as well as the relationships between concepts and other semantic meaning. Information being communicated or exchanged is provided within an XBRL instance.
Early users of XBRL included regulators such as the U.S. Federal Deposit Insurance Corporation and the Committee of European Banking Supervisors (CEBS). Common functions in many countries that make use of XBRL include regulators of stock exchanges and securities, banking regulators, business registrars, revenue reporting and tax-filing agencies, and national statistical agencies.
A list of known XBRL projects is published by XBRL International. Within the last ten years, the Securities and Exchange Commission (SEC), the United Kingdom's HM Revenue and Customs (HMRC), and Singapore's Accounting and Corporate Regulatory Authority (ACRA), had begun to require companies to use it, and other regulators were following suit. Development of the SEC's initial US GAAP Taxonomy was led by XBRL US and was accepted and deployed for use by public companies in 2008 in phases, with the largest filers going first: foreign companies which use International Financial Reporting Standards (IFRS) are expected to submit their financial returns to the SEC using XBRL once the IFRS taxonomy has been accepted by the SEC. In the UK in 2011, both HMRC and Companies House accepted XBRL in the iXBRL format. XBRL was adopted by the Ministry of Corporate Affairs (MCA) of India for filing financial and costing information with the Central Government.
Specification
The current version of the base XBRL specification is 2.1, with errata corrections.
The current version of the Inline XBRL specification is 1.1
Conformance suites are available to test processors of XBRL and Inline XBRL documents.
XBRL document structure
In typical usage, XBRL consists of an XBRL instance, containing primarily the business facts being reported, and a collection of taxonomies (called a Discoverable Taxonomy Set (DTS)), which define metadata about these facts, such as what the facts mean and how they relate to one another. XBRL uses XML Schema, XLink, and XPointer standards.
XBRL Instance
The XBRL instance begins with the <xbrl> root element. There may be more than one XBRL instance embedded in a larger XML document. An XBRL instance is also known as an XBRL file. The XBRL instance itself holds the following information:
Business Facts – facts can be divided into two categories
Items are facts holding a single value. They are represented by a single XML element with the value as its content.
Tuples are facts holding multiple values. They are represented by a single XML element containing nested Items or Tuples.
In the design of XBRL, all Item facts must be assigned a context.
Contexts define the entity, e.g., company or individual, to which the fact applies, the period of time the fact is relevant, and an optional scenario. Date and time information appearing in the period element must conform to ISO 8601. Scenarios provide further contextual information about the facts, such as whether the business values reported are actual, projected, or budgeted.
Units define the units used by numeric or fractional facts within the document, such as USD, shares. XBRL allows more complex units to be defined if necessary. Facts of a monetary nature must use a unit from the ISO 4217 namespace.
Footnotes use XLink to associate one or more facts with some content.
References to XBRL taxonomies, typically through schema references. It is also possible to link directly to a linkbase.
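Beyond a simple measure such as iso4217:EUR, a ratio unit (for example, earnings per share) is built with xbrli:divide. A minimal sketch; the unit id "EPS" is illustrative:

```xml
<!-- illustrative unit definition: euros per share -->
<xbrli:unit id="EPS">
  <xbrli:divide>
    <xbrli:unitNumerator>
      <xbrli:measure>iso4217:EUR</xbrli:measure>
    </xbrli:unitNumerator>
    <xbrli:unitDenominator>
      <xbrli:measure>xbrli:shares</xbrli:measure>
    </xbrli:unitDenominator>
  </xbrli:divide>
</xbrli:unit>
```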
This is an example of a fictitious Dutch company's International Financial Reporting Standards (IFRS) statement instance file:
<?xml version="1.0" encoding="UTF-8"?>
<xbrli:xbrl
xmlns:ifrs-gp="http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15"
xmlns:iso4217="http://www.xbrl.org/2003/iso4217"
xmlns:xbrli="http://www.xbrl.org/2003/instance"
xmlns:xbrll="http://www.xbrl.org/2003/linkbase"
xmlns:xlink="http://www.w3.org/1999/xlink">
<xbrll:schemaRef xlink:href="http://www.org.com/xbrl/taxonomy" xlink:type="simple"/>
<ifrs-gp:OtherOperatingIncomeTotalFinancialInstitutions contextRef="J2004"
decimals="0" unitRef="EUR">38679000000</ifrs-gp:OtherOperatingIncomeTotalFinancialInstitutions>
<ifrs-gp:OtherAdministrativeExpenses contextRef="J2004"
decimals="0" unitRef="EUR">35996000000</ifrs-gp:OtherAdministrativeExpenses>
<ifrs-gp:OtherOperatingExpenses contextRef="J2004"
decimals="0" unitRef="EUR">870000000</ifrs-gp:OtherOperatingExpenses>
...
<ifrs-gp:OtherOperatingIncomeTotalByNature contextRef="J2004"
decimals="0" unitRef="EUR">10430000000</ifrs-gp:OtherOperatingIncomeTotalByNature>
<xbrli:context id="BJ2004">
<xbrli:entity>
<xbrli:identifier scheme="www.iqinfo.com/xbrl">ACME</xbrli:identifier>
</xbrli:entity>
<xbrli:period>
<xbrli:instant>2004-01-01</xbrli:instant>
</xbrli:period>
</xbrli:context>
<xbrli:context id="EJ2004">
<xbrli:entity>
<xbrli:identifier scheme="www.iqinfo.com/xbrl">ACME</xbrli:identifier>
</xbrli:entity>
<xbrli:period>
<xbrli:instant>2004-12-31</xbrli:instant>
</xbrli:period>
</xbrli:context>
<xbrli:context id="J2004">
<xbrli:entity>
<xbrli:identifier scheme="www.iqinfo.com/xbrl">ACME</xbrli:identifier>
</xbrli:entity>
<xbrli:period>
<xbrli:startDate>2004-01-01</xbrli:startDate>
<xbrli:endDate>2004-12-31</xbrli:endDate>
</xbrli:period>
</xbrli:context>
<xbrli:unit id="EUR">
<xbrli:measure>iso4217:EUR</xbrli:measure>
</xbrli:unit>
</xbrli:xbrl>
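The instance above contains only item facts. A tuple fact, by contrast, is a single element nesting its member items. The following sketch assumes a hypothetical taxonomy (prefix ci) that defines a directorCompensation tuple; none of these concepts exist in the IFRS taxonomy:

```xml
<!-- hypothetical tuple fact: the ci concepts are illustrative -->
<ci:directorCompensation>
  <ci:directorName contextRef="J2004">J. Jansen</ci:directorName>
  <ci:directorRemuneration contextRef="J2004"
      decimals="0" unitRef="EUR">250000</ci:directorRemuneration>
</ci:directorCompensation>
```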
XBRL Taxonomy
An XBRL Taxonomy is a collection of taxonomy schemas and linkbases. A taxonomy schema is an XML Schema document (file). Linkbases are XML documents (files) which follow the XLink specification. The schema must ultimately extend the XBRL instance schema document and typically extends other published XBRL schemas on the xbrl.org website.
Taxonomy schemas define Item and Tuple "concepts" using <xsd:element> elements. Concepts provide names for the fact and indicate whether or not it is a tuple or an item, the data type (such as monetary, numeric, fractional, or textual), and potentially more metadata. Items and Tuples can be regarded as "implementations" of concepts, or specific instances of a concept. A good analogy for those familiar with object oriented programming would be that Concepts are the classes and Items and Tuples are Object instances of those classes. This is the source of the use of the "XBRL instance" terminology. In addition to defining concepts, taxonomy schemas reference linkbase documents. Tuples instances are 1..n relationships with their parents; their metadata is simply the collection of their attributes.
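For illustration, a concept declaration in a taxonomy schema might look like the following; the name matches the instance example earlier, but the attribute values are a sketch rather than the actual IFRS taxonomy source:

```xml
<!-- sketch of a monetary item concept declaration -->
<xsd:element name="OtherOperatingExpenses"
    id="ifrs-gp_OtherOperatingExpenses"
    type="xbrli:monetaryItemType"
    substitutionGroup="xbrli:item"
    xbrli:periodType="duration"
    xbrli:balance="debit"
    nillable="true"/>
```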
Linkbases are a collection of Links, which themselves are a collection of locators, arcs, and potentially resources. Locators are elements that essentially reference a concept and provide an arbitrary label for it. In turn, arcs are elements indicating that a concept links to another concept by referencing the labels defined by the locators. Some arcs link concepts to other concepts. Other arcs link concepts to resources, the most common of which are human-readable labels for the concepts. The XBRL 2.1 specification defines five different kinds of linkbases.
Label Linkbase
Reference Linkbase
Calculation Linkbase
Definition Linkbase
Presentation Linkbase
Label Linkbase
This linkbase provides human readable strings for concepts. Using the label linkbase, multiple languages can be supported, as well as multiple strings within each language.
XBRL aims to become a worldwide standard for electronic business reporting. This requires taxonomies to present business data in many different languages. Therefore, it is important to be able to create an element that is assigned with labels for different languages. There may also be different labels for different purposes. All labels are stored and linked to the elements in a label linkbase.
Elements defined in a schema are built to convey accounting meaning to computers. In order to make it easier for computers to process their names, they have to obey certain rules. For example, the use of spaces is not allowed, so 'Cash and Cash Equivalents' would be named 'CashAndCashEquivalents'. Additionally, big taxonomies such as IFRS obey specific rules of naming and labelling to ensure consistency within the schema. For example, there could be a list of words that are excluded from names (e.g., 'and', 'of'), or words that appear only in a particular order (e.g., 'Net' or 'Total' at the end of the label, after a comma).
In the label linkbase, elements are connected to human readable labels using "concept-label" arcrole.
As mentioned above, elements can be assigned to labels in different languages. An example that describes definitions of labels of the IFRS element AssetsTotal in English, German and Polish is provided below.
<label xlink:type="resource"
xlink:role="http://www.xbrl.org/2003/role/label"
xlink:label="ifrs_AssetsTotal_lbl"
xml:lang="en">Assets, Total</label>
<label xlink:type="resource"
xlink:role="http://www.xbrl.org/2003/role/label"
xlink:label="ifrs_AssetsTotal_lbl"
xml:lang="de">Vermögenswerte, Gesamt</label>
<label xlink:type="resource"
xlink:role="http://www.xbrl.org/2003/role/label"
xlink:label="ifrs_AssetsTotal_lbl"
xml:lang="pl">Aktywa, Razem</label>
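Inside the label linkbase, such label resources are tied back to their concept by a locator and a concept-label arc within the same extended link. A sketch; the schema file name ifrs-gp.xsd is illustrative:

```xml
<labelLink xlink:type="extended"
    xlink:role="http://www.xbrl.org/2003/role/link">
  <!-- locator pointing at the concept declaration in the schema -->
  <loc xlink:type="locator"
      xlink:href="ifrs-gp.xsd#ifrs-gp_AssetsTotal"
      xlink:label="ifrs_AssetsTotal"/>
  <!-- arc from the concept locator to the label resources above -->
  <labelArc xlink:type="arc"
      xlink:arcrole="http://www.xbrl.org/2003/arcrole/concept-label"
      xlink:from="ifrs_AssetsTotal" xlink:to="ifrs_AssetsTotal_lbl"/>
</labelLink>
```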
To distinguish between languages, XBRL uses the XML attribute xml:lang. Taxonomy creators may also define different labels for one element. One of the ideas of XBRL is that the information about the period and currency for which an element is reported is not contained within the element definition, but is described by a context in instance documents. In financial reporting, on the other hand, many terms express the date for which they are reported, for instance Property, Plant and Equipment at the beginning of year and Property, Plant and Equipment at the end of year. XBRL therefore allows the creation of different labels depending on the context in which an element will be used.
<label xlink:type="resource"
xlink:role="http://www.xbrl.org/2003/role/label"
xlink:label="ifrs_PropertyPlantAndEquipment_lbl"
xml:lang="en">Property, Plant and Equipment, Net</label>
<label xlink:type="resource"
xlink:role="http://www.xbrl.org/2003/role/periodStartLabel"
xlink:label="ifrs_PropertyPlantAndEquipment_lbl"
xml:lang="en">Property, Plant and Equipment, Net, Beginning Balance</label>
<label xlink:type="resource"
xlink:role="http://www.xbrl.org/2003/role/periodEndLabel"
xlink:label="ifrs_PropertyPlantAndEquipment_lbl"
xml:lang="en">Property, Plant and Equipment, Net, Ending Balance</label>
The example above shows how three different labels are assigned to one element by applying different role attributes on labels.
Reference Linkbase
This linkbase associates concepts with citations of some body of authoritative literature.
Financial concepts appearing on business reports more often than not stem from regulatory documents issued by authorities. For example, the IFRS Taxonomy describes financial reports prepared based on IFRSs (Bound Volume).
Elements defined by this taxonomy refer to the specific terms and concepts explained in the standards. For this reason, a taxonomy is often provided with a reference linkbase that presents relationships between elements and external regulations or standards (the other solution is to enclose documentation in label linkbase). This helps instance creators and users understand the intended meaning of each element and provides support for its inclusion in the taxonomy.
The reference layer does not contain the full text of the regulations. Instead, it points to source documents by identifying their name and indicating the relevant paragraphs and clauses. This connection is created using "concept-reference" arcrole.
There are several types of references that could be provided for each element.
<reference xlink:type="resource"
xlink:role="http://www.xbrl.org/2003/role/presentationRef"
xlink:label="CashFlowsFromUsedInOperationsTotal_ref">
<ref:Name>IAS</ref:Name>
<ref:Number>7</ref:Number>
<ref:Paragraph>14</ref:Paragraph>
</reference>
<reference xlink:type="resource"
xlink:role="http://www.xbrl.org/2003/role/measurementRef"
xlink:label="CashFlowsFromUsedInOperationsTotal_ref">
<ref:Name>IAS</ref:Name>
<ref:Number>7</ref:Number>
<ref:Paragraph>18</ref:Paragraph>
<ref:Subparagraph>a</ref:Subparagraph>
</reference>
The example above indicates references for Cash Flow from (Used in) Operations. First, it provides a reference to a document which explains how and where the element should be presented in terms of its placement and labeling. In IAS 7, paragraph 14 we read that the concept Cash Flows from Operating Activities exists and what it is derived from. Second, the measurement reference provides explanations about what determines the value of the element and how it should be calculated. This description can be found in IAS 7 paragraph 18.a.
XBRL also allows an element to be assigned other types of references containing examples, commentaries, etc.
Calculation Linkbase
This linkbase associates concepts with other concepts so that values appearing in an instance document may be checked for consistency.
The idea of the calculation linkbase is to improve the quality of an XBRL report. It contains definitions of basic validation rules which apply to all instance documents referring to a particular taxonomy. The calculation linkbase arranges monetary elements hierarchically, so that lower-level elements sum up to (or are subtracted from) one another, with the upper-level concept being the result of these operations.
The sign of the relationship depends on the weight attribute that is assigned to the arc connecting two elements. An example is provided below.
<calculationArc xlink:type="arc"
xlink:arcrole="http://www.xbrl.org/2003/arcrole/summation-item"
xlink:from="GrossProfit" xlink:to="RevenueTotal"
order="1" weight="1" use="optional"/>
<calculationArc xlink:type="arc"
xlink:arcrole="http://www.xbrl.org/2003/arcrole/summation-item"
xlink:from="GrossProfit" xlink:to="CostOfSales"
order="2" weight="-1" use="optional"/>
The example defines two calculation arcs, providing details of the relations between Gross Profit, Revenue and Cost of Sales. In income statements, Gross Profit is the difference between the other two. Therefore, we assign a weight attribute of "1" to the arc connecting Gross Profit and Revenue, and "-1" to the arc connecting Gross Profit and Cost of Sales.
The reason why there is a difference between the calculation and presentation linkbases is that the total element, which stands for the summation of all the others, usually appears at the bottom in financial statements, whereas in the calculation linkbase it must be placed as the top concept.
Presentation                 Calculation
Assets                       Assets, Total
  Assets, Non-Current          Assets, Non-Current   +1
  Assets, Current              Assets, Current       +1
  Assets, Total
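The calculation column of this comparison could be expressed with arcs analogous to the Gross Profit example above; the element names used here are illustrative:

```xml
<!-- Hedged sketch: both children carry weight="1" because each adds
     to the total. Element names are illustrative. -->
<calculationArc xlink:type="arc"
    xlink:arcrole="http://www.xbrl.org/2003/arcrole/summation-item"
    xlink:from="AssetsTotal" xlink:to="AssetsNonCurrent"
    order="1" weight="1" use="optional"/>
<calculationArc xlink:type="arc"
    xlink:arcrole="http://www.xbrl.org/2003/arcrole/summation-item"
    xlink:from="AssetsTotal" xlink:to="AssetsCurrent"
    order="2" weight="1" use="optional"/>
```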
There are two major rules concerning calculation relations in XBRL.
Firstly, we cannot carry out operations on elements that have different values of the periodType attribute. This is often called the cross-context rule and relates to defining some elements as "For period" (duration) and others as "As of date" (instant). For example, concepts that appear on the Balance Sheet are instant, which means that their value is presented for a specified day, while elements in the Income Statement or Statement of Cash Flows are duration, because they represent actions that took place over a period of time. The problem emerges, for example, in the Statement of Changes in Equity or Movements in Property, Plant and Equipment, where instant elements mix with duration elements. The solution to this problem is the formula linkbase, which will provide taxonomy creators with many more functions than simple addition or subtraction.
Secondly, the double entry accounting rule requires XBRL taxonomy creators to define the credit/debit nature of monetary elements appearing in the Balance Sheets and Income Statements. This rule does not only disallow the addition of elements with opposite balance attributes—they must be subtracted—it also defines whether the numerical value contained within an element should be positive or negative.
Definition Linkbase
This linkbase associates concepts with other concepts using a variety of arc roles to express relations such as is-a, whole-part, etc. Arc roles can be created by those who create XBRL taxonomies or commonly used arc roles can be added to the XBRL Link Role Registry (LRR).
The definition linkbase provides taxonomy creators with the opportunity to define different kinds of relations between elements. There are four standard types of relationships supported by the definition linkbase.
The first one is referred to as general-special. It distinguishes between concepts that have more generic or more specific meaning. For example, ZIP code is the US representation of Postal Code, which is used worldwide. Therefore, to indicate that connection, taxonomy creators define Postal Code as a general term of which ZIP code is a more specialised concept.
The second available relation type is essence-alias. By using it, taxonomy creators are able to indicate that two concepts have similar meaning. For example, some airlines may want to use the term Planes to describe the main component of their PPE, while others would prefer Aircraft. To state that these two mean the same and can be used interchangeably, taxonomy creators may connect them using the "essence-alias" arc role.
The third standard type of relation is called requires-element. As its name indicates, taxonomy builders use it to force instance creators to enter the value of one element if they provide the content of another. For instance, a regulator may want to require disclosures on a particular component of Assets if it appears on the Balance Sheet. To achieve that, the definition linkbase defines a "requires-element" relationship between them (for example, between Property, Plant and Equipment, Net and Property, Plant and Equipment Disclosures).
The fourth relation is similar-tuples. It resembles the "essence-alias" relation but applies to tuples. It connects two tuples that are equivalent in terms of definition (documentation from the label linkbase or references in the reference linkbase) but differ from an XML perspective, i.e. they do not have identical content models (for example, they contain different elements). One reason this type of relation was introduced is the prohibition of schema redefinition, which prevents changes to a tuple's content model.
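As a sketch, definition arcs follow the same XLink pattern as the calculation arcs shown earlier. The arc roles below are the standard ones; the element names are illustrative:

```xml
<!-- Hedged sketch of three of the standard definition relations;
     element names are illustrative. -->
<definitionArc xlink:type="arc"
    xlink:arcrole="http://www.xbrl.org/2003/arcrole/general-special"
    xlink:from="PostalCode" xlink:to="ZipCode" order="1"/>
<definitionArc xlink:type="arc"
    xlink:arcrole="http://www.xbrl.org/2003/arcrole/essence-alias"
    xlink:from="Aircraft" xlink:to="Planes" order="1"/>
<definitionArc xlink:type="arc"
    xlink:arcrole="http://www.xbrl.org/2003/arcrole/requires-element"
    xlink:from="PropertyPlantAndEquipmentNet"
    xlink:to="PropertyPlantAndEquipmentDisclosures" order="1"/>
```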
Presentation Linkbase
This linkbase associates concepts with other concepts so that the resulting relations can guide the creation of a user interface, rendering, or visualization.
Business reports are in general prepared in the form of tables or statements or other structures. The presentation linkbase stores information about relationships between elements in order to properly organize the taxonomy content. This allows the elements to be arranged in a structure that is appropriate to represent the hierarchical relationships in particular business data.
These groupings can be performed in many ways. For example, a typical Balance Sheet contains Assets, Equity and Liabilities. Assets consist of Current Assets and Non-current Assets. Current Assets are split in Inventories, Receivables and so on. The presentation linkbase, using parent-child relations organizes elements in this way and helps users find concepts they are interested in.
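A sketch of the parent-child arcs behind such a grouping might look as follows; the element names are illustrative:

```xml
<!-- Hedged sketch: Assets is the parent of its Current and Non-current
     children; order controls the display sequence. -->
<presentationArc xlink:type="arc"
    xlink:arcrole="http://www.xbrl.org/2003/arcrole/parent-child"
    xlink:from="Assets" xlink:to="CurrentAssets" order="1"/>
<presentationArc xlink:type="arc"
    xlink:arcrole="http://www.xbrl.org/2003/arcrole/parent-child"
    xlink:from="Assets" xlink:to="NonCurrentAssets" order="2"/>
```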
The main drawback of the tree-like (hierarchical) structure of a presentation linkbase is that it only allows flat lists of elements to be presented, while financial statements also contain more sophisticated reports, such as Changes in Equity or Movements in Property, Plant and Equipment. The XBRL Consortium is currently working on rendering solutions that would provide for the automatic creation of such reports.
This is the taxonomy schema of the instance file shown above:
<?xml version="1.0" encoding="utf-8"?>
<schema
xmlns="http://www.w3.org/2001/XMLSchema"
xmlns:xbrli="http://www.xbrl.org/2003/instance"
xmlns:link="http://www.xbrl.org/2003/linkbase"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:ifrs-gp="http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15"
xmlns:ifrs-gp-rol="http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15/roles"
xmlns:samp="http://www.iqinfo.com/xbrl/taxonomy"
targetNamespace="http://www.iqinfo.com/xbrl/taxonomy"
elementFormDefault="qualified"
attributeFormDefault="unqualified">
<annotation>
<appinfo>
<link:linkbaseRef xlink:type='simple'
xlink:href='http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15/ifrs-gp-pre-bs-liquidity-2005-05-15.xml'
xlink:role='http://www.xbrl.org/2003/role/presentationLinkbaseRef'
xlink:arcrole='http://www.w3.org/1999/xlink/properties/linkbase' />
<link:linkbaseRef xlink:type='simple'
xlink:href='http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15/ifrs-gp-pre-is-byNature-2005-05-15.xml'
xlink:role='http://www.xbrl.org/2003/role/presentationLinkbaseRef'
xlink:arcrole='http://www.w3.org/1999/xlink/properties/linkbase' />
<link:linkbaseRef xlink:type='simple'
xlink:href='http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15/ifrs-gp-cal-bs-liquidity-2005-05-15.xml'
xlink:role='http://www.xbrl.org/2003/role/calculationLinkbaseRef'
xlink:arcrole='http://www.w3.org/1999/xlink/properties/linkbase' />
<link:linkbaseRef xlink:type='simple'
xlink:href='http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15/ifrs-gp-cal-is-byNature-2005-05-15.xml'
xlink:role='http://www.xbrl.org/2003/role/calculationLinkbaseRef'
xlink:arcrole='http://www.w3.org/1999/xlink/properties/linkbase' />
</appinfo>
</annotation>
<import namespace="http://www.xbrl.org/2003/instance"
schemaLocation="http://www.xbrl.org/2003/xbrl-instance-2003-12-31.xsd" />
<import namespace="http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15"
schemaLocation="http://xbrl.iasb.org/int/fr/ifrs/gp/2005-05-15/ifrs-gp-2005-05-15.xsd" />
</schema>
XBRL's Global Ledger Framework (XBRL GL) is the only set of taxonomies that is developed and recommended by XBRL International.
XBRL modules
XBRL International has issued and reissued a stability pledge in relation to the core XBRL 2.1 specification. In addition to the core XBRL 2.1 specification, work continues on the development of XBRL modules that define new, compatible functionality.
XBRL Dimensions – This module achieved Recommendation status in 2005. A new edition of the Dimensions 1.0 Specification with errata corrections was issued on 7 September 2009. The Dimensions 1.0 Specification is an optional extension to the XBRL 2.1 Specification that enriches the rules and procedures for constructing dimensional taxonomies and instance documents. It supports the use of XBRL taxonomy linkbases to define additional, structured contextual information for business facts. Each piece of contextual information is referred to as a "dimension". The base XBRL specification essentially defines three dimensions: reporting period, reporting entity (i.e., a company or a division thereof), and a loosely defined reporting scenario, originally intended to distinguish between actual and projected facts. Taxonomies using XBRL Dimensions can define new dimensions, specify the valid values ("domains") for dimensions, designate which dimensions apply to which business concepts through mechanisms called "hypercubes", and relate other taxonomy metadata (labels, presentation information, etc.) to dimensions.
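In an instance document, a dimension typically appears as an explicit member inside a context's segment or scenario. The following is a hedged sketch: the xbrldi namespace is the one defined by Dimensions 1.0, while the entity identifier and the QNames smpl:RegionAxis and smpl:EuropeMember are hypothetical:

```xml
<!-- Hedged sketch: a context tagged with an explicit dimension member.
     The dimension/member QNames and entity identifier are hypothetical. -->
<xbrli:context id="FY2009_Europe"
    xmlns:xbrldi="http://xbrl.org/2006/xbrldi">
  <xbrli:entity>
    <xbrli:identifier scheme="http://www.example.com/entities">SAMPLE</xbrli:identifier>
    <xbrli:segment>
      <xbrldi:explicitMember dimension="smpl:RegionAxis">smpl:EuropeMember</xbrldi:explicitMember>
    </xbrli:segment>
  </xbrli:entity>
  <xbrli:period>
    <xbrli:startDate>2009-01-01</xbrli:startDate>
    <xbrli:endDate>2009-12-31</xbrli:endDate>
  </xbrli:period>
</xbrli:context>
```

Facts reported against this context carry the Europe member of the region dimension in addition to the period and entity.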
XBRL Formula – This module achieved Recommendation status on 22 June 2009. The Formula Specification 1.0 supports the creation of expressions (using XPath) that can be applied to XBRL instances to validate their information or to calculate new XBRL facts in a new instance.
Inline XBRL (or iXBRL) – This module achieved Recommendation status on 20 April 2010. The Inline XBRL Specification defines how XBRL metadata can be embedded within well-formed HTML or XHTML documents, so that data and associated rendering information can be encapsulated within a single document.
XBRL Versioning – This module achieved Recommendation status on 27 February 2013. The specification enables the creation of a versioning report, which the authors of XBRL taxonomies can use to document the changes between two taxonomies. Many large taxonomies (such as the IFRS taxonomy) change every year.
XBRL Table Linkbase – This module allows taxonomy authors to define tabular reporting templates. The Table Linkbase can be used for presentation of XBRL data, and also for data entry, by allowing software to present a template for completion by the user. The Table Linkbase is well-suited to handling large, highly-dimensional reporting templates such as those used for Solvency II reporting to EIOPA, and COREP and FINREP reporting to the EBA.
Extensibility
Besides the creation of additional modules, XBRL International supports several methods for continuing expansion of shared XBRL functionality.
Link Role Registry – This registry, hosted at xbrl.org, collects link roles and arc roles to promote reuse across taxonomies.
Functions Registry – This registry collects XPath functions for reuse in formula linkbases.
Transformation Rules Registry – This registry collects common transforms used to convert human-readable data in Inline XBRL documents (e.g. "1st January 2016") into the formats required by XBRL ("2016-01-01").
iXBRL
iXBRL (Inline XBRL) is a development of XBRL in which the XBRL metadata are embedded in an HTML document, e.g., a published report and accounts. It requires the HTML document to be well-formed but does not otherwise specify the required XML format. Typically, iXBRL is implemented within HTML documents, which are displayed or printed by web browsers without revealing the XBRL metadata inside the document. The specification does, however, provide a normative schema which requires that any schema-valid iXBRL document should be in XHTML format.
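As a minimal sketch of the idea, a fact can be tagged in place within the visible text of an XHTML report. The concept name, context and unit identifiers below are hypothetical:

```xml
<!-- Hedged sketch of in-text tagging with ix:nonFraction.
     smpl:Revenue, FY2016 and GBP are hypothetical identifiers. -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ix="http://www.xbrl.org/2013/inlineXBRL"
      xmlns:smpl="http://www.example.com/xbrl/taxonomy">
  <body>
    <!-- A browser renders only the sentence; an iXBRL processor extracts
         a smpl:Revenue fact of 1,500,000 (1500 scaled by 10^3). -->
    <p>Revenue for the year was
       <ix:nonFraction name="smpl:Revenue" contextRef="FY2016"
           unitRef="GBP" decimals="0" scale="3">1500</ix:nonFraction>
       thousand pounds.</p>
  </body>
</html>
```

A complete Inline XBRL document also carries a hidden header section defining the referenced contexts and units; this fragment shows only the in-text tagging.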
Most iXBRL financial reports are produced in one of two ways:
The system which creates the report formats it directly in iXBRL. In the UK, where all companies are required to file in iXBRL, the main commercial accounting packages all provide iXBRL export of financial reports.
The financial report is produced as a Microsoft Word or Microsoft Excel document, and a "Tagging Program" is used to add the XBRL concept metadata and to export the document as Inline XBRL.
With large and complex financial statements, a single iXBRL file may be too large for a web browser to handle. This happens more often when, as in the UK, the company report, which may contain many graphics, is combined with the accounts in a single iXBRL document. The iXBRL specification allows for a set of iXBRL documents to be treated as a single iXBRL document set.
In the UK, HM Revenue and Customs requires businesses to submit their report and accounts and tax computations in iXBRL format when making their Corporation Tax return. Businesses and their agents can use HMRC's Online Filing software to prepare their report and accounts and tax computations in iXBRL format or they can prepare the iXBRL files themselves and submit them to HMRC.
HMRC's Online Filing software is an example of a program which generates iXBRL from source data. This uses a series of forms in which the key data (which will appear in XBRL tags) are entered in data entry fields. Additional data (the rest of the report and accounts) are entered in text boxes. The program generates the iXBRL report and accounts in a standard sequence of sections and a standard format. All other formatting of material is lost. While the resulting report and accounts meets HMRC's requirements, it is not an attractive document to view or read.
iXBRL is mandated for corporate filings by government agencies in Japan, Denmark and the United Kingdom. In the United Kingdom, Companies House also accepts iXBRL. Although iXBRL is not mandated by Companies House, it makes up the majority of the filings received each year.
In June 2016, the SEC began allowing firms to file using iXBRL in the required HTML filings. In June 2018, the SEC announced plans to move to iXBRL, removing the requirement to file separate HTML and XBRL documents.
Since January 2021, all EU listed companies that prepare annual financial reports under IFRS have been required to publish these reports in Inline XBRL format, as part of the European Single Electronic Format (ESEF) initiative.
History
XBRL's beginning, in 1998, can be traced to the initial efforts of one person, Charles Hoffman, a Certified Public Accountant from Tacoma, Washington. The American Institute of Certified Public Accountants (AICPA) was also instrumental in pulling together what eventually became XBRL International.
The specification went through several versions prior to XBRL v2.1 which was published in 2003.
1.0 – Published on July 31, 2000, this version was based on DTDs. It expressed the difference between data exchange in instance documents and metadata exchange in taxonomy documents. Taxonomies were expressed as XML Schema files, but these were not used for instance validation.
2.0 – Published December 14, 2001, this version introduced use of XML Schema substitution groups as a way of allowing schema validation of instances. Concept relations were broken out into separate XLink-based linkbases. Context data in the instance was collected into a separate element.
2.1 – Published December 31, 2003, this version tightened the definition of terms significantly, allowing for the introduction of a conformance suite.
XBRL v2.1 has remained stable since publication, and has been updated only for errata corrections. The standard has evolved significantly through the development of additional XBRL modules. Details of all versions of the specification and associated modules can be found on the XBRL Specification Subsite.
Lack of accuracy
In April 2009, a study by the Department of Accounting at North Carolina State University's College of Management evaluated the accuracy of XBRL filings for 22 companies participating in the SEC's voluntary filing program in 2006. A comparison of the XBRL filings to the companies' Forms 10-K revealed multiple errors in signage, amounts, labeling, and classification. The study considers these errors serious, since XBRL data are computer-readable and users will not visually recognize the errors, especially when using XBRL analysis software.
A different conclusion was reached by Du et al., 2013 who argued that companies are going through a learning curve and are steadily improving.
In December 2017, Charlie Hoffman stated that there is a 10.2% chance that an XBRL-based public company financial report has errors in its primary financial statements. Given the current number of errors and the pace at which they are being corrected, Hoffman predicted that the information quality of XBRL-based public company financial reports would be very good within about five years.
Impact of XBRL
Debreceny, Roger S., et al. (2005) evaluated the impact of financial reporting in XBRL on the SEC's EDGAR system.
A tool for converting the consolidated balance sheet, income statement, and statement of cash flows into XBRL-tagged format.
Corporate governance is significantly and positively associated with a firm's decision to be an early and voluntary filer of financial information in XBRL format.
Impact on financial reporting in the European Union
On 18 December 2017, the European Securities and Markets Authority published the final draft Regulatory Technical Standards (RTS) setting out the new European Single Electronic Format (ESEF). Under the draft regulation, starting in 2020, financial reports containing IFRS consolidated financial statements shall be labelled with XBRL tags.
See also
XBRL assurance
References
External links
The XBRL Specification subsite - information for developers, with direct access to specifications, conformance suites and FAQ
- the US jurisdiction of XBRL International, the national consortium for standardized business reporting, creator of the initial XBRL US GAAP Taxonomy, under contract with the U.S. Securities and Exchange Commission.
United Kingdom companies accounts search, full access to all Inline XBRL accounts filed at Companies House in the UK
XBRLS: how a simpler XBRL can make a better XBRL
Financial metadata
XML-based standards
Accounting terminology | XBRL | Technology | 8,189 |
75,855,185 | https://en.wikipedia.org/wiki/Rabbit%20r1 | The Rabbit r1 is an Android-powered, ChatGPT-based personal assistant device developed by tech startup Rabbit Inc and co-designed by Teenage Engineering. It is designed to perform various functions, including web searches and media control, using voice commands and touch interaction, allowing AI to be used to provide services commonly associated with smartphones, such as ordering food delivery.
Rabbit Inc was started by Jesse Lyu Cheng.
Hardware
Display: A 2.88-inch touchscreen for interactive user input.
Input: push-to-talk button to activate voice commands; scroll wheel; Gyroscope; Magnetometer; Accelerometer; GPS.
Camera: 8 MP single camera, with a resolution of 3264x2448, allowing the connected external AI to use computer vision.
Audio: Equipped with a speaker and dual microphones for audio interaction.
Connectivity: Supports Wi-Fi and cellular connections via a SIM card slot to access internet services.
Processor: Runs on a 2.3GHz MediaTek Helio P35 processor.
Memory: Contains 4GB of RAM for operational tasks.
Storage: Offers 128GB of internal storage for data.
Ports: Utilizes a USB-C port for charging and data connections.
Software
The Rabbit r1 runs on Rabbit OS, based on the Android Open Source Project (AOSP), specifically version 13. Lyu has claimed that Rabbit OS runs with a "very bespoke AOSP."
The device employs a large action model (LAM) designed to perform actions and assist with tasks like web searches, streaming music, and transportation services. Perplexity.ai, an AI search engine, is one of the large language models (LLMs) used to respond to user queries and execute commands. The personal assistant is also capable of various actions such as ordering a cab or playing music from Spotify. This is done through the "connections" system on the account-management site; the assistant calls these connections "rabbits".
Rabbit issued 15 software updates within the first four months after releasing r1. On July 11, 2024, Rabbit launched the "beta rabbit" advanced search and conversation assistant to "give more thoughtful responses to complex questions that require multiple steps of research and deeper reasoning".
Reception
Funding
Rabbit raised $20 million in funding from Khosla Ventures, Synergis Capital and Kakao Investment in October 2023. The company announced an additional $10 million in funding in December 2023.
Sales
Following its announcement at the 2024 Consumer Electronics Show, 130,000 units were sold. On August 13, 2024, Rabbit announced that sales of r1 had expanded to the entire European Union (except Malta) and United Kingdom. On August 21, 2024, sales of r1 expanded to Singapore.
Reviews
The r1 was met with strong criticism immediately after Rabbit began shipping the device. Some reviews questioned what the device was able to do that a smartphone could not, while comparing it to the similar Humane Ai Pin. YouTuber Marques Brownlee called the device "barely reviewable". Android Authority's Mishaal Rahman managed to install the Rabbit r1's software on a Pixel 6a smartphone after a tipster shared an APK file. The Verge echoed the claims made by Rahman. In response, Lyu published statements confirming its use of Android but denying that the r1 is an Android app. Mashable called its Vision features impressive, but said that "these praise-worthy features are overshadowed by buggy performance". Ars Technica wrote a blog post claiming "the company is blocking access from bootleg APKs". TechCrunch gave a slightly more positive review, calling the device a "fun peep at a possible future", but could not "advise anyone to buy one now."
Shortly after the launch of r1, Rabbit began a weekly cadence of software updates to address much of the criticism from the early reviews, including "battery and GPS performance, time zone selection, and more". Digital Trends said the Magic Camera feature "takes the most mundane, ordinary, and badly composed photos and makes something fun and eye-catching from them". Mashable said the "beta rabbit" feature "makes Rabbit R1 more conversational and intelligent".
Controversies
GAMA project
Rabbit Inc has garnered attention due to allegations surrounding its funding and the company's past projects. The company came under scrutiny when Stephen Findeisen, known as Coffeezilla on YouTube, published a video in May 2024, alleging that Rabbit Incorporation was "built on a scam". Rabbit Incorporation, initially named Cyber Manufacturing Co, rebranded just two months before launching the Rabbit R1. The company, under its former name, raised $6 million in November 2021 for a project called GAMA, described as a "Next Generation NFT Project." Jesse Lyu, the CEO of Rabbit Incorporation, referred to GAMA as a "fun little project."
Coffeezilla, who investigates influencer scams, highlighted old Clubhouse recordings of Jesse Lyu discussing the GAMA project. In these recordings, Lyu emphasized the substantial funding behind GAMA and its potential to be a revolutionary, carbon-negative cryptocurrency. Coffeezilla questioned the whereabouts of the funds raised for GAMA, estimating that approximately $1 million in refunds to investors remained unresolved. He suggested that the rebranding to Rabbit Incorporation and the shift to developing the Rabbit R1 were attempts to divert from the GAMA project's issues.
In response to Coffeezilla's inquiries, Rabbit Incorporation stated that the $6 million raised was indeed used for the GAMA project. The company said that NFTs cannot be refunded unless the owner agrees to "burn" them on the blockchain. Rabbit Incorporation also said that the GAMA project was open-sourced and returned to the community, aligning with community feedback. They also mentioned that efforts to buy back NFTs were made to counteract malicious trading and maintain market stability.
Security
In June 2024, Engadget reported that the Rabbitude team, a community reverse-engineering project, had gained access to the r1's codebase, revealing that the r1's software contained several hardcoded API keys for ElevenLabs, Microsoft Azure, Yelp, and Google Maps, potentially allowing unauthorized access to r1 responses, including those containing users' personal information. Rabbit immediately began revoking and rotating those secrets and confirmed that the code had been leaked by an employee who had "been terminated and remains under investigation".
In July 2024, the company revealed that all user chats and device pairing data were logged on the r1 with no ability to delete them. This meant that lost or stolen devices could be used to extract user data. The company stated that it addressed the issue by introducing a factory reset option and limited the data stored on the r1, as well as preventing paired devices from reading data.
References
Consumer electronics
Machine learning
Applications of artificial intelligence
Android (operating system) devices | Rabbit r1 | Engineering | 1,438 |
10,426,558 | https://en.wikipedia.org/wiki/Ribose%205-phosphate | Ribose 5-phosphate (R5P) is both a product and an intermediate of the pentose phosphate pathway. The last step of the oxidative reactions in the pentose phosphate pathway is the production of ribulose 5-phosphate. Depending on the body's state, ribulose 5-phosphate can reversibly isomerize to ribose 5-phosphate. Ribulose 5-phosphate can alternatively undergo a series of isomerizations as well as transaldolations and transketolations that result in the production of other pentose phosphates as well as fructose 6-phosphate and glyceraldehyde 3-phosphate (both intermediates in glycolysis).
The enzyme ribose-phosphate diphosphokinase converts ribose-5-phosphate into phosphoribosyl pyrophosphate.
Structure
R5P consists of a five-carbon sugar, ribose, and a phosphate group at the five-position carbon. It can exist in open chain form or in furanose form. The furanose form is most commonly referred to as ribose 5-phosphoric acid.
Biosynthesis
The formation of R5P is highly dependent on the cell growth and the need for NADPH (Nicotinamide adenine dinucleotide phosphate), R5P, and ATP (Adenosine triphosphate). Formation of each molecule is controlled by the flow of glucose 6-phosphate (G6P) in two different metabolic pathways: the pentose phosphate pathway and glycolysis. The relationship between the two pathways can be examined through different metabolic situations.
Pentose phosphate pathway
R5P is produced in the pentose phosphate pathway in all organisms. The pentose phosphate pathway (PPP) is a metabolic pathway that runs parallel to glycolysis. It is a crucial source of NADPH for reductive biosynthesis (e.g. fatty acid synthesis) and of pentose sugars. The pathway consists of two phases: an oxidative phase that generates NADPH and a non-oxidative phase that involves the interconversion of sugars. In the oxidative phase of the PPP, two molecules of NADP+ are reduced to NADPH through the conversion of G6P to ribulose 5-phosphate (Ru5P). In the non-oxidative phase of the PPP, Ru5P can be converted to R5P through catalysis by the enzyme ribose-5-phosphate isomerase.
When demand for NADPH and R5P is balanced, G6P forms one Ru5P molecule through the PPP, generating two NADPH molecules and one R5P molecule.
Glycolysis
When more R5P is needed than NADPH, R5P can be formed through glycolytic intermediates. Glucose 6-phosphate is converted to fructose 6-phosphate (F6P) and glyceraldehyde 3-phosphate (G3P) during glycolysis. Transketolase and transaldolase convert two molecules of F6P and one molecule of G3P to three molecules of R5P. During rapid cell growth, higher quantities of R5P and NADPH are needed for nucleotide and fatty acid synthesis, respectively. Glycolytic intermediates can be diverted toward the non-oxidative phase of PPP by the expression of the gene for pyruvate kinase isozyme, PKM. PKM creates a bottleneck in the glycolytic pathway, allowing intermediates to be utilized by the PPP to synthesize NADPH and R5P. This process is further enabled by triosephosphate isomerase inhibition by phosphoenolpyruvate, the PKM substrate.
Function
R5P and its derivatives serve as precursors to many biomolecules, including DNA, RNA, ATP, coenzyme A, FAD (Flavin adenine dinucleotide), and histidine.
Nucleotide biosynthesis
Nucleotides serve as the building blocks for nucleic acids, DNA and RNA. They are composed of a nitrogenous base, a pentose sugar, and at least one phosphate group. Nucleotides contain either a purine or a pyrimidine nitrogenous base. All intermediates in purine biosynthesis are constructed on a R5P "scaffold". R5P also serves as an important precursor to pyrimidine ribonucleotide synthesis.
During nucleotide biosynthesis, R5P undergoes activation by ribose-phosphate diphosphokinase (PRPS1) to form phosphoribosyl pyrophosphate (PRPP). Formation of PRPP is essential for both the de novo synthesis of purines and for the purine salvage pathway. The de novo synthesis pathway begins with the activation of R5P to PRPP, which is later catalyzed to become phosphoribosylamine, a nucleotide precursor. During the purine salvage pathway, phosphoribosyltransferases add PRPP to bases.
PRPP also plays an important role in pyrimidine ribonucleotide synthesis. During the fifth step of pyrimidine nucleotide synthesis, PRPP covalently links to orotate at the one-position carbon of the ribose unit. The reaction is catalyzed by orotate phosphoribosyltransferase (PRPP transferase), yielding orotidine monophosphate (OMP).
Histidine biosynthesis
Histidine is an essential amino acid that is not synthesized de novo in humans. Like nucleotide biosynthesis, histidine biosynthesis is initiated by the conversion of R5P to PRPP. The first step of histidine biosynthesis is the condensation of ATP and PRPP by ATP-phosphoribosyltransferase, the rate-determining enzyme. Histidine biosynthesis is carefully regulated by feedback inhibition.
Other functions
R5P can be converted to adenosine diphosphate ribose, which binds and activates the TRPM2 ion channel. The reaction is catalyzed by ribose-5-phosphate adenylyltransferase.
Disease relevance
Diseases have been linked to R5P imbalances in cells. Cancers and tumors show upregulated production of R5P, correlated with increased RNA and DNA synthesis. Ribose-5-phosphate isomerase deficiency, the rarest disease in the world, is also linked to an imbalance of R5P. Although the molecular pathology of the disease is poorly understood, hypotheses include decreased RNA synthesis.
Another disease linked to R5P is gout. Higher levels of G6P lead to a buildup of glycolytic intermediates, that are diverted to R5P production. R5P converts to PRPP, which forces an overproduction of purines, leading to uric acid build up.
Accumulation of PRPP is found in Lesch-Nyhan Syndrome. The build up is caused by a deficiency of the enzyme hypoxanthine-guanine phosphoribosyltransferase (HGPRT), which leads to decreased nucleotide synthesis and an increase of uric acid production.
Superactivity in PRPS1, the enzyme that catalyzes the R5P to PRPP, has also been linked to gout, as well as neurodevelopmental impairment and sensorineural deafness.
References
Organophosphates
Pentose phosphate pathway
Monosaccharide derivatives | Ribose 5-phosphate | Chemistry | 1,617 |
47,386,145 | https://en.wikipedia.org/wiki/Vomocytosis | Vomocytosis (sometimes called non-lytic expulsion) is the cellular process by which phagocytes expel live organisms that they have engulfed without destroying the organism. Vomocytosis is one of many methods used by cells to expel internal materials into their external environment, yet it is distinct in that both the engulfed organism and host cell remain undamaged by expulsion. As engulfed organisms are released without being destroyed, vomocytosis has been hypothesized to be utilized by pathogens as an escape mechanism from the immune system. The exact mechanisms, as well as the repertoire of cells that utilize this mechanism, are currently unknown, yet interest in this unique cellular process is driving continued research with the hopes of elucidating these unknowns.
Discovery
Vomocytosis was first reported in 2006 by two groups, working simultaneously in the UK and the US, based on time-lapse microscopy footage characterising the interaction between macrophages and the human fungal pathogen Cryptococcus neoformans. Subsequently, this process has also been seen with other fungal pathogens such as Candida albicans and Candida krusei. It has also been speculated that the process may be related to the expulsion of bacterial pathogens such as Mycobacterium marinum from host cells. Vomocytosis has been observed in phagocytic cells from mice, humans and birds, as well as being directly observed in zebrafish and indirectly detected (via flow cytometry) in mice. Amoebae exhibit a similar process to vomocytosis whereby phagosomal material that cannot be digested is exocytosed. Cryptococci are exocytosed from amoebae via this mechanism but inhibition of the constitutive pathway demonstrated that cryptococci could also be expelled via vomocytosis.
Mechanism
A full understanding of the mechanisms involved in vomocytosis is not yet available, but advances in research have produced initial mechanistic descriptions and identified crucial steps in the process. Research has shown vomocytosis does not occur when pathogens are dead or when engulfed materials are non-living, indicating the survival of phagosomal cargo may be crucial for triggering or enhancing vomocytosis. Additionally, phagosomal pH may play an important role in vomocytosis efficacy, as research has demonstrated that vomocytosis rates drop as the phagosome becomes more acidic and that vomocytosis increases after the addition of weak bases to phagocytes. Membrane composition and cellular state are also implicated: vomocytosis has been shown to decrease with increased membrane permeability and to increase in states of autophagy. Furthermore, inflammatory signals such as Type I interferons, which are produced in response to viral infections, are known to enhance vomocytosis. The impacts of these factors on inducing vomocytosis are still being elaborated, and they likely vary with other, as yet unknown, external and internal factors.
Just as in standard exocytosis, rearrangements of the actin cytoskeleton within the host cell are crucial for allowing vomocytosis to occur. In contrast to standard exocytosis, the engulfed pathogen is not lysed by internal components of the host cell, and the vesicle is brought close to the cellular membrane where it can fuse and release the pathogen cargo. Annexin A2, a membrane-bound protein, helps regulate vomocytosis and promote the fusing of vesicles to the plasma membranes. In annexin A2 deficient cell lines, rates of vomocytosis were decreased. Furthermore, screens of macrophage kinase inhibitors revealed signaling pathways linked to vomocytosis. ERK5, involved in the MAPK signaling pathway that communicates surface signals to cellular DNA, was shown to suppress vomocytosis. Additional signaling pathways involved in vomocytosis have yet to be determined. Furthermore, different morphologies of vomocytosis have been documented and it is possible that the underlying cellular mechanism may vary between them.
Biological significance
Research has been devoted to understanding the mechanisms and importance of vomocytosis as it is hypothesized to be linked to many significant biological processes. Vomocytosis plays a role in lateral transfer, a process by which cells transfer engulfed cargo to a neighboring recipient cell, as initial cells expel their cargo undamaged so it can be taken up by recipient cells. Additionally, vomocytosis is hypothesized to be used as an escape mechanism by pathogens, as it allows them to evade degradation by macrophages. Since there is no damage to host cells or pathogens during vomocytosis, the immune system is not triggered, which allows for further potential evasion from hosts. More research is necessary to determine whether vomocytosis is initiated by engulfed pathogens for this purpose or by host cells, with escape simply an unintended benefit to pathogens. An additional hypothesis is that vomocytosis may enhance pathogenesis or the spread of a pathogen, as pathogens are engulfed by macrophages and later expelled in locations that may differ from the site of acute infection. Enhancing our understanding of host-pathogen interactions will clarify vomocytosis's role in infection progression. Lastly, vomocytosis has been implicated in tumor response, as tumor-associated macrophages (TAMs) are speculated to be able to modulate the tumor microenvironment (TME) via vomocytosis. Better understanding of the mechanisms that induce and regulate vomocytosis will enhance our knowledge of host-pathogen and host-self interactions, allowing for advances in our ability to respond to infections and tumors.
References
Articles containing video clips
Immunology
Phagocytes
Microbiology
In mathematics, and particularly in number theory, N is a primary pseudoperfect number if it satisfies the Egyptian fraction equation

1/N + Σ_{p | N} 1/p = 1,

where the sum is over only the prime divisors of N.
Properties
Equivalently, N is a primary pseudoperfect number if it satisfies

1 + Σ_{p | N} N/p = N.

Except for the primary pseudoperfect number N = 2, this expression gives a representation for N as the sum of distinct divisors of N. Therefore, each primary pseudoperfect number N (except N = 2) is also pseudoperfect.
The eight known primary pseudoperfect numbers are
2, 6, 42, 1806, 47058, 2214502422, 52495396602, 8490421583559688410706771261086 .
The first four of these numbers are one less than the corresponding numbers in Sylvester's sequence, but then the two sequences diverge.
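The defining condition is straightforward to verify by machine. The sketch below (function names are my own) checks the equivalent integer form 1 + Σ_{p | N} N/p = N over the distinct prime divisors, using exact integer arithmetic to avoid floating-point error:

```python
def distinct_prime_factors(n):
    """Distinct prime factors of n, found by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            factors.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_primary_pseudoperfect(n):
    """True when 1 + sum of n/p over the distinct primes p dividing n equals n."""
    if n < 2:
        return False
    return 1 + sum(n // p for p in distinct_prime_factors(n)) == n

# The first six known primary pseudoperfect numbers all pass:
known = [2, 6, 42, 1806, 47058, 2214502422]
assert all(is_primary_pseudoperfect(n) for n in known)
# A nearby non-example:
assert not is_primary_pseudoperfect(12)
```

For 47058, for instance, the distinct prime factors are 2, 3, 11, 23, and 31, and 1 + 23529 + 15686 + 4278 + 2046 + 1518 = 47058.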
It is unknown whether there are infinitely many primary pseudoperfect numbers, or whether there are any odd primary pseudoperfect numbers.
The prime factors of primary pseudoperfect numbers sometimes may provide solutions to Znám's problem, in which all elements of the solution set are prime. For instance, the prime factors of the primary pseudoperfect number 47058 form the solution set {2,3,11,23,31} to Znám's problem. However, the smaller primary pseudoperfect numbers 2, 6, 42, and 1806 do not correspond to solutions to Znám's problem in this way, as their sets of prime factors violate the requirement that no number in the set can equal one plus the product of the other numbers. Anne (1998) observes that there is exactly one solution set of this type that has k primes in it, for each k ≤ 8, and conjectures that the same is true for larger k.
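This connection can also be checked directly: a set solves Znám's problem when every element is a proper divisor of one plus the product of the other elements. A minimal sketch (the helper name is mine):

```python
from math import prod

def is_znam_solution(s):
    """Each element must *properly* divide (product of the others) + 1."""
    total = prod(s)
    return all((total // p + 1) % p == 0 and total // p + 1 != p for p in s)

# The prime factors of 47058 form a solution set...
assert is_znam_solution([2, 3, 11, 23, 31])
# ...but the prime factors of 1806 do not: 43 equals 2*3*7 + 1 exactly,
# violating the properness requirement described above.
assert not is_znam_solution([2, 3, 7, 43])
```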
If a primary pseudoperfect number N is one less than a prime number, then N × (N + 1) is also primary pseudoperfect. For instance, 47058 is primary pseudoperfect, and 47059 is prime, so 47058 × 47059 = 2214502422 is also primary pseudoperfect.
History
Primary pseudoperfect numbers were first investigated and named by Butske, Jaje, and Mayernik (2000). Using computational search techniques, they proved the remarkable result that for each positive integer r up to 8, there exists exactly one primary pseudoperfect number with precisely r (distinct) prime factors, namely, the rth known primary pseudoperfect number. Those with 2 ≤ r ≤ 8, when reduced modulo 288, form the arithmetic progression 6, 42, 78, 114, 150, 186, 222, as was observed by Sondow and MacMillan (2017).
See also
Giuga number
References
.
.
.
External links
Integer sequences
Egyptian fractions
The Hammersley–Clifford theorem is a result in probability theory, mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution (of events in a probability space) can be represented as events generated by a Markov network (also known as a Markov random field). It is the fundamental theorem of random fields. It states that a probability distribution that has a strictly positive mass or density satisfies one of the Markov properties with respect to an undirected graph G if and only if it is a Gibbs random field, that is, its density can be factorized over the cliques (or complete subgraphs) of the graph.
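In symbols, the Gibbs factorization referred to in the theorem can be written as follows (notation is mine; cl(G) denotes the set of cliques of G):

```latex
P(X = x) \;=\; \frac{1}{Z}\prod_{C \in \operatorname{cl}(G)} \varphi_C(x_C),
\qquad
Z \;=\; \sum_{x} \prod_{C \in \operatorname{cl}(G)} \varphi_C(x_C),
```

where each clique potential φ_C is a strictly positive function depending only on the coordinates x_C of the variables in clique C, and Z is the normalizing constant.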
The relationship between Markov and Gibbs random fields was initiated by Roland Dobrushin and Frank Spitzer in the context of statistical mechanics. The theorem is named after John Hammersley and Peter Clifford, who proved the equivalence in an unpublished paper in 1971. Simpler proofs using the inclusion–exclusion principle were given independently by Geoffrey Grimmett, Preston and Sherman in 1973, with a further proof by Julian Besag in 1974.
Proof outline
It is a trivial matter to show that a Gibbs random field satisfies every Markov property. As an example of this fact, see the following:
In the image to the right, a Gibbs random field over the provided graph has the form . If variables and are fixed, then the global Markov property requires that: (see conditional independence), since forms a barrier between and .
With and constant, where and . This implies that .
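The easy direction can also be checked numerically on a toy example. The sketch below (illustrative only; the potential tables are arbitrary positive values, not taken from the source) builds a Gibbs random field on the chain graph A–B–C, whose cliques are {A, B} and {B, C}, and verifies the conditional independence of A and C given B:

```python
import itertools

# Cliques of the chain A - B - C are {A, B} and {B, C}.
# Arbitrary strictly positive clique potentials over binary variables:
phi_ab = {(a, b): 1.0 + 2 * a + b + a * b for a in (0, 1) for b in (0, 1)}
phi_bc = {(b, c): 2.0 + b + 3 * c for b in (0, 1) for c in (0, 1)}

# Gibbs random field: p(a, b, c) = phi_ab(a, b) * phi_bc(b, c) / Z
weights = {(a, b, c): phi_ab[(a, b)] * phi_bc[(b, c)]
           for a, b, c in itertools.product((0, 1), repeat=3)}
Z = sum(weights.values())
p = {state: w / Z for state, w in weights.items()}

def marginal(fixed):
    """Sum p over all states agreeing with the {index: value} constraints."""
    return sum(v for k, v in p.items() if all(k[i] == x for i, x in fixed.items()))

def markov_holds():
    """Check p(a, c | b) == p(a | b) * p(c | b) for every assignment."""
    for a, b, c in itertools.product((0, 1), repeat=3):
        p_b = marginal({1: b})
        lhs = p[(a, b, c)] / p_b
        rhs = (marginal({0: a, 1: b}) / p_b) * (marginal({1: b, 2: c}) / p_b)
        if abs(lhs - rhs) > 1e-12:
            return False
    return True

assert markov_holds()
```

Because B separates A from C in the chain, any choice of positive potentials yields the same conditional-independence result.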
To establish that every positive probability distribution that satisfies the local Markov property is also a Gibbs random field, the following lemma, which provides a means for combining different factorizations, needs to be proved:
Lemma 1
Let denote the set of all random variables under consideration, and let and denote arbitrary sets of variables. (Here, given an arbitrary set of variables , will also denote an arbitrary assignment to the variables from .)
If
for functions and , then there exist functions and such that
In other words, provides a template for further factorization of .
In order to use as a template to further factorize , all variables outside of need to be fixed. To this end, let be an arbitrary fixed assignment to the variables from (the variables not in ). For an arbitrary set of variables , let denote the assignment restricted to the variables from (the variables from , excluding the variables from ).
Moreover, to factorize only , the other factors need to be rendered moot for the variables from . To do this, the factorization
will be re-expressed as
For each : is where all variables outside of have been fixed to the values prescribed by .
Let
and
for each so
What is most important is that when the values assigned to do not conflict with the values prescribed by , making "disappear" when all variables not in are fixed to the values from .
Fixing all variables not in to the values from gives
Since ,
Letting
gives:
which finally gives:
Lemma 1 provides a means of combining two different factorizations of . The local Markov property implies that for any random variable , that there exists factors and such that:
where are the neighbors of node . Applying Lemma 1 repeatedly eventually factors into a product of clique potentials (see the image on the right).
End of Proof
See also
Markov random field
Conditional random field
Notes
Further reading
Probability theorems
Theorems in statistics
Markov networks
The abyssal zone or abyssopelagic zone is a layer of the pelagic zone of the ocean. The word abyss comes from a Greek word meaning "bottomless". At depths of , this zone remains in perpetual darkness. It covers 83% of the total area of the ocean and 60% of Earth's surface. The abyssal zone has temperatures around through the large majority of its mass. The water pressure can reach up to .
As there is no light, photosynthesis cannot occur, and there are no plants producing molecular oxygen (O2), which instead primarily comes from ice that had melted long ago from the polar regions. The water along the seafloor of this zone is largely devoid of molecular oxygen, resulting in a death trap for organisms unable to quickly return to the oxygen-enriched water above or to survive in the low-oxygen environment. This region also contains a much higher concentration of nutrient salts, like nitrogen, phosphorus, and silica, due to the large amount of dead organic material that drifts down from the ocean zones above and decomposes.
The region below the abyssal zone is the sparsely inhabited hadal zone. The region above is the bathyal zone.
Trenches
The deep trenches or fissures that plunge down thousands of meters below the ocean floor (for example, oceanic trenches such as the Mariana Trench in the Pacific) are almost unexplored. Previously, only the bathyscaphe Trieste, the remotely operated submarine Kaikō, and the Nereus had been able to descend to these depths. However, as of March 25, 2012, one vehicle, the Deepsea Challenger, had penetrated to a depth of 10,898 meters (35,756 ft).
Ecosystem
The relative sparsity of primary producers means that the majority of organisms living in the abyssal zone depend on the marine snow that falls from oceanic layers above. The biomass of the abyssal zone actually increases near the seafloor as most of the decomposing material and decomposers rest on the seabed.
The composition of the abyssal plain depends on the depth of the sea floor. Above 4000 meters the seafloor usually consists of calcareous shells of foraminifera, zooplankton, and phytoplankton. At depths greater than 4000 meters shells dissolve, leaving behind a seafloor of brown clay and silica from dead zooplankton and phytoplankton. Chemosynthetic bacteria support large and diverse communities near hydrothermal vents, filling a similar role in these ecosystems as plants do in the sunlit regions above.
A new insight into the complexity of the abyssal environment has been provided by a team of researchers from the Scottish Association for Marine Science. They found that manganese nodules on the deep sea floor produce free oxygen from water molecules.
The manganese nodules act as a kind of battery, as they contain different metals, and they release oxygen into the environment. Because it was previously thought that oxygen could only be produced with light, through photosynthesis by organisms such as plants and algae, this discovery of "dark oxygen" (oxygen produced without light) can be seen as a scientific breakthrough.
Biological adaptations
Organisms that live at this depth have had to evolve to overcome challenges provided by the abyssal zone. Fish and invertebrates had to evolve to withstand the sheer cold and intense pressure found at this level. Not only did they have to find ways to hunt and survive in constant darkness, but they also had to thrive in an ecosystem that has less oxygen and biomass, energy sources and prey, than the upper zones. To survive in these conditions, many fish and other organisms developed a much slower metabolism, and require much less oxygen than those in upper zones. Many animals also move very slowly to conserve energy. Their reproduction rates are also very slow, to decrease competition and conserve energy. Animals here typically have flexible stomachs and mouths, so that when scarce prey are found they can consume as many as possible.
Other challenges faced by life in the abyssal zone are the pressure and darkness caused by the zone's depth. Many organisms living in this zone have evolved to minimize internal air spaces, such as swim bladders. This adaptation helps to protect them from the extreme pressure, which can reach around 75 MPa (11,000 psi). The absence of light also spawned many different adaptations, such as having large eyes and the ability to produce their own light (bioluminescence). Large eyes allow the detection and use of any light available, no matter how faint. Commonly, animals in the abyssal zone are bioluminescent and produce blue light, because blue light is attenuated less over distance and so travels farther through water than other wavelengths. Due to this lack of light, complex patterns and bright colors are not needed. Most fish species have evolved to be transparent, red, or black so that they better blend in with the darkness and do not waste energy on developing and maintaining bright or complex patterns.
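The quoted pressure can be sanity-checked with the hydrostatic relation p ≈ ρgh. The sketch below assumes a constant seawater density of about 1025 kg/m³ and ignores compressibility and surface pressure, so it is a rough estimate only:

```python
RHO_SEAWATER = 1025.0  # kg/m^3 (assumed constant with depth)
G = 9.81               # m/s^2

def pressure_mpa(depth_m):
    """Approximate hydrostatic pressure at a given depth, in megapascals."""
    return RHO_SEAWATER * G * depth_m / 1e6

for depth in (4000, 6000, 7500):
    print(f"{depth} m: ~{pressure_mpa(depth):.0f} MPa")
```

On this estimate, about 75 MPa corresponds to roughly 7.5 km of water, near the lower boundary of the zone.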
Animals
The abyssal zone is made up of many different types of organisms, including microorganisms, crustaceans, molluscs (bivalves, snails, and cephalopods), different classes of fishes, and possibly some animals that have yet to be discovered. Most of the fish species in this zone are described as demersal or benthopelagic fishes. Demersal fish are fish whose habitats are on or near (typically less than five meters from) the seafloor. Most fish species fit into that classification, because the seafloor contains most of the abyssal zone's nutrients; therefore, the most complex food web or greatest biomass would be in this region of the zone.
Organisms in the abyssal zone rely on the natural processes of higher ocean layers. When animals from higher ocean levels die, their carcasses occasionally drift down to the abyssal zone, where organisms in the deep can feed on them. When a whale carcass falls down to the abyssal zone, this is called a whale fall. The carcass of the whale can create complex ecosystems for organisms in the depths.
Benthic organisms in the abyssal zone would need to have evolved morphological traits that could either keep them out of oxygen-depleted water above the sea floor or enable them to extract oxygen from the water above, while also allowing the animal access to the seafloor and the nutrients located there. There are also animals that spend their time in the upper portion of the abyssal zone, some of which even occasionally spend time in the zone directly above, the bathyal zone. While there are a number of different fish species representing many different groups and classes, like Actinopterygii (ray-finned fish), there are no known members of the class Chondrichthyes (animals such as sharks, rays, and chimaeras) that make the abyssal zone their primary or constant habitat. Whether this is due to the limited resources, energy availability, or other physiological constraints is unknown. Most Chondrichthyes species only go as deep as the bathyal zone.
Creatures that live in the abyssal zone include:
Tripod fish (Bathypterois grallator): their habitat is along the ocean floor, usually around 4,720 m below sea level. Their pelvic fins and caudal fin have long bony rays protruding from them. They face the current while standing still on their long rays. Once they sense food nearby, they use their large pectoral fins to hit the unsuspecting prey towards their mouth. Each member of this species has both male and female reproductive organs so that if a mate cannot be found, they can self-fertilize.
Dumbo octopus: this octopus usually lives at a depth between 1,000 and 7,000 meters, deeper than any other known octopus. They use the fins on top of their head, which look like flapping ears, to hover over the sea floor looking for food. They use their arms to help change directions or crawl along the seafloor. To combat the intense pressure of the abyssal zone, this octopus species lost its ink sac during evolution. They also use their strand-like structured suction cups to help detect predators, food, and other aspects of their environment.
Cusk eel (genus Bassozetus): there are no known fish that live at depths greater than the cusk eel. The depth of the cusk eel habitat can be as great as 8,370 meters below sea level. This animal's ventral fins are specialized forked barbel-like organs that act as sensory organs. Cusk eels produce sounds to mate. Male cusk eels have two pairs of sonic muscles, while female cusk eels have three.
Abyssal grenadier: this resident of the abyssal zone is known to live at depths ranging from 800 and 4,000 meters. It has extremely large eyes, but a small mouth. It is thought to be a semelparous species, meaning it only reproduces once and then dies. This is seen as a way for the organism to conserve energy and have a higher chance of having some healthy strong children. This reproductive strategy could be very useful in low energy environments such as the abyssal zone.
Pseudoliparis swirei: the Mariana snailfish, or Mariana hadal snailfish, is a species of snailfish found at hadal depths in the Mariana Trench in the western Pacific Ocean. It is known from a depth range of 6,198–8,076 m (20,335–26,496 ft), including a capture at 7,966 m (26,135 ft), which is possibly the record for a fish caught on the seafloor.
Environmental concerns
Climate change has had negative effects on the abyssal zone. Due to the zone's depth, increasing global temperatures do not affect it as quickly or drastically as the rest of the world, but the zone is still afflicted by ocean acidification. Pollutants, such as plastics, are also present in this zone. Plastics are especially bad for the abyssal zone because these organisms have evolved to eat or try to eat anything that moves or appears to be detritus, resulting in organisms consuming plastics instead of nutrients. Both ocean acidification and pollution are decreasing the already small biomass that resides within the abyssal zone.
Another problem caused by humans is overfishing. Even though no fishery can fish for organisms anywhere near the abyssal zone, they can still cause harm in deeper waters. The abyssal zone depends on dead organisms from the upper zones sinking to the seafloor, since the ecosystem lacks producers due to a lack of sunlight. As fish and other animals are removed from the ocean, the frequency and amount of dead material reaching the abyssal zone decreases.
Deep sea mining operations could cause problems for the abyssal zone in the future. The talks and planning for this industry are already under way. Deep sea mining could be disastrous for this extremely fragile ecosystem since there are many ecological dangers posed by mining for deep sea minerals. Mining could increase the amount of pollution not only in the abyssal zone, but in the ocean as a whole, and would physically destroy habitats and the seafloor.
Sediment plumes generated by mining activities can spread widely, affecting filter feeders and smothering marine life. The potential release of toxic chemicals and heavy metals from mining equipment and disturbed seabed materials could lead to chemical pollution, while noise from machinery can disrupt the behavior and communication of marine animals. Physical disturbances to the seabed may destroy geological features and their associated ecosystems. Furthermore, changes in water quality and the disruption of carbon sequestration processes, where organic carbon is stored in the deep sea, could have broader environmental impacts, including contributing to climate change. The slow rate of change in deep-sea environments and the long lifespans and reproductive cycles of abyssal species mean that recovery from such disturbances could take decades or centuries.
See also
Abyssal plain
Beebe Hydrothermal Vent Field
Deep sea
Deep sea community
Deep-sea fish
Mariana Trench
References
Aquatic ecology
Marine biology
Oceanographical terminology
NGC 753 is a spiral galaxy located 220 million light-years away in the constellation Andromeda. The galaxy was discovered by astronomer Heinrich d'Arrest on September 16, 1865 and is a member of Abell 262.
NGC 753 has roughly 2-3 times more mass than the Milky Way and is classified as a radio galaxy.
Physical characteristics
NGC 753 contains two main arms that extend to 180° on either side of the galaxy. From the two main arms, there are three larger and weaker arms that sub-divide into several branches. This open structure of the arms may be due to the influence of NGC 759 which is a close companion of NGC 753 that lies away.
Supermassive black hole
NGC 753 has a supermassive black hole with an estimated mass of (2.2 ± 0.4) × 10⁷ M☉.
Supernovae
NGC 753 has hosted two supernovae, SN 1954E which was discovered by Fritz Zwicky on September 26, 1954 and AT 2018ddf which was discovered on July 5, 2018. Both supernovae were of unknown types.
See also
List of NGC objects (1–1000)
References
External links
753
7387
Andromeda (constellation)
Astronomical objects discovered in 1865
Spiral galaxies
Radio galaxies
Abell 262
1437
Delta Scuti, Latinized from δ Scuti, is a variable star in the southern constellation Scutum. With an apparent visual magnitude that fluctuates around 4.72, it is the fifth-brightest star in this small and otherwise undistinguished constellation. Analysis of the parallax measurements place this star at a distance of about from Earth. It is drifting closer with a radial velocity of −43 km/s.
Variability
In 1900, William W. Campbell and William H. Wright used the Mills spectrograph at the Lick Observatory to determine that this star has a variable radial velocity. The period of this variability as well as 0.2 magnitude changes in luminosity demonstrated in 1935 that the variability was intrinsic, rather than being the result of a spectroscopic binary. In 1938, a secondary period was discovered and a pulsation theory was proposed to model the variation. Since then, observation of Delta Scuti has shown that it pulsates in multiple discrete radial and non-radial modes. The strongest mode has a frequency of 59.731 μHz, the next strongest has a frequency of 61.936 μHz, and so forth, with a total of eight different frequency modes now modeled.
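The mode frequencies quoted above convert directly to pulsation periods; a quick sketch (the helper name is mine):

```python
def period_hours(freq_microhertz):
    """Period in hours corresponding to a frequency given in microhertz."""
    return 1.0 / (freq_microhertz * 1e-6) / 3600.0

# The two strongest modes come out at about 4.65 h and 4.48 h respectively:
for f_uhz in (59.731, 61.936):
    print(f"{f_uhz} uHz -> {period_hours(f_uhz):.2f} h")
```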
Delta Scuti is the prototype of the Delta Scuti type variable stars. It is a high-amplitude δ Scuti type pulsator with light variations of about 0.19 magnitudes (V). The peculiar chemical abundances of this star are similar to those of Am stars. It has a stellar classification of F2 IIIp, matching an F-type giant star. Delta Scuti has two times the mass and between 4.07 and 4.25 times the radius of the Sun. It is approximately one billion years old and is spinning with a projected rotational velocity of 25.5 km/s. The radius of Delta Scuti changes by 0.3 to 0.9 percent during each pulsation cycle. On average, the star radiates 40 times the luminosity of the Sun from its photosphere at an effective temperature of 7,000 K.
Space velocity
The space velocity components of this star in the galactic coordinate system are = . It is following an orbit through the Milky Way galaxy that has an eccentricity of 0.11, carrying it as close as to, and as far as from the Galactic Center. If Delta Scuti maintains its current movement and brightness, it will pass within 10 light-years of the Solar System, becoming the brightest star in the sky between and . It will reach an apparent magnitude of −1.84, brighter than the current −1.46 of Sirius.
Optical companions
This star has two optical companions. The first is a +12.2 magnitude star that is 15.2 arcseconds from Delta Scuti. The second is a +9.2 magnitude star that is 53 arcseconds away. Both are distant background stars unrelated to Delta Scuti.
Nomenclature
Flamsteed did not recognise the constellation Scutum and included several of its stars in Aquila. δ Scuti was catalogued as 2 Aquilae. The Bayer designation δ was assigned by Gould rather than Bayer.
References
External links
AAVSO Variable Star of the Month: Delta Scuti and the Delta Scuti Variables
F-type giants
Delta Scuti variables
Scutum (constellation)
Scuti, Delta
Durchmusterung objects
Scuti, 2
172748
091726
7020
TIC objects
A serotonin pathway identifies aggregate projections from neurons which synthesize and communicate the monoamine neurotransmitter serotonin. These pathways are relevant to different psychiatric and neurological disorders.
Pathways
Function
Given the wide area that the many serotonergic neurons innervate, these pathways are implicated in many functions, as listed above. The caudal serotonergic nuclei heavily innervate the spinal cord, medulla and cerebellum. In general, manipulations of the caudal nuclei (e.g. pharmacological, lesion, receptor knockout) that decrease activity decrease movement, while manipulations that increase activity cause an increase in motor activity. Serotonin is also implicated in sensory processing, as sensory stimulation causes an increase in extracellular serotonin in the neocortex. Serotonin pathways are thought to modulate eating, both the amount consumed and the motor processes associated with eating. The serotonergic projections into the hypothalamus are thought to be particularly relevant, and an increase in serotonergic signaling is thought generally to decrease food consumption (as evidenced by fenfluramine; however, receptor subtypes may make this more nuanced). Serotonin pathways projecting into the limbic forebrain are also involved in emotional processing, with decreased serotonergic activity resulting in decreased cognition and an emotional bias towards negative stimuli. The function of serotonin in mood is more nuanced, with some evidence pointing towards increased levels leading to depression, fatigue and sickness behavior, and other evidence arguing the opposite.
See also
Dopaminergic pathways
References
Neurotransmitters
A riparian buffer or stream buffer is a vegetated area (a "buffer strip") near a stream, usually forested, which helps shade and partially protect the stream from the impact of adjacent land uses. It plays a key role in increasing water quality in associated streams, rivers, and lakes, thus providing environmental benefits. With the decline of many aquatic ecosystems due to agriculture, riparian buffers have become a very common conservation practice aimed at increasing water quality and reducing pollution.
Benefits
Riparian buffers act to intercept sediment, nutrients, pesticides, and other materials in surface runoff and to reduce nutrients and other pollutants in shallow subsurface water flow. They also provide habitat and wildlife corridors in primarily agricultural areas, and can be key in reducing erosion by stabilizing stream banks. Large-scale results have demonstrated that expanding riparian buffers through the deployment of plantation systems can effectively reduce nitrogen emissions to water and soil loss from wind erosion, while simultaneously providing substantial environmental co-benefits and having limited negative effects on current agricultural production.
Water quality benefits
Riparian buffers intercept sediment and nutrients. They counteract eutrophication in downstream lakes and ponds, which can be detrimental to aquatic habitats because of the large fish kills that occur upon large-scale eutrophication. Riparian buffers keep chemicals, like pesticides, that can be harmful to aquatic life out of the water. Some pesticides are especially harmful if they bioaccumulate in organisms, with the chemicals reaching harmful levels by the time the organisms are ready for human consumption. Riparian buffers also stabilize the banks surrounding the water body, which is important since erosion can be a major problem in agricultural regions, where cut (eroded) banks can take land out of production. Erosion can also lead to sedimentation and siltation of downstream lakes, ponds, and reservoirs. Siltation can greatly reduce the life span of reservoirs and the dams that create them.
Habitat benefits
Riparian buffers can act as crucial habitat for a large number of species, especially those who have lost habitat due to agricultural land being put into production. The habitat provided by the buffers also double as corridors for species that have had their habitat fragmented by various land uses. By adding this vegetated area of land near a water source, it increases biodiversity by allowing species an area to re-establish after being displaced due to non-conservation land use. With this re-establishment, the number of native species and biodiversity in general can be increased. The large trees in the first zone of the riparian buffer provide shade and therefore cooling for the water, increasing productivity and increasing habitat quality for aquatic species. When branches and stumps (large woody debris) fall into the stream from the riparian zone, more stream habitat features are created. Carbon is added as an energy source for biota in the stream.
Economic benefits
Buffers increase land value and allow for the production of profitable alternative crops. Vegetation such as black walnut and hazelnut, which can be profitably harvested, can be incorporated into the riparian buffer. Lease fees for hunting can also be increased as the larger habitat means that the land will be more sought-after for hunting purposes. Designing buffer zones based on their hydrological function instead of a traditionally used fixed width method, can be economically beneficial in forestry practices.
Design
A riparian buffer is usually split into three different zones, each having its own specific purpose for filtering runoff and interacting with the adjacent aquatic system. Buffer design is a key element in the effectiveness of the buffer. It is generally recommended that native species be planted in all three zones, with the buffer extending a recommended width along each side of the stream.
Zone 1
This zone should function mainly to shade the water source and stabilize the bank. It should include large native tree species that grow fast and can quickly begin performing these tasks. This is usually the smallest of the three zones, and it absorbs the fewest contaminants, since most contaminants have already been removed by Zones 2 and 3.
Zone 2
Usually made up of native shrubs, this zone provides a habitat for wildlife, including nesting areas for bird species. This zone also acts to slow and absorb contaminants that Zone 3 has missed. The zone is an important transition between grassland and forest.
Zone 3
This zone is important as the first line of defense against contaminants. It consists mostly of native grasses and serves primarily to slow water runoff and begin to absorb contaminants before they reach the other zones. Although these grass strips should be one of the widest zones, they are also the easiest to install.
Streambed Zone
The streambed zone of the riparian area is linked closely to Zone 1. Zone 1 provides fallen limbs, trees, and tree roots that in turn slow water flow, reducing erosional processes associated with increased water flow and flooding. This woody debris also increases habitat and cover for various aquatic species.
The US National Agroforestry Center has developed a filter strip design tool called AgBufferBuilder, which is a GIS-based computer program for designing vegetative filter strips around agricultural fields that utilizes terrain analysis to account for spatially non-uniform runoff.
Forest management
Logging is sometimes recommended as a management practice in riparian buffers, usually to provide an economic incentive. However, some studies have shown that logging can harm wildlife populations, especially birds. A study by the University of Minnesota found a correlation between timber harvesting in riparian buffers and a decline in bird populations. For this reason, logging within buffers is generally discouraged on environmental grounds and is better left to designated logging areas.
Conservation incentives
The Conservation Reserve Program (CRP), a farming assistance program in the United States, provides many incentives to landowners to encourage them to install riparian buffers around water systems that have a high chance of non-point water pollution and are highly erodible. For example, the Nebraska system of Riparian Buffer Payments offers payments for the cost of setup, a sign up bonus, and annual rental payments.
These incentives are offered to agriculturists to compensate them for their economic loss of taking this land out of production. If the land is highly erodible and produces little economic gain, it can sometimes be more economic to take advantage of these CRP programs.
Effectiveness
Riparian buffers have undergone much scrutiny over their effectiveness, resulting in thorough testing and monitoring. A study by the University of Georgia, conducted over a nine-year period, monitored the amounts of fertilizer that reached the watershed from the point of application. It found that these buffers removed at least 60% of the nitrogen and at least 65% of the phosphorus in the runoff from fertilizer application. The same study showed that Zone 3 was much more effective at removing contaminants than Zones 1 and 2. A 2017 study, however, found that riparian buffer strips had little or no capacity to reduce glyphosate and AMPA leaching to streams; spontaneous herbaceous vegetation was as efficient as Salix plantations, and measurements of glyphosate in runoff after a year suggested an unexpected persistence, and even a capacity of the buffer strips to favor glyphosate infiltration to a depth of up to 70 cm in the soil.
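To make the cited removal rates concrete, the sketch below applies them to hypothetical seasonal nutrient loads. The input loads are assumptions for illustration; only the 60% nitrogen and 65% phosphorus removal fractions come from the Georgia study.

```python
# Apply the study's removal rates to assumed nutrient loads.
# Only the 0.60 / 0.65 removal fractions come from the cited study;
# the input loads are illustrative assumptions.

def load_after_buffer(load_kg: float, removal_fraction: float) -> float:
    """Mass (kg) still reaching the stream after passing the buffer."""
    return load_kg * (1.0 - removal_fraction)

nitrogen_in_kg = 100.0    # assumed seasonal N load in field runoff
phosphorus_in_kg = 20.0   # assumed seasonal P load in field runoff

nitrogen_out_kg = load_after_buffer(nitrogen_in_kg, 0.60)      # about 40 kg passes
phosphorus_out_kg = load_after_buffer(phosphorus_in_kg, 0.65)  # about 7 kg passes
```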
Long-term sustainability
After the initial installation of the riparian buffer, relatively little maintenance is needed to keep the buffer in good condition. Once the trees and grasses mature, they regenerate naturally and make a more effective buffer. The sustainability of the riparian buffer makes it extremely attractive to landowners since they do relatively little work and still receive payments. Riparian buffers have the potential to be the most effective way to protect aquatic biodiversity and water quality and manage water resources in developing countries that lack the funds to install water treatment and supply systems in midsize and small towns.
Species selection
Species selection based on an area in Nebraska, as an example:
In Zone 1
Cottonwood, Bur Oak, Hackberry, Swamp White Oak, Siberian Elm, Honeylocust, Silver Maple, Black Walnut, and Northern Red Oak.
In Zone 2
Manchurian apricot, Silver Buffaloberry, Caragana, Black Cherry, Chokecherry, Sandcherry, Peking Cotoneaster, Midwest Crabapple, Golden Currant, Elderberry, Washington Hawthorn, American Hazel, Amur Honeysuckle, Common Lilac, Amur Maple, American Plum, and Skunkbush Sumac.
In Zone 3
Western Wheatgrass, Big Bluestem, Sand Bluestem, Sideoats Grama, Blue Grama, Hairy Grama, Buffalo Grass, Sand Lovegrass, Switchgrass, Little Bluestem, Indiangrass, Prairie Cordgrass, Prairie Dropseed, Tall Dropseed, Needleandthread, Green Needlegrass.
See also
Agricultural wastewater treatment
Agroforestry
Ecoscaping
Erosion control
Nonpoint source pollution
References
External links
National Agroforestry Center (USDA)
Filter Strip Design Tool (AgBufferBuilder; USDA)
Extensive Riparian Buffer bibliography
Agricultural soil science
Agroforestry
Environmental conservation
Environmental soil science
Environmental terminology
Forest management
Habitat
Habitats
Hydrology
Riparian zone
Sustainable agriculture
Sustainable design
Water and the environment
Water pollution
This is a glossary of common file system terms.
Dates handled
What types of dates and times the file system can support, such as creation, modification, and access timestamps.
References
Computer file systems
Lists of computer terms
The R. C. Harris Water Treatment Plant in Toronto, Ontario, Canada, is both a crucial piece of infrastructure and an architecturally acclaimed historic building named after the longtime commissioner of Toronto's public works Roland Caldwell Harris. The plant's architect was Thomas C. Pomphrey with engineers H.G. Acres and William Gore. It is located in the east of the city at the eastern end of Queen Street and at the foot of Victoria Park Avenue along the shore of Lake Ontario in the Beaches neighbourhood in the former city of Scarborough.
It has been the location for a number of film productions, the best known being Strange Brew (1983) with Rick Moranis and Dave Thomas.
Roland Caldwell Harris
Harris was born in Lansing on May 26, 1875 in what is now North York, Ontario, but grew up in Toronto. As Public Works Commissioner from 1912 to 1945, Harris was involved in such projects as:
Crawford Street Bridge, 1914-1915, with a design heavily influenced by Harris
Prince Edward Viaduct, opened in 1918, which included his idea to add a lower deck, allowing the Bloor-Danforth subway line to be built decades later.
Mount Pleasant bridge as part of the extension of Mount Pleasant Road north to Lawrence Avenue East in 1934.
expansion of the streetcar network of the Toronto Civic Railways from 1912 to 1915.
Waterfront Railway Viaduct built from 1925 to 1934 to bring rail lines into Union Station.
extension of University Avenue south of Queen Street West to Front Street in 1931.
Harris died on September 3, 1945. His son, Lieutenant Colonel Roland Allen Harris, was a member of the Queen's Own Rifles. Harris is buried in the family plot at St. John's Norway Cemetery.
Site history
Victoria Park
The land was once owned by Peter Patterson and was a popular spot for picnickers who nicknamed it "Yellowbanks" for the colour of the bluffs. In 1878, Patterson leased the property to businessmen John Irwin, Bob Davies, and P.G. Close who hired John Boyle to develop and operate it as an amusement park. Buildings were erected and landscaping was done in time for it to open on June 8, 1878 as Victoria Park. Initially, the park was only accessible by water and a wharf was built to allow for steamships to bring picnickers from the Toronto Harbour at the foot of Yonge Street. The six-hectare park included a beach, with boating and canoe rentals, picnic shelters, a dance pavilion, restaurant, and an observation tower. Thomas Davies bought the park in 1886 and by 1894 the Toronto Railway Company extended streetcar lines to the park, allowing for ferry service to be discontinued. In 1899, the Toronto Railway Company took over the lease, allowing it to continue as a trolley park along with nearby Munro Park which the TRC also operated. In 1906, the park was purchased by Henry Eckardt in a foreclosure sale after Davies had been unable to keep up the mortgage payments. Eckardt closed the park in 1906, the same year that nearby Munro Park closed. The traditions of both continued at Scarboro Beach Amusement Park which opened in 1907 and operated until 1925.
Victoria Park Avenue is named after the amusement park.
From 1912 to 1932 part of the property was used for Victoria Park Forest School during the summer. The T. Eaton Company also used the property for a summer camp for boys from 1917 until 1927. In 1927, the City of Toronto purchased the property for $370,000 in order to build the R.C. Harris Water Treatment Plant.
Water treatment plant
With an early 20th-century Toronto plagued with water shortages and unclean drinking water, public health advocates such as George Nasmith and Toronto's Medical Officer of Health, Charles Hastings, campaigned for a modern water purification system.
Construction for a water treatment plant began on the site in 1932 and the building became operational on November 1, 1941. The building, unlike most modern engineering structures, was also created to make an architectural statement. Fashioned in the Art Deco style, the cathedral-like structure remains one of Toronto's most admired buildings. It is, however, little known to outsiders. The interiors are just as opulent with marble entryways and vast halls filled with pools of water and filtration equipment. The plant has thus earned the nickname The Palace of Purification.
In 1992, the R. C. Harris Water Treatment Plant was named a national historic civil engineering site by the Canadian Society for Civil Engineering. It was designated under the Ontario Heritage Act in 1998. The plant appeared on a stamp issued by Canada Post in 2011, in a series showcasing five notable Art Deco buildings in Canada.
Use
Despite its age, the plant is still fully functional, providing approximately 30% of Toronto's water supply. The intakes are located offshore, with water drawn through two pipes running under the bed of the lake. Water is also chlorinated in the plant and then pumped to various reservoirs throughout the City of Toronto and York Region.
Access
The facility grounds have been made available to the public. Despite some concerns of vulnerability to an attack on the water supply since the September 11 attacks, the grounds have remained open to the public, but security has been increased. In the summer of 2007, construction began on the installation of an underground Residual Management Facility allowing processed waste to be removed before discharging into the lake. This construction has since been completed.
In popular culture
The R. C. Harris Water Treatment Plant has been used in dozens of films and television series as a prison, clinic, or headquarters.
The building of the plant is vividly recounted in Michael Ondaatje's In the Skin of a Lion.
The headquarters of "The Man" in the 2002 comedy Undercover Brother.
A prison in the 1998 comedy Half Baked.
An asylum in the 1995 horror film In the Mouth of Madness.
"The Centre," a nefarious think tank in the television series The Pretender.
Base of operations for Genomex, an antagonistic corporation in the television series Mutant X.
The Royal Canadian Institute for the Mentally Insane (next door to Elsinore Brewery) in the 1983 film Strange Brew.
The Henry Ford Centre for the Criminally Insane, as seen in Robocop: The Series.
The Langstaff Maximum Security Prison, as seen in Flashpoint in the episode Just a Man.
The Mellonville Maximum Security Prison, as seen in an SCTV episode (1982).
A prison in the Psi Factor: Chronicles of the Paranormal episode "Solitary Confinement."
"Lake District Federal Prison" in Between in the episode School's Out.
A prison building in the Conviction episode "A Different Kind of Death."
A prison in the closing scenes of The Big Heist, when Donald Sutherland's character enters to serve a 20-year sentence.
"Ekart County Jail" in the 2015 movie Regression.
"U.N. Penitentiary Chesapeake Conservancy Zone" in the 2020 season of The Expanse.
A children's hospital in Guillermo del Toro's 1997 film Mimic.
The office of Richard Jenkins' character, Ezra Grindle, a factory executive with a dark past, in Guillermo del Toro's Nightmare Alley.
Women's Prison in Mayor of Kingstown
Music video for "When You Know Someone" by the band Valley
References
External links
Art Deco architecture in Canada
Municipal buildings in Toronto
Buildings and structures in Scarborough, Ontario
Water treatment facilities
Sulforaphane (sometimes sulphoraphane in British English) is a compound within the isothiocyanate group of organosulfur compounds. It is produced when the enzyme myrosinase transforms glucoraphanin, a glucosinolate, into sulforaphane upon damage to the plant (such as from chewing or chopping during food preparation), which allows the two compounds to mix and react.
Sulforaphane is present in cruciferous vegetables, such as broccoli, Brussels sprouts, and cabbage.
Sulforaphane has two possible stereoisomers due to the presence of a stereogenic sulfur atom.
The R-sulforaphane enantiomer occurs naturally, while the S-sulforaphane can be synthesized.
Occurrence and isolation
Sulforaphane occurs in broccoli sprouts, which, among cruciferous vegetables, have the highest concentration of glucoraphanin, the precursor to sulforaphane. It is also found in cabbage, cauliflower, Brussels sprouts, bok choy, kale, collards, mustard greens, and watercress.
Research
Although there has been some basic research on how sulforaphane might have effects in vivo, there is no clinical evidence that consuming cruciferous vegetables and sulforaphane affects the risk of cancer or any other disease, as of 2025.
See also
Raphanin
References
Experimental cancer drugs
Isothiocyanates
Sulfoxides
1,4-Butanediyl compounds
In personal computing, a tower unit, or simply a tower, is a form factor of desktop computer case whose height is much greater than its width, thus having the appearance of an upstanding tower block, as opposed to a traditional "pizza box" computer case whose width is greater than its height and appears lying flat.
Compared to a pizza box case, the tower tends to be larger and offers more potential for internal volume for the same desk area occupied, and therefore allows more hardware installation and theoretically better airflow for cooling. Multiple size subclasses of the tower form factor have been established to differentiate their varying sizes, including full-tower, mid-tower, midi-tower, mini-tower, and deskside; these classifications are however nebulously defined and inconsistently applied by different manufacturers.
Although the traditional layout for a tower system is to have the case placed on top of the desk alongside the monitor and other peripherals, a far more common configuration is to place the case on the floor below the desk or in an under-desk compartment, in order to free up desktop space for other items. Computer systems housed in the horizontal "pizza box" form factor—once popularized by the IBM PC in the 1980s but fallen out of mass use since the late 1990s—have been given the term desktops to contrast them with the often underdesk-situated towers.
Subclasses
Tower cases are often categorized as mini-tower, midi-tower, mid-tower, full-tower, and deskside. The terms are subjective and inconsistently defined by different manufacturers.
Full-tower
Full-tower cases, the tallest of the common size classes, are designed for maximum scalability. For case modding enthusiasts and gamers wanting to play the most technically challenging video games, the full-tower case also makes for an ideal gaming PC case because of its ability to accommodate extensive water cooling setups and larger case fans. Traditionally, full-tower systems had between four and six externally accessible half-height 5.25-inch drive bays and up to ten 3.5-inch drive bays. Some full-tower cases included locking side-doors and other physical security features to prevent theft of the discs inside those bays. However, as modern computing technology has moved away from mechanical hard drives and optical drives toward solid-state devices such as USB flash drives, solid-state drives (SSDs), large-capacity external storage, and cloud storage, such an abundance of internal and external drive bays is less common. More recent full-tower cases instead have only one or two external drive bays, or none at all, with the internal bays moved elsewhere in the case to free up room and improve airflow.
Full-tower cases readily fit full-size ATX motherboards but may also accommodate smaller microATX motherboards due to the former standard's interoperability in mounting holes. Full-tower cases may also have increased dimensional depth and length over their shorter counterparts, allowing them to accommodate Extended ATX motherboards, larger graphics cards and heat sinks. Since the 2010s, full-tower cases are commonly used by enthusiasts as showpiece cases with custom water cooling, RGB LED lighting, and tempered glass or acrylics side panel. They may also hold two motherboards (as is the case with the Corsair 1000D) and dual power supplies (Corsair 900D).
Mid-tower
Mid-tower cases, sized between mini- and full-towers, are the most common form factor of personal computer towers. Before the late 2010s, mid-towers contained between three and four 5.25-inch drive bays and an equivalent number of 3.5-inch bays to house optical disc drives, floppy disk drives and hard disk drives, leaving just enough room for a standard ATX motherboard and power supply unit. Since the widespread adoption of USB flash drives and solid-state drives (which take up far less space than spinning hard disk drives), and with the declining usage of internal optical drives, the number of drive bays has become less of a concern to the contemporary computer user; the internal space of mid-towers is now used more commonly for closed-loop water coolers, dual graphics cards, and tightly stacked SSDs.
Midi-tower
The marketing term midi-tower sometimes refers to cases smaller than a mid-tower but still larger than a mini-tower (see below), typically with two to three external bays. Other times the term may be synonymous with mid-tower.
Mini-tower
Mini-tower cases, the smallest of the tower size classes, slot between the Mini-ITX specification for small-form-factor PCs and the archetypal mid-tower. Mini-towers typically accommodate only microATX motherboards and for this reason sell in fewer numbers in the consumer market than the other size classes of computer towers. Traditionally, mini-towers had only one or two disk drive bays (either 5.25-inch or 3.5-inch).
Deskside
The term deskside is primarily a term of art in the workstation market, referring to computer towers with a much wider footprint than traditional domestic tower units. These wider deskside cases accommodate a far greater amount of central processing units (CPUs), drive bays, memory slots, expansion slots, peripherals, and I/O adapters, among other devices.
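Because manufacturers apply these size labels inconsistently, any programmatic classification has to choose its own cut-offs. The sketch below buckets a case by height using purely illustrative centimetre thresholds; the numbers are assumptions, not figures from any standard or from this article.

```python
# Bucket a tower case by height. The centimetre thresholds are
# illustrative assumptions only; vendors draw these lines differently.

def classify_tower(height_cm: float) -> str:
    if height_cm < 35:
        return "mini-tower"
    elif height_cm < 45:
        return "midi-tower"
    elif height_cm < 55:
        return "mid-tower"
    return "full-tower"

# A few sample heights mapped to their (assumed) size class:
examples = {h: classify_tower(h) for h in (30, 40, 50, 60)}
```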
History
The tower form factor may be seen as a proportional miniaturization of mainframe computers and minicomputers, some of which comprise massive tall enclosures standing almost to the ceiling. At the advent of the microcomputer era, most systems were configured with the keyboard built into the same chassis in which the main system circuit board resides. Such computers were also termed home computers and counted among them such popular systems as the Apple II, TRS-80, VIC-20, and Commodore 64. In 1981, IBM introduced the IBM Personal Computer, a system which met with widespread adoption in both enterprises and home businesses within a couple of years and set a new de facto standard for the physical configuration of microcomputers. The IBM PC and its successors housed the system board and expansion cards in a separate horizontal unit, with the keyboard usually in front and the prescribed CRT monitor resting on top of the system unit; the front of the system unit housed one or more disk drives.
In 1982, NCR introduced the Tower series of workstation computers, named so for their tall, upright configuration, intended to be stowed away under a desk. The first, the Tower 1632, is 29 inches tall and featured a Motorola 68000 microprocessor. Costing upwards of $12,500, the 1632 is meant to run Unix and supports up to 16 simultaneous networked users. NCR continued adding to the Tower line into the late 1980s.
In 1983, Tandy Corporation offered their Tandy 2000 with an optional floor stand, turning the normally horizontal desktop case on its side and allowing it to be stashed under-desk; the square badge on the Tandy 2000 can be removed and rotated upright in turn. IBM followed suit with their PC/AT in 1984, which included an optional "floor-standing enclosure" for $165. Of the three initial entrants in the company's RT PC line in 1986, two were tower units, while the other was a traditional horizontal case like the AT and the PCs before it.
In 1987, IBM introduced the PS/2 Model 60, an initial entry in the company's Personal System/2 line of personal computers. It was IBM's first Intel-based PC built entirely into a tower case. The PS/2 Model 60 was comparable in technical specification to its sibling the PS/2 Model 50, which sported a horizontal desktop form factor. Whereas the Model 50 had only four expansion slots and three drive bays, however, the Model 60 featured eight expansion slots and four drive bays. Because of the latter's increased potential for connectivity and multitasking, technology journalists envisioned the PS/2 Model 60 as a multiuser machine, although multiuser operating systems supporting the 80286 processor of both the Models 50 and 60 were hard to come by in 1987. IBM followed up with the tower-based PS/2 Model 80 later that year, their first PC powered by an i386 processor.
According to The New York Times in 1988, the PS/2 Models 60 and 80 started the trend of computer manufacturers offering IBM PC compatibles in optional tower form factors.
Aftermarket floor stands, allowing existing horizontal desktop computers to be stored upright on the floor, were sold in the late 1980s by companies such as Curtis Computer Products. Recommending such kits in The Washington Post in 1989, Brit Hume called the tower the best configuration for ergonomics and noted that, "Contrary to popular myth, standing vertically will not hurt the computer or throw off your disk drives."
The transition in dominance from horizontal desktop computers to towers was mostly complete by 1994, according to a period article in PC Week. Computer cases or pre-built systems offered in the traditional horizontal form factor have since been separately categorized as desktops, to contrast them with the usually-floor-situated towers.
Brian Benchoff of Hackaday argued that the popularity of the Macintosh Quadra 700 was the turning point for computer manufacturers to move over to the tower form factor en masse. The Quadra 700's tower form factor was born of necessity: common peripherals of the Quadra were the extremely heavy color CRT monitors offered by Apple (those whose screens measured 20 inches and over diagonally could weigh 80 lbs or more), favored by the desktop publishing industry during the 1990s. Such monitors threatened to crush the plastic frames of the Macintosh IIcx and Macintosh IIci, whose horizontal form factor might have tempted customers to place such heavy monitors on top of them.
See also
Thin client
All-in-one computer
Desktop form factor
References
Classes of computers
Tower
Desktop computers
A digital delay generator (also known as digital-to-time converter) is a piece of electronic test equipment that provides precise delays for triggering, syncing, delaying, and gating events. These generators are used in many experiments, controls, and processes where electronic timing of a single event or multiple events to a standard timing reference is needed. The digital delay generator may initiate a sequence of events or be triggered by an event. What differentiates it from ordinary electronic timing is the synchronicity of its outputs to each other and the initiating event.
A time-to-digital converter does the inverse function.
Equipment
The digital delay generator is similar to a pulse generator in function, but the timing resolution is much finer, and the delay and width jitter much less.
Some manufacturers, calling their units "digital delay and pulse generators", have added independent amplitude polarity and level control to each of their outputs in addition to delay and width control. Each channel thus provides its own delay, width, and amplitude control, with triggering synchronized to an external source or an internal rep-rate generator, like a general-purpose pulse generator.
Some delay generators provide precise delays (edges) to trigger devices. Others provide accurate delays and widths also to allow a gating function. Some delay generators provide a single timing channel, while others provide multiple timing channels.
Digital delay generator outputs are typically logic level, but some offer higher voltages to cope with electromagnetic interference environments. For very harsh environments, optical outputs and/or inputs with fiber optic connectors are also offered as options by some manufacturers. In general, a delay generator operates in a 50 Ω transmission line environment with the line terminated in its characteristic impedance to minimize reflections and timing ambiguities.
Historically, digital delay generators were single-channel devices with delay only (see the DOT reference below). Now, multi-channel units with delay and gate from each channel are the norm. Some allow referencing to other channels and combining the timing of several channels onto one for more complex, multi-triggering applications. Multiple lasers and detectors can be triggered and gated (see the second reference, "Experimental study of laser ignition of a methane/air mixture by planar laser-induced fluorescence of OH"). Another example has a channel pumping a laser with a user-selected number of flash lamp pulses. Another channel may be used in Q-switching that laser. A third channel can then be used to trigger and gate a data acquisition or imaging system a distinct time after the laser fires (see the sensorsportal.com reference below).
Pulse selection or pulse picking of a single laser pulse from a stream of laser pulses generated via mode-locking is another valuable feature of some delay generators. Using the mode-locked rate as an external clock to the digital delay generator, one may adjust a delay and width to select a single pulse and synchronize other events to that single pulse.
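The pulse-picking idea can be modelled numerically: with the mode-locked repetition period acting as the external clock, a delay plus a gate narrower than one period selects exactly one pulse from the train. A minimal sketch, in which the repetition rate and gate timings are assumptions:

```python
# Simulate pulse picking: a delay + gate window, referenced to the
# mode-locked pulse train, admits exactly one pulse.
# The repetition rate and gate timings are illustrative assumptions.

rep_rate_hz = 80e6                 # assumed mode-locked repetition rate
period_s = 1.0 / rep_rate_hz       # 12.5 ns between pulses
pulse_times = [n * period_s for n in range(1000)]

gate_delay_s = 495e-9              # gate opens between pulses 39 and 40
gate_width_s = 10e-9               # narrower than one period

picked = [t for t in pulse_times
          if gate_delay_s <= t < gate_delay_s + gate_width_s]

# A gate narrower than one period can contain at most one pulse;
# here it selects only the pulse near 500 ns (pulse number 40).
assert len(picked) == 1
```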
Uses
A delay generator can also delay and gate high-speed photodetectors in high-speed imaging applications. (see reference on high-speed photography below)
Digital delay generators are usually the heart of the timing for larger systems and experiments. Users generally create a GUI graphical user interface to provide a single control to the entire system or experiment. Digital delay generator manufacturers have added remote programming schemes that facilitate the creation of such GUIs. Industry standards such as GPIB, RS-232, USB and Ethernet are available from a variety of manufacturers.
Experimental fluid dynamics uses digital delay generators to investigate fluid flow. The field of PIV, particle image velocimetry, encompasses several subsets which would use digital delay generators as the main component of its timing where multiple lasers may be triggered. Multiple channels may trigger various lasers. One can also multiplex the timing of several channels onto one channel to trigger or even gate the same device multiple times. A single channel may trigger a laser or gate a camera with multiple, multiplexed pulses. Another useful setup is to have one channel drive flash lamps a preset number of times, followed by a single Q-switch, followed by a delay and gate for the data acquisition or imaging system.
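The flash-lamp / Q-switch / camera sequence described above amounts to building one event list per channel against a common trigger. The sketch below is a hypothetical helper (all counts, widths, and delays are assumed values, not figures from any instrument):

```python
# Build a multi-channel timing schedule for the sequence described in
# the text: N flash-lamp pulses, a Q-switch, then a camera gate.
# All counts and delays are illustrative assumptions.

def build_schedule(n_flash: int, flash_period_s: float,
                   qswitch_delay_s: float, camera_delay_s: float,
                   camera_gate_s: float) -> dict:
    """Return {channel: [(t_start_s, width_s), ...]} relative to the trigger."""
    flash = [(i * flash_period_s, 10e-6) for i in range(n_flash)]  # assumed 10 us lamp pulses
    t_q = flash[-1][0] + qswitch_delay_s      # Q-switch after the last lamp pulse
    qswitch = [(t_q, 1e-6)]
    camera = [(t_q + camera_delay_s, camera_gate_s)]
    return {"flashlamp": flash, "qswitch": qswitch, "camera": camera}

sched = build_schedule(n_flash=3, flash_period_s=100e-6,
                       qswitch_delay_s=180e-6,
                       camera_delay_s=50e-9, camera_gate_s=500e-9)
```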
Negative delay is available with digital delay generators that can select some other channel as a reference. This would be useful for applications where an event must occur in advance of the reference. An example would be to allow for opening a shutter before the reference.
A digital delay generator has been used in mass spectrometry.
Multi-trigger digital delay generators
A new development is digital delay generators that have gating and external triggering, dual or multi-trigger capabilities. The gate allows the user to enable outputs and/or triggers with an electronic signal. Some units have gate or trigger capabilities using single or separate connectors. Dual or multi-trigger digital delay generators have several input triggers. These triggers can be selectively used to trigger any or all channels.
The multi-trigger versions have programmable logic controller-type functionality for incorporating interlocks, latches, dynamic delay adjustment, and trigger noise suppression. Triggers are formed by logically combining the inputs and outputs in And, Or, Xor, and Negated forms.
LIDAR applications use digital delay generators. A channel is used to trigger a laser. A second channel provides a delayed gate for the data acquisition system. Gating allows regions of interest to be processed and stored while ignoring the bulk of unwanted data.
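The gate delay and width map directly onto a range window through r = c·t/2, since the light travels out and back. A small sketch with assumed gate settings:

```python
# Convert a LIDAR gate delay/width into the range window it selects,
# using r = c * t / 2. The gate settings are illustrative assumptions.

C_LIGHT_M_S = 299_792_458.0            # speed of light, m/s

def range_window_m(gate_delay_s: float, gate_width_s: float):
    """(near, far) range bounds selected by the gate, in metres."""
    near = C_LIGHT_M_S * gate_delay_s / 2.0
    far = C_LIGHT_M_S * (gate_delay_s + gate_width_s) / 2.0
    return near, far

# A gate opening 1 us after the laser shot, 200 ns wide, selects
# returns from roughly 150 m to 180 m.
near_m, far_m = range_window_m(1e-6, 200e-9)
```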
Dual-trigger digital delay generators provide two independently triggered digital delay generators in one package. Since benchtop digital delay generators are now multi-channel, it is possible to have two or more input triggers and select the channels that respond to each trigger. An interesting concept to provide dual-trigger capability converts an instrument with separate trigger and gate inputs to allow the gate to operate as a second trigger.
Design
A vital issue in the design of DDGs is to generate triggered delays having crystal-oscillator precision but that are not quantized to the edges of the reference oscillator. There are several techniques used in digital delay generation.
The most straightforward scheme uses a digital counter and a free-running crystal oscillator to time intervals with 1-clock ambiguity, resulting in output edge jitter of one clock period peak-to-peak relative to an asynchronous trigger. This technique is used in the Quantum Composers and Berkeley Nucleonics instruments.
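The one-clock ambiguity of this counter scheme can be seen by quantizing the trigger to the next clock edge: two triggers at different phases relative to the reference see different realized delays. The sketch below assumes a 100 MHz clock; the values are illustrative only.

```python
import math

# Counter-based delay: the trigger is recognized on the next edge of a
# free-running clock (assumed 100 MHz here), so the realized delay
# varies by up to one clock period relative to an asynchronous trigger.

clock_hz = 100e6
period_s = 1.0 / clock_hz          # 10 ns

def realized_delay(trigger_time_s: float, programmed_delay_s: float) -> float:
    """Delay actually produced when counting starts on the next clock edge."""
    next_edge = math.ceil(trigger_time_s / period_s) * period_s
    ticks = round(programmed_delay_s / period_s)
    return (next_edge + ticks * period_s) - trigger_time_s

d_on_edge = realized_delay(0.0, 1e-6)    # trigger lands exactly on an edge
d_off_edge = realized_delay(3e-9, 1e-6)  # trigger 3 ns after an edge
jitter_s = abs(d_off_edge - d_on_edge)   # approaches one full period worst-case
```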
Triggered crystal, LC, or delay-line oscillators may be started at trigger time and counted to make coarse delays, followed by an analog fine or "vernier" delay to interpolate between clock periods. An enhancement is to use a phase-locked loop to lock the startable oscillator to a more accurate continuous-running crystal oscillator using a technique that preserves the original trigger alignment. The classic Hewlett Packard 5359A Time Synthesizer used a triggered ECL delay-line oscillator synchronized to a crystal oscillator using a heterodyne phase lock technique; the technique was subsequently used in several Berkeley Nucleonics and LeCroy delay generators. Highland Technology uses a triggered LC oscillator and a DSP phase lock scheme. Jitter below ten ps RMS relative to an external trigger can be achieved.
It is possible to design an analog-ramp delay generator that spans some tens of nanoseconds of delay range using a current source to charge a capacitor. One can then suspend the ramp current for some integral number of clocks, as timed by a crystal oscillator. The freezing of the ramp extends the range of delays without the requirement to synchronize the oscillator to the trigger. This technique is described in US patent 4,968,907 and was used in the Signal Recovery instrument. Low delay jitter is possible, but leakage current becomes a serious error contributor for delays in the millisecond range.
A flipflop-based dual-rank synchronizer can be used to synchronize an external trigger to a counter-based delay generator, as in case (1) above. It is then possible to measure the skew between the input trigger and the local clock and adjust the vernier delay on a shot-by-shot basis, to compensate for most of the trigger-to-clock jitter. Jitter in the tens of picoseconds RMS can be achieved with careful calibration. Stanford Research Systems use this technique.
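A sketch of the compensation idea under assumed numbers: if the trigger-to-clock skew is measured on every shot with a fine time-to-digital converter and subtracted through the vernier delay, the residual jitter shrinks from one clock period to roughly the resolution of the skew measurement. The clock period, TDC resolution, and delay are illustrative values, not those of any named instrument.

```python
import random

CLOCK_PERIOD = 10e-9   # assumed 100 MHz clock
TDC_LSB = 25e-12       # assumed resolution of the skew measurement

def compensated_delay(trigger_phase: float, programmed: float = 1e-6) -> float:
    """Counter-based delay with per-shot vernier correction of the
    measured trigger-to-clock skew (phase in [0, T))."""
    skew = CLOCK_PERIOD - trigger_phase            # wait for the next clock edge
    measured = round(skew / TDC_LSB) * TDC_LSB     # skew known only to the TDC LSB
    return skew + programmed - measured            # vernier removes measured skew

# Residual jitter is set by the measurement resolution, not the clock period.
residuals = [compensated_delay(random.uniform(0.0, CLOCK_PERIOD)) - 1e-6
             for _ in range(10_000)]
jitter_pp = max(residuals) - min(residuals)   # ~TDC_LSB instead of ~10 ns
```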
See also
Programmable logic controller
Time-to-digital converter
References
Target Speed Simulator Based on Digital Delay Generator - US Dept of Transportation DOT HS 809 811 Section 4.10.
"Experimental study of laser ignition of a methane/air mixture by planar laser-induced fluorescence of OH"
"Measurement of local forcing on a turbulent boundary layer using PIV"
http://www.sensorsportal.com/HTML/DIGEST/february_06/Pulse_Generator.htm
"Repetitively pulsed ruby lasers as light sources for high-speed photography applications"
"Single photon detector module"
"Lidar remote sensing"
"Programmable sequence generator with multiple outputs"
"Synchrotron timing - multiple channels"
External links
http://www.berkeleynucleonics.com/resources/575_Multiplexing(1).pdf "Sync'ing, delaying and gating with multiple pulse trains"
https://www.keysight.com/en/pd-1000001406%3Aepsg%3Apro-pn-5359A/time-synthesizer?nid=-536900193.536882162&cc=US&lc=eng
http://www.quantumcomposers.com/
http://www.greenfieldtechnology.com/
http://www.signalrecovery.com/9650Apage.htm
http://www.thinksrs.com/products/DG645.htm
http://www.highlandtechnology.com/DSS/T560DS.html
http://zone.ni.com/devzone/cda/epd/p/id/6131 FPGA-based
Electronic circuits
Electronic test equipment
A fizzle occurs when the detonation of a device for creating a nuclear explosion (such as a nuclear weapon) grossly fails to meet its expected yield. The bombs still detonate, but the detonation is much weaker than anticipated. The cause(s) for the failure might be linked to improper design, poor construction, or lack of expertise. All countries that have had a nuclear weapons testing program have experienced some fizzles. A fizzle can spread radioactive material throughout the surrounding area, involve a partial fission reaction of the fissile material, or both. For practical purposes, a fizzle can still have considerable explosive yield when compared to conventional weapons.
In multistage fission-fusion weapons, a detonation in which the fission primary delivers full yield but fails to ignite the fusion secondary (or produces only a small degree of fusion) is also considered a "fizzle", since the weapon fails to reach its design yield despite the primary working correctly. Such fizzles can have very high yields, as in the case of Castle Koon, where the secondary stage of a 1-megaton design fizzled, but the primary still generated 100 kilotons and the fizzled secondary contributed another 10 kilotons, for a total yield of 110 kt.
Fusion boosting
If a deuterium-tritium mixture is placed at the center of the device to be compressed and heated by the fission explosion, a fission yield of 250 tons is sufficient to cause D–T fusion releasing high-energy fusion neutrons which will then fission much of the remaining fission fuel. This is known as a boosted fission weapon.
If a fission device designed for boosting is tested without the boost gas, a yield in the sub-kiloton range may still indicate a successful test, showing that the device's implosion and primary fission stages work as designed, though this does not test the boosting process itself.
Nuclear fission tests considered to be fizzles
Buster Able: Considered to be the first known failure of any nuclear device.
Upshot–Knothole Ruth: A test of a uranium hydride bomb. The test failed to erase the evidence at the site, leaving the bottom third of the shot tower still standing.
Upshot–Knothole Ray: A similar test conducted the following month. A shorter tower was allegedly chosen to ensure that it would be completely destroyed.
Gerboise Verte: The test was expected to yield between 6 and 18 kt, but in the end produced less than one kiloton.
2006 North Korean nuclear test: Russia claimed to have measured a 5–15 kt yield, whereas the United States, France, and South Korea measured less than 1 kt. This debut test was weaker than all other countries' initial tests by a factor of 20, and was the smallest initial test in history.
Nuclear fusion tests that fizzled
Castle Koon: A thermonuclear device whose fusion secondary did not successfully ignite, with only low-level fusion burning taking place.
Short Granite: Dropped by the United Kingdom over Malden Island in the Pacific on May 15, 1957, during Operation Grapple 1, this bomb had an expected yield of over 1 megaton, but exploded with only a quarter of the anticipated force. The test was still considered successful, as thermonuclear ignition occurred and contributed substantially to the bomb's yield. Another bomb dropped during Grapple 1, Purple Granite, was hoped to give an improved yield over Short Granite, but the yield was even lower.
Terrorist concerns
One month after the September 11, 2001 attacks, a CIA informant known as "Dragonfire" reported that al-Qaeda had smuggled a low-yield nuclear weapon into New York City. Although the report was found to be false, concerns were expressed that even a "fizzle bomb" capable of yielding a fraction of the known 10-kiloton weapons could cause "horrific" consequences. A detonation in New York City would mean thousands of civilian casualties.
In popular culture
The nuclear weapon that detonates in Tom Clancy's The Sum of All Fears results in a fizzle: tritium poisoning causes the secondary to fail to ignite.
See also
List of nuclear tests
Lists of nuclear disasters and radioactive incidents
Uranium hydride bomb
Dirty bomb
References
External links
Not a bomb or a dud but a fizzle Ian Hoffman, Oakland Tribune, October 9, 2006.
Nuclear Weapons, howthingswork.virginia.edu
Nuclear weapons
Nuclear weapons testing
Nuclear accidents and incidents
WISE 0855−0714 (full designation WISE J085510.83−071442.5, or W0855 for short) is a sub-brown dwarf of spectral class Y4, located from the Sun in the constellation Hydra. It is the fourth-closest star or (sub-)brown dwarf system to the Sun and was discovered by Kevin Luhman in 2013 using data from the Wide-field Infrared Survey Explorer (WISE). It is the coldest brown dwarf found in interstellar space, having a temperature of about . It has an estimated mass of 3–10 Jupiter masses, which makes it a planetary-mass object below the 13-Jupiter-mass limit for deuterium fusion.
Characterization
Observations
WISE 0855−0714 was first imaged by the WISE telescope on 4 May 2010 during its primary mission of surveying the entire sky. It was later discovered by Kevin Luhman in March 2013, who noticed the object's unusually high proper motion while searching for potential binary companions of the Sun in WISE images. In the interest of confirming the object's spectral properties and nearby distance to the Sun, Luhman made follow-up observations with the Spitzer Space Telescope and the Gemini North telescope in 2013–2014. The discovery of the object was announced in a NASA press release in April 2014.
Since WISE 0855−0714 is an isolated object, its luminosity comes primarily from thermal radiation. Its temperature is low enough to roughly match room temperature, so its luminosity is very low and it emits primarily infrared light. Hence, it is best observed with infrared telescopes such as WISE and the James Webb Space Telescope (JWST). WISE 0855−0714 has been detected at wavelengths as short as ; in this near-infrared wavelength, the object appears extremely dim, with an apparent magnitude of 26.3. Its brightness decreases with decreasing wavelength, so the object is practically invisible in visible light.
Distance and proper motion
Based on direct observations, WISE 0855−0714 has a large parallax of , which corresponds to a distance of around (). This makes WISE 0855−0714 the fourth-closest star or (sub-)brown dwarf system to the Sun. WISE 0855−0714 also has an exceptionally high proper motion of , the third-highest after Barnard's Star () and Kapteyn's Star ().
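The conversion behind these figures is the standard parallax relation d[pc] = 1/p[arcsec], with 1 pc ≈ 3.2616 light-years. A minimal sketch; the parallax value used is hypothetical, chosen only to illustrate the arithmetic for a very nearby object.

```python
PC_TO_LY = 3.2616  # light-years per parsec

def distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax in arcseconds."""
    return 1.0 / parallax_arcsec

p = 0.4                   # arcsec; illustrative large parallax, not the measured one
d_pc = distance_pc(p)     # 2.5 pc
d_ly = d_pc * PC_TO_LY    # ~8.15 light-years
```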
Spectrometry
Its luminosity in different bands of the thermal infrared, in combination with its absolute magnitude (known because of its known distance), was used to place it in the context of different models; the best characterization of its brightness was in the W2 band, at an apparent magnitude of , though it was brighter further into the infrared. Infrared images taken with the Magellan Baade Telescope suggest evidence of sulfide clouds below water ice clouds.
Near- and mid-infrared spectra in the L- and M-bands were taken with the GNIRS instrument on the Gemini North Telescope. The M-band (4.5–5.1 μm) spectrum is dominated by water vapour (H2O) absorption, and the L-band (3.4–4.14 μm) spectrum by methane absorption. Surprisingly, neither band shows a detection of phosphine (PH3), which appears in the atmosphere of Jupiter. The M-band spectrum shows evidence for water ice clouds, and in near-infrared photometry WISE 0855 is faint compared to models, suggesting an additional absorber, probably clouds of ammonium dihydrogen phosphate ((NH4)(H2PO4)) below the water ice clouds. An approved JWST proposal describes how the team plans to use a near-infrared time series to study the hydrological cycle in the atmosphere of WISE 0855 with NIRSpec.
Observations with NIRSpec detected methane (CH4), water vapor (H2O), ammonia (NH3), and carbon monoxide (CO) in the atmosphere, but could not confirm phosphine (PH3) or carbon dioxide (CO2). Water ice clouds were also not confirmed, and the spectrum is well matched by a cloudless model. Observations with MIRI showed a water vapor depletion and a water abundance that varies with pressure, consistent with water condensing out in the upper atmosphere. The observations did not, however, detect the water ice clouds predicted in previous studies. This discrepancy is explained by rainout of the water: water condenses into particles in the upper atmosphere, which quickly sink into the lower atmosphere, and clouds only form if upward mixing is present. A similar process occurs for alkali metals in L- and T-dwarfs. A direct rainout would suggest weak mixing, but disequilibrium chemistry suggests vigorous mixing; future variability studies might resolve whether upward mixing or settling is the dominant process. Cloud models did, however, tentatively detect deep ammonium dihydrogen phosphate ((NH4)(H2PO4)) clouds. The observations also detected 15NH3 for the first time in WISE 0855. The atmosphere has a mass fraction of 14NH3/15NH3 = , meaning it has about 99.7% 14N and about 0.3% 15N. Compared to solar values and the ratio of WISE 1828, the atmosphere of WISE 0855 is enriched in 15N; the nitrogen isotope ratio is closer to that of today's 15N-enriched interstellar medium. This could mean that WISE 0855 formed from a younger cloud, but more measurements of 15N in other brown dwarfs are needed to establish evolutionary trends. In November 2024, a team used archived and new NIRSpec data to detect deuterated methane (CH3D) and about one part per billion of PH3 in WISE 0855. The detection of deuterium showed that WISE 0855 has a mass below the deuterium-burning limit.
The low abundance of PH3, on the other hand, disagrees with predictions, indicating incomplete knowledge of phosphorus chemistry.
Variability
Variability of WISE 0855 in the infrared was measured with Spitzer IRAC. A relatively small amplitude of 4–5% was measured, whereas water ice cloud models had predicted a large amplitude; this might suggest that the hemispheres of WISE 0855 deviate very little in cloud coverage. The light curve is too irregular to produce a good fit, and rotation periods between 9.7 and 14 hours were measured.
Physical parameters
Neither the mass nor the age of WISE 0855−0714 is known with certainty, but both can be constrained by its known present-day temperature. The age of WISE 0855−0714 depends on its mass: a lower mass would lead to faster cooling and thus a younger age, whereas a higher mass would lead to slower cooling and thus an older age. Assuming an age range of 1–10 billion years, evolutionary models for brown dwarfs predict that WISE 0855−0714 should have a mass between . This mass is in the range of a sub-brown dwarf or planetary-mass object.
As of 2003, the International Astronomical Union considers an object with a mass above , capable of fusing deuterium, to be a brown dwarf; a lighter object orbiting another object is considered a planet. However, if the distinction is based on how the object formed, then it might be considered a failed star, a theory advanced for the object Cha 110913-773444.
Combining its luminosity, distance, and mass it is estimated to be the coldest-known brown dwarf, with a modeled effective temperature of , depending on the model. Models matching the NIRSpec spectrum are well fitted with a temperature of .
Gallery
See also
CFBDSIR 2149-0403, the first free-floating object with a confirmed mass below .
List of nearest stars and brown dwarfs
Luhman 16
PSO J318.5-22
Superjupiter
Sub-brown dwarf
List of Y-dwarfs
Notes
References
Further reading
(Note: WISE 0855−0714 is not mentioned in this paper; it is about other Y-type objects discovered by WISE.)
External links
WISE 0855−0714 on WiseView, a tool created by volunteers of the Backyard Worlds citizen science project
Hydra (constellation)
Local Bubble
WISE objects
Y-type brown dwarfs
Rogue planets
A pneumatic anti-ice system is a technology that uses air or another gas to prevent ice buildup on ships sailing in icy waters. It is housed below the waterline on the ship's hull. Pneumatic anti-ice systems use compressed air or engine exhaust as the working gas, which is vented overboard through a series of ejectors from bow to amidships. Since the ejectors are located below the waterline or near the keel, the airflow streaming from them forms a water/air curtain along the hull.
History
The concept of a ship anti-icing system in the form of a water-air boundary layer was introduced in 1966 in the USSR. Variants of a heated steam-air system in the waterline area were considered, and the prospects for its use as a thruster to increase maneuverability were studied. The modern form was proposed by Wärtsilä in 1969 and was first installed on Finnish cargo ferry Finncarrier. It was tested in the Baltic Sea in 1970. The first icebreaker on which the pneumatic anti-ice system was installed was the Yermak, built in 1974.
Performance
The adhesion of ice to the hull has thermal and electrostatic aspects. The processes develop too quickly for above-the-waterline ice to warm to the ambient water temperature, so it freezes or sticks to the hull. Air flushing reduces the contact area of the ice with the hull and raises the temperature by creating an upward current of warmer water from greater depth, thereby solving the first problem. The other mechanism is the accumulation of an electrostatic charge in the ice as it cracks and breaks; when the underwater paint coating of the hull is in an unsatisfactory state, it can become ineffective at preventing ice from sticking.
References
Transport safety
Ice in transportation
Calcium-activated potassium channel subunit beta-2 is a protein that in humans is encoded by the KCNMB2 gene.
Big Potassium (BK) channels are large conductance, voltage and calcium-sensitive potassium channels which are fundamental to the control of smooth muscle tone and neuronal excitability. BK channels can contain two distinct subunits: a pore-forming alpha subunit and a modulatory beta subunit. Each complete BK channel contains four copies of the pore-forming alpha subunit and up to four beta subunits. The protein encoded by the KCNMB2 gene is an auxiliary beta subunit which influences the calcium sensitivity of BK currents and, following activation of BK current, causes persistent inactivation. The subunit encoded by the KCNMB2 gene is expressed in various endocrine cells, including pancreas and adrenal chromaffin cells. It is also found in the brain, including the hippocampus. The KCNMB2 gene is homologous to three other genes found in mammalian genomes: KCNMB1 (found primarily in smooth muscle), KCNMB3, and KCNMB4 (the primary brain BK auxiliary subunit).
Calcium-activated potassium channel subunit beta-2 comprises two domains. An N-terminal cytoplasmic domain, the ball and chain domain, which is responsible for the fast inactivation of these channels, and a C-terminal calcium-activated potassium channel beta subunit domain. The N-terminal domain only occurs in calcium-activated potassium channel subunit beta-2, while the C-terminal domain is found in related proteins.
See also
BK channel
Voltage-gated potassium channel
References
Further reading
External links
Protein domains
Ion channels
Geometrically and materially nonlinear analysis with imperfections included (GMNIA) is a structural analysis method designed to verify the strength capacity of a structure, accounting for both plasticity and buckling failure modes.
GMNIA is currently considered the most sophisticated, and prospectively the most accurate, method of numerical buckling strength verification.
References
Structural analysis
Stopped-flow is an experimental technique for studying chemical reactions with a half time of the order of 1 ms, introduced by Britton Chance and extended by Quentin Gibson. (Other techniques, such as the temperature-jump method, are available for much faster processes.)
Description of the method
Summary
Stopped-flow spectrometry allows the chemical kinetics of fast reactions (with half times of the order of milliseconds) to be studied in solution. It was first used primarily to study enzyme-catalyzed reactions, and then rapidly found its place in almost all biochemistry, biophysics, and chemistry laboratories with a need to follow chemical reactions on the millisecond time scale.
In its simplest form, a stopped-flow mixes two solutions. Small volumes of the solutions are rapidly and continuously driven into a high-efficiency mixer, and this mixing initiates an extremely fast reaction. The newly mixed solution travels to the observation cell and pushes out the contents of the cell (the solution remaining from the previous experiment or from necessary washing steps). The time required for this solution to pass from the mixing point to the observation point is known as the dead time. The minimum injection volume depends on the volume of the mixing cell. Once enough solution has been injected to completely displace the previous solution, the instrument reaches a stationary state and the flow can be stopped. Depending on the syringe drive technology, the stop is achieved either with a stop valve, called the hard stop, or with a stop syringe. The stopped-flow also sends a "start" signal, called the trigger, to the detector so the reaction can be observed. The timing of the trigger is usually software-controlled, so the user can trigger at the moment the flow stops or a few milliseconds before the stop to check that the stationary state has been reached.
Reactant syringes
Two syringes are filled with solutions that do not undergo a chemical reaction until mixed together. These have pistons that are driven by a single drive piston or by independent stepping motors, so that they are coupled together and their contents are forced out simultaneously into a mixing device.
Mixing chamber
Once the two solutions are forced out of their syringes they enter a mixing system that has baffles to ensure complete mixing, with turbulent flow rather than laminar flow. (Laminar flow would allow the two solutions to flow side by side with incomplete mixing.)
Dead time
The dead time is the time for the solution to travel from the mixing point to the observation point; it is the part of the kinetics that cannot be observed, so the shorter the dead time, the more information the user can get. In older instruments this could be of the order of 1 ms, but improvements now allow a dead time of about 0.3 ms.
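The penalty of dead time can be made concrete for a pseudo-first-order reaction, whose signal changes as exp(−k_obs·t): everything before the first observable point is lost. A minimal sketch in Python, with an assumed rate constant for illustration.

```python
import math

def amplitude_lost(k_obs: float, dead_time: float) -> float:
    """Fraction of the exponential signal change that occurs before
    observation starts, i.e. 1 - exp(-k_obs * dead_time)."""
    return 1.0 - math.exp(-k_obs * dead_time)

# For an assumed k_obs of 500 s^-1 (half time ~1.4 ms):
loss_1ms  = amplitude_lost(500.0, 1.0e-3)   # a 1 ms dead time loses ~39% of the signal
loss_03ms = amplitude_lost(500.0, 0.3e-3)   # a 0.3 ms dead time loses only ~14%
```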
Observation cell
The mixed reactants pass an observation cell that allows the reaction to be followed spectrophotometrically, typically by ultraviolet spectroscopy, fluorescence spectroscopy, circular dichroism or light scattering, and it is now common to combine several of these.
Observation cuvettes with a short light path (0.75 to 1.5 mm) are usually preferred for fluorescence measurements, to reduce self-absorption effects. Cuvettes with a longer light path (0.5 to 1 cm) are preferred for absorbance measurements. Modern stopped-flow instruments can accommodate different models of cell, and it is possible to change the cuvette between two experiments.
For stopped-flow X-ray measurements, a thin-walled quartz capillary is used to minimize absorption by the quartz. Simultaneous X-ray and absorbance measurements are possible in the same capillary.
Stopping
Once through the observation cell, the mixture enters a third syringe whose piston is driven by the flow until it activates a switch that stops the flow and starts the observation.
Continuous flow
The stopped-flow method is a development of the continuous-flow method used by Hamilton Hartridge and Francis Roughton to study the binding of O2 to hemoglobin. In the absence of any stopping system the reaction mixture passed to a long tube past an observation system (consisting in 1923 of a simple colorimeter) to waste. By moving the colorimeter along the tube, and knowing the flow rate, Hartridge and Roughton could measure the process after a known time.
In its time this was a revolutionary advance showing an apparently intractable problem (studying a process taking milliseconds with equipment that required seconds for each measurement) could be solved with simple equipment. However, in practice it was limited to reactants available in large quantities: for proteins this effectively limited it to reactions of hemoglobin. For practical purposes this approach is obsolete.
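In the continuous-flow method, the reaction age at the observation point is simply the tube distance divided by the linear flow velocity. A minimal sketch assuming plug flow; the tube bore and flow rate are illustrative values only.

```python
import math

def reaction_age_ms(distance_m: float, flow_m3_per_s: float,
                    tube_radius_m: float) -> float:
    """Time since mixing when the solution reaches a point distance_m
    down the tube, assuming plug flow."""
    velocity = flow_m3_per_s / (math.pi * tube_radius_m ** 2)  # m/s
    return distance_m / velocity * 1e3                          # ms

# 1 mL/s through a 1 mm bore tube gives ~1.27 m/s, so a point 1 cm
# downstream of the mixer observes the reaction ~7.9 ms after mixing.
age = reaction_age_ms(0.01, 1.0e-6, 0.5e-3)
```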
Quenched flow
The stopped-flow method depends on the existence of spectroscopic properties that can be used for following the reaction. When that is not the case, quenched flow provides an alternative that uses conventional chemical methods for analysis. Instead of a mechanical stopping system, the reaction is stopped by quenching: the products are delivered to a recipient that stops the reaction immediately, either by instantaneous freezing, by denaturing the enzyme with a chemical denaturant, or by exposing the sample to a denaturing light source. As in the continuous-flow method, the time between mixing and quenching can be varied by varying the length of the tube.
The pulsed quenched flow method introduced by Alan Fersht and Ross Jakes overcomes the need for a long tube. The reaction is initiated exactly as in a stopped-flow experiment, but there is a third syringe that brings about quenching a definite and preset time after the initiation.
Quenched flow has both advantages and disadvantages with respect to stopped flow. On the one hand, chemical analysis makes it clear what process is being measured, whereas it may not always be obvious what process a spectroscopic signal represents. On the other hand, quenched flow is much more laborious, as each point along the time course must be determined separately. The image at left for catalysis by nitrogenase from Klebsiella pneumoniae illustrates both of these points: the agreement in half times indicates that the absorbance at 420 nm measured the release of Pi, but the quenched-flow experiment required 11 data points.
References
Further reading
Chemical kinetics
Biophysics
This is a list of the main organ systems in the human body.
Circulatory system/cardiovascular system
Circulates blood around the body via the heart, arteries and veins, delivering oxygen and nutrients to organs and cells and carrying their waste products away, as well as keeping the body's temperature in a safe range.
Digestive system/excretory system
Absorbs nutrients and removes waste via the gastrointestinal tract, including the mouth, esophagus, stomach and intestines.
Endocrine system
Influences the function of the body using hormones.
Exocrine system
A system of glands that secrete substances through ducts for various functions.
Integumentary system
The integumentary system comprises the skin and its appendages: hair, nails, sweat glands, and oil glands.
Immune system/lymphatic system
Defends the body against pathogens that may harm the body. The system contains a network of lymphatic vessels that carry a clear fluid called lymph.
Muscular system
Enables the body to move using muscles.
Nervous system
Collects and processes information from the senses via nerves and the brain and tells the muscles to contract to cause physical actions.
Reproductive system
The reproductive organs are required for the production of offspring.
Respiratory system
Brings air into and out of the lungs to absorb oxygen and remove carbon dioxide.
Skeletal system
Bones maintain the structure of the body and its organs.
Urinary system/renal system
The urinary system (also known as the renal system) filters the blood with the help of the kidneys to produce urine and eliminate waste.
See also
List of distinct cell types in the adult human body
List of organs of the human body
Organ systems
Systems
Uncharacterized protein C2orf27 is a protein that in humans is encoded by the C2orf27A gene. Although its function is not clearly understood, bioinformatic analysis is bringing more information to light.
Gene
The mRNA is 1,222 bp in length and is located at 2q21.2, with a total of five exons in Homo sapiens. Some sources list C2orf27B as a paralog, but this is unlikely because both genes are located at the same place on chromosome 2; it seems to be generally accepted that they are the same gene. Other gene aliases include C2orf27 and chromosome 2 open reading frame 27A. The gene is flanked upstream by POTEKP and downstream by ANKRD30BL.
Protein
The C2orf27 protein sequence is 203 amino acids long and has a molecular weight of 21.5 kDa with a pI of 5.13 in Homo sapiens. Across the primate orthologs, the molecular weights range from 21.4 to 36.7 kDa and the isoelectric points from 4.58 to 5.25. The protein is located in the nucleus of the cell and does not contain any transmembrane regions.
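As a quick arithmetic check on the figures above: dividing the stated 21.5 kDa by 203 residues gives an average residue mass close to the canonical ~110 Da per amino acid, so the reported length and mass are mutually consistent.

```python
# Average residue mass implied by the reported size of C2orf27.
mw_da = 21_500   # reported molecular weight, Da
n_res = 203      # reported sequence length, residues

avg_residue_mass = mw_da / n_res   # ~105.9 Da per residue
```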
Looking at the motifs of the protein sequence, a few important ones are present. All of the repeat sequences are concentrated near the N-terminus of the protein and are highly conserved through all the orthologs.
Post-translationally, there are multiple glycosylation sites scattered throughout the protein sequence, a phosphorylation site at S13, and a nuclear export signal at L80–V90, which also falls within a coiled-coil region.
Expression
C2orf27 is ubiquitously expressed in most tissues but with increased expression in the brain, pancreas, kidneys, and testis.
Interactions
The protein is reported to interact with another protein, ataxin-1; the interaction was discovered using a two-hybrid prey pooling (Y2H) approach.
Both proteins are located in the nucleus of cells and are expressed in the brain.
Structure
The overall structure of this protein is predicted to contain both alpha-helices and beta-sheets. The majority of the alpha-helices fall near the N-terminus of the protein and the beta-sheets near the C-terminus. A run of four prolines from P185 to P188 adopts the secondary structure of a type II polyproline helix.
Evolutionary History
This gene is found in primates; matches in other mammals and in organisms such as fish, invertebrates, fungi, bacteria, and plants occur only at very poor E-values. The protein C2orf27, however, is found strictly in primates such as chimpanzees, gorillas, and baboons.
When the mRNA of C2orf27A is compared outside primates, it shows high similarity to the gene neurobeachin (NBEA). On closer inspection, NBEA was found to be on a different reading frame than C2orf27A, which already begins to rule out similarity between the two. This was confirmed by comparing the two protein sequences, which gave only 44% similarity over a 1% query cover; the protein sequences are entirely different, suggesting that their functions may not be similar. Comparison of the aligned sequences also indicated that a duplication event occurred between NBEA and C2orf27A: NBEA is present on chromosome 13, but a section of its mRNA corresponds, with a 96% similarity score, to exons one through four of C2orf27 on chromosome 2. This may be an example of a gene duplication event.
Taking all of this into account, the duplication of NBEA into chromosome 2 to form C2orf27 may mark the point at which the gene became restricted to primates.
Clinical Significance
C2orf27A has been found to be associated with nonsyndromic craniosynostosis, a premature fusion of the calvaria. There are two distinct subtypes of this disease, each characterized by increased expression of particular genes: subtype A is associated with increased insulin-like growth factor expression, and subtype B with increased integrin expression. Expression of C2orf27A is increased in patients with subtype B.
Through a combination of a microarray assay and use of IPA software, C2orf27A has been found to be regulated by the hormone melatonin and linked with a role in cellular movement, the function and development of blood and bone marrow, and cell-mediated response of the immune system.
The chimeric fusion of C2orf27A (exon 1) and NBEA (exons 37 and 38) has been observed only in ovarian cancer samples.
References
Genes
Uncharacterized proteins | C2orf27 | Biology | 1,071 |
48,912,755 | https://en.wikipedia.org/wiki/Bismuth%20antimonide | Bismuth antimonides, bismuth antimonys, or bismuth-antimony alloys, (Bi1−xSbx) are binary alloys of bismuth and antimony in various ratios.
Some, in particular Bi0.9Sb0.1, were the first experimentally-observed three-dimensional topological insulators, materials that have conducting surface states but have an insulating interior.
Various BiSb alloys also superconduct at low temperatures, are semiconductors, and are used in thermoelectric devices.
Bismuth antimonide itself (see box to right) is sometimes described as Bi2Sb2.
Synthesis
Crystals of bismuth antimonides are synthesized by melting bismuth and antimony together under inert gas or vacuum. Zone melting is used to decrease the concentration of impurities. When synthesizing single crystals of bismuth antimonides, it is important that impurities are removed from the samples, as oxidation occurring at the impurities leads to polycrystalline growth.
Properties
Topological insulator
Pure bismuth is a semimetal with a small band overlap, which gives it a relatively high conductivity ( at 20 °C). When bismuth is doped with antimony, the conduction band decreases in energy and the valence band increases in energy. At an antimony concentration of 4%, the two bands intersect, forming a Dirac point (a point where the conduction and valence bands meet). Further increases in the antimony concentration result in a band inversion, in which the energy of the valence band becomes greater than that of the conduction band at specific momenta. Between Sb concentrations of 7 and 22%, the bands no longer intersect and Bi1−xSbx becomes an inverted-band insulator. At these higher Sb concentrations the band gap in the surface states vanishes, and the material thus conducts at its surface.
Superconductor
The highest temperature at which Bi0.4Sb0.6, as a thin film of thickness 150–1350 Å, superconducts (the critical temperature Tc) is approximately 2 K. Single-crystal Bi0.935Sb0.065 superconducts at slightly higher temperatures; at 4.2 K, its critical magnetic field Bc (the maximum magnetic field that the superconductor can expel) is 1.6 T.
Semiconductor
Electron mobility is one important parameter describing semiconductors because it describes the rate at which electrons can travel through the semiconductor. At 40 K, electron mobility ranged from at an antimony concentration of 0 to at an antimony concentration of 7.2%. This is much greater than the electron mobility of other common semiconductors like silicon, which is 1400 cm2/V·s at room temperature.
Another important parameter of Bi1−xSbx is the effective electron mass (EEM), which relates the force applied to an electron to its resulting acceleration. The effective electron mass is for x = 0.11 and at x = 0.06. This is much less than the electron effective mass in many common semiconductors (1.09 in Si at 300 K, 0.55 in Ge, and 0.067 in GaAs). A low EEM is good for thermophotovoltaic applications.
Thermoelectric
Bismuth antimonides are used as the n-type legs in many thermoelectric devices below room temperature. The thermoelectric efficiency is characterized by the figure of merit zT = S²σT/λ, where S is the Seebeck coefficient, λ the thermal conductivity, σ the electrical conductivity, and T the absolute temperature; it describes the ratio of the energy provided by the thermoelectric to the heat absorbed by the device. At 80 K, the figure of merit (zT) for Bi1−xSbx peaks when x = 0.15. Also, the Seebeck coefficient (the ratio of the potential difference between the ends of a material to the temperature difference between them) of Bi0.9Sb0.1 at 80 K is −140 μV/K, much lower than the −50 μV/K of pure bismuth.
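As an illustration of the zT formula above, the short sketch below computes the figure of merit from its ingredients. Only the −140 μV/K Seebeck coefficient for Bi0.9Sb0.1 at 80 K comes from the text; the electrical and thermal conductivity values are made-up placeholders.

```python
# zT = S^2 * sigma * T / lambda. The sigma and lambda values below are
# illustrative placeholders; only the -140 uV/K Seebeck coefficient for
# Bi0.9Sb0.1 at 80 K appears in the text.

def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, lambda_w_per_m_k, temp_k):
    """Dimensionless thermoelectric figure of merit zT."""
    return seebeck_v_per_k ** 2 * sigma_s_per_m * temp_k / lambda_w_per_m_k

zT = figure_of_merit(-140e-6, 2.0e5, 4.0, 80.0)   # assumed sigma and lambda
print(f"zT ≈ {zT:.3f}")
```

Note that zT grows with the square of the Seebeck coefficient, which is why the large magnitude of S in Bi1−xSbx matters for low-temperature thermoelectrics.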
References
Antimonides
Binary compounds
Bismuth compounds
Semiconductor materials | Bismuth antimonide | Chemistry | 918 |
52,644,504 | https://en.wikipedia.org/wiki/HilD%203%27UTR%20regulatory%20element | The 3′ UTR of the mRNA of hilD, a master regulator of Salmonella pathogenicity island 1 (SPI-1), is a prokaryotic example of a functional 3′ UTR. The 3′ UTR is a target for hilD mRNA degradation by the degradosome, and it may play a role in hilD and SPI-1 expression by serving as a target for the Hfq RNA chaperone. Under non-invasive conditions it is necessary to keep SPI-1 expression low. The 3′ UTR acts as a regulatory motif in S. Typhimurium virulence.
References
RNA
Non-coding RNA | HilD 3'UTR regulatory element | Chemistry | 137 |
70,338,934 | https://en.wikipedia.org/wiki/Kepler-289 | Kepler-289 (PH3) is a rotating variable star slightly more massive than the Sun, with an unknown spectral type, 2370 light-years away from Earth in the constellation of Cygnus. In 2014, three exoplanets were discovered orbiting it.
Planetary system
Kepler-289 hosts four planets, three confirmed (Kepler-289b, Kepler-289c, Kepler-289d) and one unconfirmed candidate (Kepler-289e). The discovery of this system was made using the transit method. The inner three planets were found in 2014 with the Kepler space telescope and the Planet Hunters team, while planet e was discovered by follow-up studies in 2017.
References
Cygnus (constellation)
Planetary systems with three confirmed planets
J19495168+4252582
273234825 | Kepler-289 | Astronomy | 168 |
58,823,220 | https://en.wikipedia.org/wiki/George%20Burba | George Burba is an American bio-atmospheric scientist, author, and inventor.
Burba is a Science & Strategy Fellow at LI-COR Biosciences of the Battery Ventures Group, a Global Fellow at Robert B. Daugherty Water for Food Global Institute, and a Graduate Adjunct Professor at the University of Nebraska-Lincoln. He is a co-founder of CarbonDew, a non-profit Community of Practice developing novel climate solutions across economic sectors.
Burba is a leading figure in micrometeorology, surface-to-atmosphere exchange of greenhouse gases, water vapor, heat and momentum, and the direct real-time measurements of carbon emission and sequestration, evaporation and transpiration, and turbulent transport within the atmospheric boundary layer. He is an author of multiple books on these subjects, used by universities and teaching institutions across the globe, as well as numerous other publications.
Research and career
Burba is an expert on in-situ measurement methods: the author of the instrument surface-heating concept and the related equations known as "Burba corrections"; the inventor of two new types of gas analyzers, known as "enclosed-path" and "semi-open-path", and of a multi-method flux emissions station; and the developer of new methods for computing gas fluxes from open-path high-speed laser-based analyzers and from multiple types of low-speed gas analyzers. He is an elected Senior Member of the National Academy of Inventors.
After his PhD, Burba worked as a graduate faculty at the University of Nebraska and as a scientist at the LI-COR Biosciences. In 2016, he was appointed Global Fellow at Robert B. Daugherty Water for Food Global Institute. At LI-COR Biosciences, he was appointed to the position of Science Fellow in 2017, and to the position of Science & Strategy Fellow in 2019. The same year he was elected a Senior Member of the National Academy of Inventors.
In 2022, Burba co-founded CarbonDew, a non-profit climate- and carbon-focused international Community of Practice, that united carbon experts from over 200 organizations to jointly address the negative consequences of climate change by promoting fair and equitable solutions based on direct measurements of GHG exchange in and out of the air.
In 2023, he was inducted as a Full Member of Sigma Xi, the Scientific Research Honor Society.
Education
Burba was educated at Lomonosov Moscow State University and at the University of Nebraska where he received a PhD in 2005 in Bio-Atmospheric Sciences for the study of GHG, water, light and energy transport in the natural and agricultural systems, supervised by Professor Shashi Verma.
Personal life
George Burba is a son of ru:George A. Burba and a grandson of Aleksandr A. Burba.
References
Living people
Year of birth missing (living people)
University of Nebraska alumni
Moscow State University alumni
American technology businesspeople
American atmospheric scientists
American environmental scientists
American scientists
American textbook writers
American science writers
21st-century American inventors | George Burba | Environmental_science | 623 |
42,863,440 | https://en.wikipedia.org/wiki/Pulse%20vaccination%20strategy | The pulse vaccination strategy is a method used to eradicate an epidemic by repeatedly vaccinating a group at risk, over a defined age range, until the spread of the pathogen has been stopped. It is most commonly used during measles and polio epidemics to quickly stop the spread and contain the outbreak.
Mathematical model
In this model, a constant fraction p of the susceptible subjects is vaccinated every T time units, in a time short relative to the epidemic dynamics. This yields the differential equations for the susceptible and vaccinated subjects as
Further, by setting , one obtains that the dynamics of the susceptible subjects is given by:
and that the eradication condition is:
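The model's equations are not reproduced above. As a stand-alone illustration, the pulse-vaccination idea can be sketched with a standard SIR-type model in which a fraction p of susceptibles is moved into the vaccinated class every T time units; all parameter values below are illustrative assumptions, not taken from the article.

```python
# Minimal SIR-type model with pulse vaccination: every T time units a
# fraction p of the susceptible pool is moved, instantaneously, into the
# vaccinated class. All parameter values are illustrative only.

def simulate(beta=0.5, gamma=0.2, mu=0.02, p=0.3, T=5.0, dt=0.01, t_end=200.0):
    S, I, V = 0.99, 0.01, 0.0          # fractions of a constant population
    t, next_pulse = 0.0, 0.0
    while t < t_end:
        if t >= next_pulse:            # vaccination pulse: S -> (1 - p) S
            V += p * S
            S *= 1.0 - p
            next_pulse += T
        dS = mu * (1.0 - S) - beta * S * I   # births replenish susceptibles
        dI = beta * S * I - (gamma + mu) * I
        dV = -mu * V
        S, I, V = S + dS * dt, I + dI * dt, V + dV * dt
        t += dt
    return S, I, V

S, I, V = simulate()
print(f"final infected fraction: {I:.3e}")
```

With these parameters the repeated pulses keep the mean susceptible fraction low enough that the infection dies out, which is the qualitative content of the eradication condition.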
See also
Critical community size
Epidemic model
Herd immunity
Pulse Polio
Ring vaccination
Vaccine-naive
References
External links
Immunisation Immunisation schedule for children in the UK. Published by the UK Department of Health.
CDC.gov - 'National Immunization Program: leading the way to healthy lives', US Centers for Disease Control (CDC information on vaccinations)
CDC.gov - Vaccines timeline
History of Vaccines Medical education site from the College of Physicians of Philadelphia, the oldest medical professional society in the US
Images of vaccine-preventable diseases
Vaccination
Biotechnology
Preventive medicine
Epidemiology
Global health | Pulse vaccination strategy | Biology,Environmental_science | 257 |
59,497,390 | https://en.wikipedia.org/wiki/NGC%207674 | NGC 7674 is a spiral galaxy located in the constellation Pegasus. It is located at a distance of about 350 million light years from Earth, which, given its apparent dimensions, means that NGC 7674 is about 125,000 light years across. It was discovered by John Herschel on August 16, 1830.
Characteristics
The galaxy is seen nearly face-on, at an inclination of 31 degrees. The central bar-shaped structure, measuring 15×5 arcseconds, is made up of stars. The galaxy has two spiral arms that become broader with increasing distance from the center. One arm vanishes at the point where it overlaps with the nearby galaxy NGC 7674A. The shape of NGC 7674, including the long narrow streamers emanating northeast and northwest of the galaxy, can be accounted for by tidal interactions with its companions. There is no dwarf galaxy seen inside the streamers. It is featured in Arp's Atlas of Peculiar Galaxies as number 182, in the category "galaxies with narrow filaments".
NGC 7674 has a powerful active nucleus of the kind known as a type 2 Seyfert that is perhaps fed by gas drawn into the center through the interactions with the companions. In 1975, observations of excess ultraviolet emission led to designation as Markarian 533 in Markarian's catalog. Later, using spectropolarimetry, emission characteristic of a hidden broad-line region (BLR), visible only in the polarized flux spectrum was detected, implying that the nucleus of NGC 7674 is an obscured type 1 Seyfert, hidden by a dust torus. In the center of NGC 7674 lies a supermassive black hole whose mass is estimated to be nearly based on stellar velocity dispersion. When observed in radio waves, NGC 7674 features two radio jets with an S-shape, 0.7 kpc long. The reason for this shape may be a change in the black hole spin axis due to a minor merger, the presence of a binary black hole or due to interactions with the interstellar medium. Two radio sources with characteristics similar to accreting supermassive black holes have been observed in the centre of NGC 7674, at a projected separation of 0.35 parsec.
NGC 7674 falls into the family of luminous infrared galaxies, with its infrared luminosity being 10^11.54 . The luminous infrared galaxies are characterised by intense star forming activity. The total star formation rate in NGC 7674 is estimated to be 54 per year, and the star formation rate at the nucleus is 4.3 per year.
Two supernovae have been observed in NGC 7674, SN 2011ee (type Ic, mag 18.6) and SN 2011hb (type Ia, mag 18.8).
Nearby galaxies
NGC 7674 is the brightest and largest member of the isolated Hickson 96 compact group of galaxies, consisting of four galaxies. NGC 7674 forms a pair with its smaller companion NGC 7674A, which lies 34 arcseconds to the north. NGC 7675, an elliptical galaxy, lies 2.2 arcminutes to the east.
References
External links
NGC 7674 on SIMBAD
Unbarred spiral galaxies
Peculiar galaxies
Seyfert galaxies
Luminous infrared galaxies
Pegasus (constellation)
7674
12608
182
71504
Markarian 0533
Discoveries by John Herschel
Astronomical objects discovered in 1830 | NGC 7674 | Astronomy | 707 |
21,345,662 | https://en.wikipedia.org/wiki/Prostanozol | Prostanozol, also known as demethylstanozolol tetrahydropyran ether, is an androgen/anabolic steroid (AAS) and designer steroid which acts as a prodrug of the 17α-demethylated analogue of stanozolol (Winstrol). It was found in 2005 as an ingredient of products sold as "dietary supplements" for bodybuilding.
It is one of hundreds of drugs banned from the Olympics by the IOC. Russian marathon runner Lyubov Denisova was banned for two years from competition after testing positive for prostanozol and testosterone in 2007.
References
Abandoned drugs
Androgen ethers
Anabolic–androgenic steroids
Androstanes
Designer drugs
Tetrahydropyrans
World Anti-Doping Agency prohibited substances | Prostanozol | Chemistry | 168 |
17,040,082 | https://en.wikipedia.org/wiki/Centerpoint%20%28geometry%29 | In statistics and computational geometry, the notion of centerpoint is a generalization of the median to data in higher-dimensional Euclidean space. Given a set of points in d-dimensional space, a centerpoint of the set is a point such that any hyperplane that goes through that point divides the set of points in two roughly equal subsets: the smaller part should have at least a 1/(d + 1) fraction of the points. Like the median, a centerpoint need not be one of the data points. Every non-empty set of points (with no duplicates) has at least one centerpoint.
Related concepts
Closely related concepts are the Tukey depth of a point (the minimum number of sample points on one side of a hyperplane through the point) and a Tukey median of a point set (a point maximizing the Tukey depth). A centerpoint is a point of depth at least n/(d + 1), and a Tukey median must be a centerpoint, but not every centerpoint is a Tukey median. Both terms are named after John Tukey.
For a different generalization of the median to higher dimensions, see geometric median.
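The Tukey depth described above can be computed for a planar point with a short sketch. The angular-sampling trick below is an implementation choice, not part of the definition, and assumes points in general position.

```python
import math

def tukey_depth(p, points, eps=1e-7):
    """Tukey depth of p: the minimum number of sample points contained in
    a closed halfplane whose boundary passes through p. Planar sketch,
    assuming points in general position."""
    px, py = p
    angles = []
    for (x, y) in points:
        vx, vy = x - px, y - py
        if vx == 0 and vy == 0:
            continue                 # p itself is counted in every halfplane
        a = math.atan2(vy, vx)
        # The halfplane count is piecewise constant in the direction angle and
        # changes only at angles perpendicular to some (x_i - p), so sampling
        # just past each critical angle, in both orientations, is enough.
        angles.append(a + math.pi / 2 + eps)
        angles.append(a - math.pi / 2 + eps)
    best = len(points)
    for a in angles:
        ux, uy = math.cos(a), math.sin(a)
        count = sum(1 for (x, y) in points
                    if ux * (x - px) + uy * (y - py) >= -1e-12)
        best = min(best, count)
    return best

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
print(tukey_depth((0, 0), square))   # -> 2: every halfplane keeps 2 corners
print(tukey_depth((1, 1), square))   # -> 1: a corner can be cut off alone
```

For the square, the centre's depth of 2 meets the centerpoint bound n/(d + 1) = 4/3, so the centre is a centerpoint (and a Tukey median) of this set.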
Existence
A simple proof of the existence of a centerpoint may be obtained using Helly's theorem. Suppose there are n points, and consider the family of closed half-spaces that contain more than dn/(d + 1) of the points. Fewer than n/(d + 1) points are excluded from any one of these halfspaces, so the intersection of any subset of d + 1 of these halfspaces must be nonempty. By Helly's theorem, it follows that the intersection of all of these halfspaces must also be nonempty. Any point in this intersection is necessarily a centerpoint.
Algorithms
For points in the Euclidean plane, a centerpoint may be constructed in linear time. In any dimension d, a Tukey median (and therefore also a centerpoint) may be constructed in time O(n^(d − 1) + n log n).
A randomized algorithm that repeatedly replaces sets of d + 2 points by their Radon point can be used to compute an approximation to a centerpoint of any point set, in the sense that its Tukey depth is linear in the sample set size, in an amount of time that is polynomial in the dimension.
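A minimal sketch of the Radon-point step is below, together with a simplified shrinking variant of the iterated-Radon approximation (the batching scheme here is a simplification for illustration, not the published algorithm itself).

```python
import numpy as np

rng = np.random.default_rng(0)

def radon_point(pts):
    """Radon point of d+2 points in R^d (pts has shape (d+2, d)).
    Solve M @ alpha = 0 with rows [coordinates; ones], so that
    sum(alpha) = 0 and sum(alpha_i * x_i) = 0."""
    n = len(pts)
    M = np.vstack([pts.T, np.ones(n)])
    alpha = np.linalg.svd(M)[2][-1]      # null-space vector of M
    pos = alpha > 0
    w = alpha[pos]
    return (w @ pts[pos]) / w.sum()      # convex combination of one side

def approx_centerpoint(points):
    """Simplified variant: repeatedly replace d+2 random points by their
    Radon point until only d+2 points remain, then average them."""
    pts = list(np.asarray(points, float))
    d = len(pts[0])
    while len(pts) > d + 2:
        order = rng.permutation(len(pts))
        pts = [pts[i] for i in order]
        batch, pts = np.array(pts[:d + 2]), pts[d + 2:]
        pts.append(radon_point(batch))
    return np.mean(pts, axis=0)

cloud = rng.normal(size=(100, 2))
cp = approx_centerpoint(cloud)
print(cp)
```

Each Radon point lies in the convex hull of its batch, so the result always lies in the convex hull of the input; the depth guarantee of the published algorithm, however, depends on its exact replacement scheme.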
References
Citations
Sources
Euclidean geometry
Multi-dimensional geometry
Means
Point (geometry) | Centerpoint (geometry) | Physics,Mathematics | 507 |
374,338 | https://en.wikipedia.org/wiki/NMDA%20receptor | The N-methyl-D-aspartate receptor (also known as the NMDA receptor or NMDAR), is a glutamate receptor and predominantly Ca2+ ion channel found in neurons. The NMDA receptor is one of three types of ionotropic glutamate receptors, the other two being AMPA and kainate receptors. Depending on its subunit composition, its ligands are glutamate and glycine (or D-serine). However, the binding of the ligands is typically not sufficient to open the channel as it may be blocked by Mg2+ ions which are only removed when the neuron is sufficiently depolarized. Thus, the channel acts as a "coincidence detector" and only once both of these conditions are met, the channel opens and it allows positively charged ions (cations) to flow through the cell membrane. The NMDA receptor is thought to be very important for controlling synaptic plasticity and mediating learning and memory functions.
The NMDA receptor is ionotropic, meaning it is a protein which allows the passage of ions through the cell membrane. The NMDA receptor is so named because the agonist molecule N-methyl-D-aspartate (NMDA) binds selectively to it, and not to other glutamate receptors. Activation of NMDA receptors results in the opening of the ion channel that is nonselective to cations, with a combined reversal potential near 0 mV. While the opening and closing of the ion channel is primarily gated by ligand binding, the current flow through the ion channel is voltage-dependent. Specifically located on the receptor, extracellular magnesium (Mg2+) and zinc (Zn2+) ions can bind and prevent other cations from flowing through the open ion channel. A voltage-dependent flow of predominantly calcium (Ca2+), sodium (Na+), and potassium (K+) ions into and out of the cell is made possible by the depolarization of the cell, which displaces and repels the Mg2+ and Zn2+ ions from the pore. Ca2+ flux through NMDA receptors in particular is thought to be critical in synaptic plasticity, a cellular mechanism for learning and memory, due to proteins which bind to and are activated by Ca2+ ions.
Activity of the NMDA receptor is blocked by many psychoactive drugs such as phencyclidine (PCP), alcohol (ethanol) and dextromethorphan (DXM). The anaesthetic and analgesic effects of the drugs ketamine and nitrous oxide are also partially due to their blocking of NMDA receptor activity. In contrast, overactivation of the NMDAR by NMDA agonists increases cytosolic concentrations of calcium and zinc, which significantly contributes to neural death. This effect is prevented by cannabinoids via activation of the CB1 receptor, which leads the HINT1 protein to counteract the toxic effects of NMDAR-mediated NO production and zinc release. Cannabinoids also protect against methamphetamine-induced neurotoxicity: they inhibit neuronal nitric oxide synthase (nNOS) expression and astrocyte activation, reducing methamphetamine-induced brain damage through CB1-dependent and CB1-independent mechanisms, respectively, while inhibition of methamphetamine-induced astrogliosis by THC likely occurs through a CB2 receptor-dependent mechanism. Since 1989, memantine has been recognized to be an uncompetitive antagonist of the NMDA receptor, entering the channel of the receptor after it has been activated and thereby blocking the flow of ions.
Overactivation of the receptor, causing excessive influx of Ca2+ can lead to excitotoxicity which is implied to be involved in some neurodegenerative disorders. Blocking of NMDA receptors could therefore, in theory, be useful in treating such diseases. However, hypofunction of NMDA receptors (due to glutathione deficiency or other causes) may be involved in impairment of synaptic plasticity and could have other negative repercussions. The main problem with the utilization of NMDA receptor antagonists for neuroprotection is that the physiological actions of the NMDA receptor are essential for normal neuronal function. To be clinically useful NMDA antagonists need to block excessive activation without interfering with normal functions. Memantine has this property.
History
The discovery of NMDA receptors was followed by the synthesis and study of N-methyl-D-aspartic acid (NMDA) in the 1960s by Jeff Watkins and colleagues. In the early 1980s, NMDA receptors were shown to be involved in several central synaptic pathways. Receptor subunit selectivity was discovered in the early 1990s, leading to the recognition of a new class of compounds that selectively inhibit the NR2B subunit. These findings prompted a vigorous drug-discovery campaign in the pharmaceutical industry, as NMDA receptors were considered to be associated with a variety of neurological disorders such as epilepsy, Parkinson's, Alzheimer's, Huntington's, and other CNS disorders.
In 2002, it was discovered by Hilmar Bading and co-workers that the cellular consequences of NMDA receptor stimulation depend on the receptor's location on the neuronal cell surface. Synaptic NMDA receptors promote gene expression, plasticity-related events, and acquired neuroprotection. Extrasynaptic NMDA receptors promote death signaling; they cause transcriptional shut-off, mitochondrial dysfunction, and structural disintegration. This pathological triad of extrasynaptic NMDA receptor signaling represents a common conversion point in the etiology of several acute and chronic neurodegenerative conditions. The molecular basis for toxic extrasynaptic NMDA receptor signaling was uncovered by Hilmar Bading and co-workers in 2020. Extrasynaptic NMDA receptors form a death signaling complex with TRPM4. NMDAR/TRPM4 interaction interface inhibitors (also known as interface inhibitors) disrupt the NMDAR/TRPM4 complex and detoxify extrasynaptic NMDA receptors.
A fortuitous finding was made in 1968 when a woman taking amantadine as flu medicine experienced remarkable remission of her Parkinson's symptoms. This finding, reported by Schwab et al., was the beginning of medicinal chemistry of adamantane derivatives in the context of diseases affecting the CNS. Before this finding, memantine, another adamantane derivative, had been synthesized by Eli Lilly and Company in 1963. The purpose was to develop a hypoglycemic drug, but it showed no such efficacy. It was not until 1972 that a possible therapeutic importance of memantine for treating neurodegenerative disorders was discovered. From 1989 memantine has been recognized to be an uncompetitive antagonist of the NMDA receptor.
Structure
Functional NMDA receptors are heterotetramers comprising different combinations of the GluN1, GluN2 (A-D), and GluN3 (A-B) subunits derived from distinct gene families (Grin1-Grin3). All NMDARs contain two of the obligatory GluN1 subunits, which when assembled with GluN2 subunits of the same type, give rise to canonical diheteromeric (d-) NMDARs (e.g., GluN1-2A-1-2A). Triheteromeric NMDARs, by contrast, contain three different types of subunits (e.g., GluN1-2A-1-2B), and include receptors that are composed of one or more subunits from each of the three gene families, designated t-NMDARs (e.g., GluN1-2A-3A-2A). There is one GluN1, four GluN2, and two GluN3 subunit encoding genes, and each gene may produce more than one splice variant.
GluN1 – GRIN1
GluN2
GluN2A – GRIN2A
GluN2B – GRIN2B
GluN2C – GRIN2C
GluN2D – GRIN2D
GluN3
GluN3A – GRIN3A
GluN3B – GRIN3B
Gating
The NMDA receptor is a glutamate and ion channel protein receptor that is activated when glycine and glutamate bind to it. The receptor is a highly complex and dynamic heteromeric protein that interacts with a multitude of intracellular proteins via three distinct subunits, namely GluN1, GluN2, and GluN3. The GluN1 subunit, which is encoded by the GRIN1 gene, exhibits eight distinct isoforms owing to alternative splicing. On the other hand, the GluN2 subunit, of which there are four different types (A-D), as well as the GluN3 subunit, of which there are two types (A and B), are each encoded by six separate genes. This intricate molecular structure and genetic diversity enable the receptor to carry out a wide range of physiological functions within the nervous system. All the subunits share a common membrane topology that is dominated by a large extracellular N-terminus, a membrane region comprising three transmembrane segments, a re-entrant pore loop, an extracellular loop between the transmembrane segments that are structurally not well known, and an intracellular C-terminus, which are different in size depending on the subunit and provide multiple sites of interaction with many intracellular proteins. Figure 1 shows a basic structure of GluN1/GluN2 subunits that forms the binding site for memantine, Mg2+ and ketamine.
Mg2+ blocks the NMDA receptor channel in a voltage-dependent manner. The channels are also highly permeable to Ca2+. Activation of the receptor depends on glutamate binding, D-serine or glycine binding at its GluN1-linked binding site and AMPA receptor-mediated depolarization of the postsynaptic membrane, which relieves the voltage-dependent channel block by Mg2+. Activation and opening of the receptor channel thus allows the flow of K+, Na+ and Ca2+ ions, and the influx of Ca2+ triggers intracellular signaling pathways. Allosteric receptor binding sites for zinc, proteins and the polyamines spermidine and spermine are also modulators for the NMDA receptor channels.
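The voltage dependence of the Mg2+ block described above is often captured with the Jahr and Stevens (1990) phenomenological fit from the computational-neuroscience literature; the constants below come from that fit, not from this article.

```python
import math

def mg_unblock(v_mv, mg_mm=1.0):
    """Fraction of NMDA-receptor conductance left unblocked by Mg2+ at
    membrane potential v_mv (mV), using the Jahr & Stevens (1990) fit.
    The constants 3.57 mM and 0.062 /mV come from that fit, not from
    this article."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

for v in (-80, -40, 0, 40):
    print(f"{v:+4d} mV: {mg_unblock(v):.2f} unblocked")
```

At resting potentials near −70 mV the block is nearly complete, and depolarization relieves it, which is the quantitative face of the "coincidence detector" behaviour: current flows only when glutamate is bound and the membrane is depolarized.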
The GluN2B subunit has been involved in modulating activity such as learning, memory, processing and feeding behaviors, as well as being implicated in number of human derangements. The basic structure and functions associated with the NMDA receptor can be attributed to the GluN2B subunit. For example, the glutamate binding site and the control of the Mg2+ block are formed by the GluN2B subunit. The high affinity sites for glycine antagonist are also exclusively displayed by the GluN1/GluN2B receptor.
GluN1/GluN2B transmembrane segments are considered to be the part of the receptor that forms the binding pockets for uncompetitive NMDA receptor antagonists, but the transmembrane segments structures are not fully known as stated above. It is claimed that three binding sites within the receptor, A644 on the GluNB subunit and A645 and N616 on the GluN1 subunit, are important for binding of memantine and related compounds as seen in figure 2.
The NMDA receptor forms a heterotetramer of two obligatory GluN1 subunits and two regionally localized GluN2 subunits (the subunits were previously denoted NR1 and NR2). A related gene family of GluN3 A and B subunits has an inhibitory effect on receptor activity. Multiple receptor isoforms with distinct brain distributions and functional properties arise by selective splicing of the GluN1 transcripts and differential expression of the GluN2 subunits.
Each receptor subunit has modular design and each structural module, also represents a functional unit:
The extracellular domain contains two globular structures: a modulatory domain and a ligand-binding domain. GluN1 subunits bind the co-agonist glycine and GluN2 subunits bind the neurotransmitter glutamate.
The agonist-binding module links to a membrane domain, which consists of three transmembrane segments and a re-entrant loop reminiscent of the selectivity filter of potassium channels.
The membrane domain contributes residues to the channel pore and is responsible for the receptor's high-unitary conductance, high-calcium permeability, and voltage-dependent magnesium block.
Each subunit has an extensive cytoplasmic domain, which contain residues that can be directly modified by a series of protein kinases and protein phosphatases, as well as residues that interact with a large number of structural, adaptor, and scaffolding proteins.
The glycine-binding modules of the GluN1 and GluN3 subunits and the glutamate-binding module of the GluN2A subunit have been expressed as soluble proteins, and their three-dimensional structure has been solved at atomic resolution by x-ray crystallography. This has revealed a common fold with amino acid-binding bacterial proteins and with the glutamate-binding module of AMPA-receptors and kainate-receptors.
Mechanism of action
NMDA receptors are a crucial part of the development of the central nervous system. The processes of learning, memory, and neuroplasticity rely on the mechanism of NMDA receptors. NMDA receptors are glutamate-gated cation channels that allow for an increase of calcium permeability. Channel activation of NMDA receptors is a result of the binding of two co agonists, glycine and glutamate.
Overactivation of NMDA receptors, causing excessive influx of Ca2+ can lead to excitotoxicity. Excitotoxicity is implied to be involved in some neurodegenerative disorders such as Alzheimer's disease, Parkinson's disease and Huntington's disease. Blocking of NMDA receptors could therefore, in theory, be useful in treating such diseases. It is, however, important to preserve physiological NMDA receptor activity while trying to block its excessive, excitotoxic activity. This can possibly be achieved by uncompetitive antagonists, blocking the receptors ion channel when excessively open.
Uncompetitive NMDA receptor antagonists, or channel blockers, enter the channel of the NMDA receptor after it has been activated and thereby block the flow of ions. MK-801, ketamine, amantadine and memantine are examples of such antagonists, see figure 1. The off-rate of an antagonist from the receptors channel is an important factor as too slow off-rate can interfere with normal function of the receptor and too fast off-rate may give ineffective blockade of an excessively open receptor.
Memantine is an example of an uncompetitive channel blocker of the NMDA receptor, with a relatively rapid off-rate and low affinity. At physiological pH its amine group is positively charged and its receptor antagonism is voltage-dependent. It thereby mimics the physiological function of Mg2+ as channel blocker. Memantine only blocks NMDA receptor associated channels during prolonged activation of the receptor, as it occurs under excitotoxic conditions, by replacing magnesium at the binding site. During normal receptor activity the channels only stay open for several milliseconds and under those circumstances memantine is unable to bind within the channels and therefore does not interfere with normal synaptic activity.
Variants
GluN1
There are eight variants of the GluN1 subunit produced by alternative splicing of GRIN1:
GluN1-1a, GluN1-1b; GluN1-1a is the most abundantly expressed form.
GluN1-2a, GluN1-2b;
GluN1-3a, GluN1-3b;
GluN1-4a, GluN1-4b;
GluN2
While a single GluN2 subunit is found in invertebrate organisms, four distinct isoforms of the GluN2 subunit are expressed in vertebrates and are referred to with the nomenclature GluN2A through GluN2D (encoded by GRIN2A, GRIN2B, GRIN2C, GRIN2D). Strong evidence shows that the genes encoding the GluN2 subunits in vertebrates have undergone at least two rounds of gene duplication. They contain the binding-site for glutamate. More importantly, each GluN2 subunit has a different intracellular C-terminal domain that can interact with different sets of signaling molecules. Unlike GluN1 subunits, GluN2 subunits are expressed differentially across various cell types and developmental timepoints and control the electrophysiological properties of the NMDA receptor. In classic circuits, GluN2B is mainly present in immature neurons and in extrasynaptic locations such as growth cones, and contains the binding-site for the selective inhibitor ifenprodil. However, in pyramidal cell synapses in the newly evolved primate dorsolateral prefrontal cortex, GluN2B are exclusively within the postsynaptic density, and mediate higher cognitive operations such as working memory. This is consistent with the expansion in GluN2B actions and expression across the cortical hierarchy in monkeys and humans and across primate cortex evolution.
GluN2B to GluN2A switch
While GluN2B is predominant in the early postnatal brain, the number of GluN2A subunits increases during early development; eventually, GluN2A subunits become more numerous than GluN2B. This is called the GluN2B-GluN2A developmental switch, and is notable because of the different kinetics each GluN2 subunit contributes to receptor function. For instance, greater ratios of the GluN2B subunit lead to NMDA receptors which remain open longer compared to those with more GluN2A. This may in part account for greater memory abilities in the immediate postnatal period compared to late in life, which is the principle behind genetically altered 'Doogie' mice.
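The functional consequence of the slower GluN2B kinetics can be sketched with a toy calculation: modeling the synaptic NMDAR current decay as a single exponential, a longer deactivation time constant yields proportionally more charge (and thus Ca2+) transfer per synaptic event. The time constants below are rough, illustrative orders of magnitude and are assumptions, not values taken from this article.

```python
def charge_transfer_pc(i_peak_pa, tau_ms):
    """Total charge (pC) carried by a current decaying as i(t) = i_peak * exp(-t/tau).

    The integral of i_peak * exp(-t/tau) from 0 to infinity is i_peak * tau
    (pA * ms = fC, so divide by 1000 to get pC).
    """
    return i_peak_pa * tau_ms / 1000.0

# Illustrative deactivation time constants (rough orders of magnitude from
# the literature, NOT values taken from this article):
TAU_GLUN2A = 50.0   # ms, faster-deactivating GluN2A-containing receptors
TAU_GLUN2B = 300.0  # ms, slower-deactivating GluN2B-containing receptors

q_2a = charge_transfer_pc(10.0, TAU_GLUN2A)  # same 10 pA peak in both cases
q_2b = charge_transfer_pc(10.0, TAU_GLUN2B)
print(f"GluN2A-like: {q_2a:.1f} pC, GluN2B-like: {q_2b:.1f} pC "
      f"({q_2b / q_2a:.0f}x more charge per event)")
```

With identical peak currents, the slower-deactivating receptor passes several times more total charge, which is one way the higher GluN2B ratio of the immature brain could translate into stronger plasticity signals.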
The detailed time course of this switch in the human cerebellum has been estimated using expression microarrays and RNA-seq and is shown in the figure on the right.
There are three hypothetical models to describe this switch mechanism:
Increase in synaptic GluN2A along with decrease in GluN2B
Extrasynaptic displacement of GluN2B away from the synapse with increase in GluN2A
Increase in GluN2A, diluting the relative number of GluN2B without a decrease in the latter.
The GluN2B and GluN2A subunits also have differential roles in mediating excitotoxic neuronal death. The developmental switch in subunit composition is thought to explain the developmental changes in NMDA neurotoxicity. Homozygous disruption of the gene for GluN2B in mice causes perinatal lethality, whereas disruption of the GluN2A gene produces viable mice, although with impaired hippocampal plasticity. One study suggests that reelin may play a role in the NMDA receptor maturation by increasing the GluN2B subunit mobility.
GluN2B to GluN2C switch
Granule cell precursors (GCPs) of the cerebellum, after undergoing symmetric cell division in the external granule-cell layer (EGL), migrate into the internal granule-cell layer (IGL) where they down-regulate GluN2B and activate GluN2C, a process that is independent of neuregulin beta signaling through ErbB2 and ErbB4 receptors.
Role in excitotoxicity
NMDA receptors have been implicated by a number of studies to be strongly involved with excitotoxicity. Because NMDA receptors play an important role in the health and function of neurons, there has been much discussion on how these receptors can affect both cell survival and cell death. Recent evidence supports the hypothesis that overstimulation of extrasynaptic NMDA receptors has more to do with excitotoxicity than stimulation of their synaptic counterparts. In addition, while stimulation of extrasynaptic NMDA receptors appears to contribute to cell death, there is evidence to suggest that stimulation of synaptic NMDA receptors contributes to the health and longevity of the cell. There is ample evidence to support the dual nature of NMDA receptors based on location, and the hypothesis explaining the two differing mechanisms is known as the "localization hypothesis".
Differing cascade pathways
In order to support the localization hypothesis, it would be necessary to show that differing cellular signaling pathways are activated by NMDA receptors based on their location within the cell membrane. Experiments have been designed to stimulate either synaptic or non-synaptic NMDA receptors exclusively. These experiments have shown that different pathways are activated or regulated depending on the location of the signal origin. Many of these pathways use the same protein signals but are regulated oppositely by NMDARs depending on their location. For example, synaptic NMDA excitation caused a decrease in the intracellular concentration of p38 mitogen-activated protein kinase (p38MAPK). Extrasynaptic NMDAR stimulation regulated p38MAPK in the opposite fashion, causing an increase in intracellular concentration. Experiments of this type have since been repeated, with the results indicating these differences stretch across many pathways linked to cell survival and excitotoxicity.
Two specific proteins have been identified as major components of the pathway responsible for these different cellular responses: ERK1/2 and Jacob. ERK1/2 is responsible for phosphorylation of Jacob when excited by synaptic NMDARs. This information is then transported to the nucleus. Phosphorylation of Jacob does not take place with extrasynaptic NMDA stimulation. This allows the transcription factors in the nucleus to respond differently based on the phosphorylation state of Jacob.
Neural plasticity
NMDA receptors (NMDARs) critically influence the induction of synaptic plasticity. NMDARs trigger both long-term potentiation (LTP) and long-term depression (LTD) via fast synaptic transmission. Experimental data suggest that extrasynaptic NMDA receptors inhibit LTP while producing LTD. Inhibition of LTP can be prevented with the introduction of an NMDA antagonist. A theta burst stimulation that usually induces LTP with synaptic NMDARs, when applied selectively to extrasynaptic NMDARs, produces LTD instead. Experimentation also indicates that extrasynaptic activity is not required for the formation of LTP. In addition, both synaptic and extrasynaptic activity are involved in expressing a full LTD.
Role of differing subunits
Another factor that seems to affect NMDAR-induced toxicity is the observed variation in subunit makeup. NMDA receptors are heterotetramers with two GluN1 subunits and two variable subunits. Two of these variable subunits, GluN2A and GluN2B, have been shown to preferentially lead to cell survival and cell death cascades, respectively. Although both subunits are found in synaptic and extrasynaptic NMDARs, there is some evidence to suggest that the GluN2B subunit occurs more frequently in extrasynaptic receptors. This observation could help explain the dualistic role that NMDA receptors play in excitotoxicity. t-NMDA receptors have been implicated in excitotoxicity-mediated death of neurons in temporal lobe epilepsy.
Despite the compelling evidence and the relative simplicity of these two theories working in tandem, there is still disagreement about the significance of these claims. Some problems in proving these theories arise with the difficulty of using pharmacological means to determine the subtypes of specific NMDARs. In addition, the theory of subunit variation does not explain how this effect might predominate, as it is widely held that the most common tetramer, made from two GluN1 subunits and one each of GluN2A and GluN2B, makes up a high percentage of the NMDARs. The subunit composition of t-NMDA receptors has recently been visualized in brain tissue.
Excitotoxicity in a clinical setting
Excitotoxicity has been thought to play a role in the degenerative properties of neurodegenerative conditions since the late 1950s. NMDA receptors seem to play an important role in many of these degenerative diseases affecting the brain. Most notably, excitotoxic events involving NMDA receptors have been linked to Alzheimer's disease and Huntington's disease, as well as with other medical conditions such as strokes and epilepsy. Treating these conditions with one of the many known NMDA receptor antagonists, however, leads to a variety of unwanted side effects, some of which can be severe. These side effects are, in part, observed because the NMDA receptors do not just signal for cell death but also play an important role in its vitality. Treatment for these conditions might be found in blocking NMDA receptors not found at the synapse. One class of excitotoxicity-related disease involves gain-of-function mutations in GRIN2B and GRIN1, which are associated with cortical malformations such as polymicrogyria. D-serine, an antagonist/inverse co-agonist of t-NMDA receptors that is made in the brain, has been shown to mitigate neuron loss in an animal model of temporal lobe epilepsy.
Ligands
Agonists
Activation of NMDA receptors requires binding of glutamate or aspartate (aspartate does not stimulate the receptors as strongly). In addition, NMDARs also require the binding of the co-agonist glycine for the efficient opening of the ion channel, which is a part of this receptor.
D-Serine has also been found to co-agonize the NMDA receptor with even greater potency than glycine. It is produced by serine racemase, and is enriched in the same areas as NMDA receptors. Removal of D-serine can block NMDA-mediated excitatory neurotransmission in many areas. Recently, it has been shown that D-serine can be released both by neurons and astrocytes to regulate NMDA receptors. Note that D-serine has also been shown to work as an antagonist / inverse co-agonist for t-NMDA receptors.
NMDA receptor (NMDAR)-mediated currents are directly related to membrane depolarization: Mg2+ unbinds from the channel rapidly upon depolarization, so channel open probability increases as the membrane depolarizes. This property is fundamental to the role of the NMDA receptor in memory and learning, and it has been suggested that this channel is a biochemical substrate of Hebbian learning, where it can act as a coincidence detector for membrane depolarization and synaptic transmission.
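The voltage dependence of the Mg2+ block can be sketched numerically with the widely used empirical fit in the style of Jahr and Stevens (1990); the functional form and parameter values below follow that common parameterization and are assumptions, not taken from this article.

```python
import math

def mg_unblocked_fraction(v_mv, mg_mm=1.0):
    """Fraction of NMDAR channels free of Mg2+ block at membrane potential v_mv.

    Empirical fit in the style of Jahr & Stevens (1990):
      B(V) = 1 / (1 + [Mg]o/3.57 * exp(-0.062 * V))
    with [Mg]o in mM and V in mV (parameter values are illustrative assumptions).
    """
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

for v in (-80, -60, -40, -20, 0, 20):
    print(f"V = {v:+4d} mV  unblocked fraction = {mg_unblocked_fraction(v):.3f}")
```

At strongly negative potentials nearly all channels are blocked, while depolarization relieves the block steeply, which is the quantitative basis of the coincidence-detector behavior described above.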
Examples
Some known NMDA receptor agonists include:
Amino acids and amino acid derivatives
Aspartic acid (aspartate) (D-aspartic acid, L-aspartic acid) – endogenous glutamate site agonist. N-Methyl-D-aspartate (NMDA) is itself a derivative of D-aspartate.
Glutamic acid (glutamate) – endogenous glutamate site agonist
Tetrazolylglycine – synthetic glutamate site agonist
Homocysteic acid – endogenous glutamate site agonist
Ibotenic acid – naturally occurring glutamate site agonist found in Amanita muscaria
Quinolinic acid (quinolinate) – endogenous glutamate site agonist
Glycine – endogenous glycine site agonist
Alanine (D-alanine, L-alanine) – endogenous glycine site agonist
Milacemide – synthetic glycine site agonist; prodrug of glycine
Sarcosine (monomethylglycine) – endogenous glycine site agonist
Serine (D-serine, L-serine) – endogenous glycine site agonist
Positive allosteric modulators
Cerebrosterol – endogenous weak positive allosteric modulator
Cholesterol – endogenous weak positive allosteric modulator
Dehydroepiandrosterone (DHEA) – endogenous weak positive allosteric modulator
Dehydroepiandrosterone sulfate (DHEA-S) – endogenous weak positive allosteric modulator
Nebostinel (neboglamine) – synthetic positive allosteric modulator of the glycine site
Pregnenolone sulfate – endogenous weak positive allosteric modulator
Polyamines
Spermidine – endogenous polyamine site agonist
Spermine – endogenous polyamine site agonist
Neramexane
An example of a memantine derivative is neramexane, which was discovered by studying a number of aminoalkyl cyclohexanes, with memantine as the template, as NMDA receptor antagonists. Neramexane binds to the same site as memantine within the NMDA receptor-associated channel and with comparable affinity. It also shows very similar bioavailability and blocking kinetics in vivo to memantine. Neramexane went to clinical trials for four indications, including Alzheimer's disease.
Partial agonists
N-Methyl-D-aspartic acid (NMDA), which the NMDA receptor was named after, is a partial agonist of the active or glutamate recognition site.
3,5-Dibromo-L-phenylalanine, a naturally occurring halogenated derivative of L-phenylalanine, is a weak partial NMDA receptor agonist acting on the glycine site. 3,5-Dibromo-L-phenylalanine has been proposed as a novel therapeutic drug candidate for treatment of neuropsychiatric disorders and diseases such as schizophrenia, and neurological disorders such as ischemic stroke and epileptic seizures.
Other partial agonists of the NMDA receptor acting on novel sites, such as rapastinel (GLYX-13) and apimostinel (NRX-1074), are now being investigated for the development of new drugs with antidepressant and analgesic effects without obvious psychotomimetic activities.
Examples
Aminocyclopropanecarboxylic acid (ACC) – synthetic glycine site partial agonist
Cycloserine (D-cycloserine) – naturally occurring glycine site partial agonist found in Streptomyces orchidaceus
HA-966 and L-687,414 – synthetic glycine site weak partial agonists
Homoquinolinic acid – synthetic glutamate site partial agonist
N-Methyl-D-aspartic acid (NMDA) – synthetic glutamate site partial agonist
Positive allosteric modulators include:
Zelquistinel (GATE-251) – synthetic novel site partial agonist
Apimostinel (GATE-202) – synthetic novel site partial agonist
Rapastinel (GLYX-13) – synthetic novel site partial agonist
Antagonists
Antagonists of the NMDA receptor are used as anesthetics for animals and sometimes humans, and are often used as recreational drugs due to their hallucinogenic properties, in addition to their unique effects at elevated dosages such as dissociation. When certain NMDA receptor antagonists are given to rodents in large doses, they can cause a form of brain damage called Olney's lesions. NMDA receptor antagonists that have been shown to induce Olney's lesions include ketamine, phencyclidine, and dextrorphan (a metabolite of dextromethorphan), as well as some NMDA receptor antagonists used only in research environments. So far, the published research is inconclusive as to whether Olney's lesions occur in human or monkey brain tissue exposed to NMDA receptor antagonists.
Most NMDAR antagonists are uncompetitive or noncompetitive blockers of the channel pore or are antagonists of the glycine co-regulatory site rather than antagonists of the active/glutamate site.
Examples
Common agents in which NMDA receptor antagonism is the primary or a major mechanism of action:
4-Chlorokynurenine (AV-101) – glycine site antagonist; prodrug of 7-chlorokynurenic acid
7-Chlorokynurenic acid – glycine site antagonist
Agmatine – endogenous polyamine site antagonist
Argiotoxin-636 – naturally occurring dizocilpine or related site antagonist found in Argiope venom
AP5 – glutamate site antagonist
AP7 – glutamate site antagonist
CGP-37849 – glutamate site antagonist
D-Serine – t-NMDA receptor antagonist / inverse co-agonist
Delucemine (NPS-1506) – dizocilpine or related site antagonist; derived from argiotoxin-636
Dextromethorphan (DXM) – dizocilpine site antagonist; prodrug of dextrorphan
Dextrorphan (DXO) – dizocilpine site antagonist
Dexanabinol – dizocilpine-related site antagonist
Diethyl ether – unknown site antagonist
Diphenidine – dizocilpine site antagonist
Dizocilpine (MK-801) – dizocilpine site antagonist
Eliprodil – ifenprodil site antagonist
Esketamine – dizocilpine site antagonist
Hodgkinsine – undefined site antagonist
Ifenprodil – ifenprodil site antagonist
Kaitocephalin – naturally occurring glutamate site antagonist found in Eupenicillium shearii
Ketamine – dizocilpine site antagonist
Kynurenic acid – endogenous glycine site antagonist
Lanicemine – low-trapping dizocilpine site antagonist
LY-235959 – glutamate site antagonist
Memantine – low-trapping dizocilpine site antagonist
Methoxetamine – dizocilpine site antagonist
Midafotel – glutamate site antagonist
Nitrous oxide (N2O) – undefined site antagonist
PEAQX – glutamate site antagonist
Perzinfotel – glutamate site antagonist
Phencyclidine (PCP) – dizocilpine site antagonist
Phenylalanine – a naturally occurring amino acid; glycine site antagonist
Psychotridine – undefined site antagonist
Selfotel – glutamate site antagonist
Tiletamine – dizocilpine site antagonist
Traxoprodil – ifenprodil site antagonist
Xenon – unknown site antagonist
Some common agents in which weak NMDA receptor antagonism is a secondary or additional action include:
Amantadine – an antiviral and antiparkinsonian drug; low-trapping dizocilpine site antagonist
Atomoxetine – a norepinephrine reuptake inhibitor used to treat ADHD
Dextropropoxyphene – an opioid analgesic
Ethanol (alcohol) – a euphoriant, sedative, and anxiolytic used recreationally; unknown site antagonist
Guaifenesin – an expectorant
Huperzine A – a naturally occurring acetylcholinesterase inhibitor and potential antidementia agent
Ibogaine – a naturally occurring hallucinogen and antiaddictive agent
Ketobemidone – an opioid analgesic
Methadone – an opioid analgesic
Minocycline – an antibiotic
Tramadol – an atypical opioid analgesic and serotonin releasing agent
Nitromemantine
The NMDA receptor is regulated via nitrosylation, and aminoadamantane can be used as a target-directed shuttle to bring nitric oxide (NO) close to the site within the NMDA receptor where it can nitrosylate and regulate the ion channel conductivity. An NO donor that can be used to decrease NMDA receptor activity is the alkyl nitrate nitroglycerin. Unlike many other NO donors, alkyl nitrates do not have potential NO-associated neurotoxic effects. Alkyl nitrates donate NO in the form of a nitro group as seen in figure 7, -NO2-, which is a safe donor that avoids neurotoxicity. The nitro group must be targeted to the NMDA receptor, otherwise other effects of NO such as dilatation of blood vessels and consequent hypotension could result.
Nitromemantine is a second-generation derivative of memantine that reduces excitotoxicity mediated by overactivation of the glutamatergic system by blocking the NMDA receptor without sacrificing safety. Provisional studies in animal models show that nitromemantines are more effective than memantine as neuroprotectants, both in vitro and in vivo. Memantine and newer derivatives could become very important weapons in the fight against neuronal damage.
Negative allosteric modulators include:
25-Hydroxycholesterol – endogenous weak negative allosteric modulator
Conantokins – naturally occurring negative allosteric modulators of the polyamine site found in Conus geographus
Modulators
Examples
The NMDA receptor is modulated by a number of endogenous and exogenous compounds:
Aminoglycosides have been shown to have a similar effect to polyamines, and this may explain their neurotoxic effect.
CDK5 regulates the amount of NR2B-containing NMDA receptors on the synaptic membrane, thus affecting synaptic plasticity.
Polyamines do not directly activate NMDA receptors, but instead act to potentiate or inhibit glutamate-mediated responses.
Reelin modulates NMDA function through Src family kinases and DAB1, significantly enhancing LTP in the hippocampus.
Src kinase enhances NMDA receptor currents.
Na+, K+ and Ca2+ not only pass through the NMDA receptor channel but also modulate the activity of NMDA receptors.
Zn2+ and Cu2+ generally block NMDA current activity in a noncompetitive and voltage-independent manner. However, zinc may potentiate or inhibit the current depending on the neural activity.
Pb2+ is a potent NMDAR antagonist. Presynaptic deficits resulting from Pb2+ exposure during synaptogenesis are mediated by disruption of NMDAR-dependent BDNF signaling.
Proteins of the major histocompatibility complex class I are endogenous negative regulators of NMDAR-mediated currents in the adult hippocampus, and are required for appropriate NMDAR-induced changes in AMPAR trafficking and NMDAR-dependent synaptic plasticity and learning and memory.
The activity of NMDA receptors is also strikingly sensitive to the changes in pH, and partially inhibited by the ambient concentration of H+ under physiological conditions. The level of inhibition by H+ is greatly reduced in receptors containing the NR1a subtype, which contains the positively charged insert Exon 5. The effect of this insert may be mimicked by positively charged polyamines and aminoglycosides, explaining their mode of action.
NMDA receptor function is also strongly regulated by chemical reduction and oxidation, via the so-called "redox modulatory site." Through this site, reductants dramatically enhance NMDA channel activity, whereas oxidants either reverse the effects of reductants or depress native responses. It is generally believed that NMDA receptors are modulated by endogenous redox agents such as glutathione, lipoic acid, and the essential nutrient pyrroloquinoline quinone.
Development of NMDA receptor antagonists
The main problem with the development of NMDA antagonists for neuroprotection is that physiological NMDA receptor activity is essential for normal neuronal function. Complete blockade of all NMDA receptor activity results in side effects such as hallucinations, agitation and anesthesia. To be clinically relevant, an NMDA receptor antagonist must limit its action to blockade of excessive activation, without limiting normal function of the receptor.
Competitive NMDA receptor antagonists
Competitive NMDA receptor antagonists, which were developed first, are not a good option because they compete and bind to the same site (NR2 subunit) on the receptor as the agonist, glutamate, and therefore block normal function also. They will block healthy areas of the brain prior to having an impact on pathological areas, because healthy areas contain lower levels of agonist than pathological areas. These antagonists can be displaced from the receptor by high concentration of glutamate which can exist under excitotoxic circumstances.
Uncompetitive NMDA receptor antagonists
Uncompetitive NMDA receptor antagonists block within the ion channel at the Mg2+ site (pore region) and prevent excessive influx of Ca2+. Uncompetitive antagonism refers to a type of block that an increased concentration of glutamate cannot overcome, and that is dependent upon prior activation of the receptor by the agonist, i.e. the blocker only enters the channel when it is opened by agonist.
Memantine and related compounds
Because of these adverse side effects of high-affinity blockers, the search for clinically successful NMDA receptor antagonists for neurodegenerative diseases continued and focused on developing low-affinity blockers. However, the affinity could not be too low, nor the dwell time too short (as with Mg2+), or membrane depolarization would relieve the block. The aim was thus an uncompetitive antagonist with a longer dwell time in the channel than Mg2+ but a shorter one than MK-801; such a drug would block only excessively open NMDA receptor-associated channels without affecting normal neurotransmission. Memantine is that drug. It is a derivative of amantadine, which was first an anti-influenza agent but was later discovered by coincidence to have efficacy in Parkinson's disease. Chemical structures of memantine and amantadine can be seen in figure 5. The compound was first thought to be dopaminergic or anticholinergic but was later found to be an NMDA receptor antagonist.
Memantine was the first drug approved for treatment of severe and more advanced Alzheimer's disease, for which, for example, anticholinesterase drugs provide little benefit. It helps recovery of synaptic function and in that way improves impaired memory and learning. As of 2015, memantine was also in trials for therapeutic importance in additional neurological disorders.
Many second-generation memantine derivatives have been in development that may show even better neuroprotective effects, where the main thought is to use other safe but effective modulatory sites on the NMDA receptor in addition to its associated ion channel.
Structure activity relationship (SAR)
Memantine (1-amino-3,5-dimethyladamantane) is an aminoalkyl cyclohexane derivative and an atypical drug compound with a non-planar, three-dimensional tricyclic structure. Figure 8 shows SAR for aminoalkyl cyclohexane derivatives. Memantine has several features of its structure that are important for its effectiveness:
Three-ring structure with a bridgehead amine, -NH2
The -NH2 group is protonated under physiological pH of the body to carry a positive charge, -NH3+
Two methyl (CH3) side groups which serve to prolong the dwell time and increase stability as well as affinity for the NMDA receptor channel compared with amantadine (1-adamantanamine).
Despite the small structural difference between memantine and amantadine, two adamantane derivatives, the affinity for the binding site of the NR1/NR2B subunit is much greater for memantine. In patch-clamp measurements memantine has an IC50 of (2.3 ± 0.3) μM while amantadine has an IC50 of (71.0 ± 11.1) μM.
The binding site with the highest affinity is called the dominant binding site. It involves a connection between the amine group of memantine and the NR1-N161 binding pocket of the NR1/NR2B subunit. The methyl side groups play an important role in increasing the affinity to the open NMDA receptor channels and making it a much better neuroprotective drug than amantadine. The binding pockets for the methyl groups are considered to be at the NR1-A645 and NR2B-A644 of the NR1/NR2B. The binding pockets are shown in figure 2.
Memantine binds at or near to the Mg2+ site inside the NMDA receptor-associated channel. The -NH2 group on memantine, which is protonated under physiological pH of the body, represents the region that binds at or near to the Mg2+ site. Adding two methyl groups to the -N on the memantine structure has been shown to decrease affinity, giving an IC50 value of (28.4 ± 1.4) μM.
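The IC50 values quoted in this SAR discussion can be translated into predicted fractional block at a given drug concentration, assuming a simple one-site (Hill coefficient 1) concentration-inhibition relationship; the Hill coefficient and the illustrative test concentration are assumptions, since the text only quotes IC50 values.

```python
def fractional_block(conc_um, ic50_um, hill=1.0):
    """Fractional inhibition from a Hill-type concentration-inhibition curve.

    block = C^n / (C^n + IC50^n); hill=1 assumes simple one-site block
    (an assumption, not stated in the text).
    """
    return conc_um ** hill / (conc_um ** hill + ic50_um ** hill)

# Patch-clamp IC50 values quoted in the text (central values, in uM):
ic50_um = {
    "memantine": 2.3,
    "N,N-dimethyl analog": 28.4,
    "amantadine": 71.0,
}

conc = 10.0  # uM, an arbitrary illustrative concentration
for drug, val in ic50_um.items():
    print(f"{drug:>20}: {100 * fractional_block(conc, val):5.1f}% block at {conc} uM")
```

At the same concentration, memantine's roughly 30-fold lower IC50 than amantadine corresponds to a several-fold larger fraction of blocked channels, consistent with the affinity difference described above.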
Second generation derivative of memantine; nitromemantine
Several derivatives of nitromemantine, a second-generation derivative of memantine, have been synthesized in order to perform a detailed structure-activity relationship (SAR) study of these novel drugs. One class, containing a nitro (NO2) group opposite to the bridgehead amine (NH2), showed a promising outcome. Nitromemantine utilizes the memantine binding site on the NMDA receptor to target the NOx (x = 1 or 2) group for interaction with the S-nitrosylation/redox site external to the memantine binding site. Lengthening the side chains of memantine compensates for the reduced affinity of the drug in the channel associated with the addition of the –ONO2 group.
Therapeutic application
Excitotoxicity is thought to be involved in some neurodegenerative disorders such as Alzheimer's disease, Parkinson's disease, Huntington's disease and amyotrophic lateral sclerosis. Blocking of NMDA receptors could therefore, in theory, be useful in treating such diseases. It is, however, important to preserve physiological NMDA receptor activity while trying to block its excessive, excitotoxic activity. This can possibly be achieved by uncompetitive antagonists, which block the receptor's ion channel when it is excessively open.
Memantine is an example of an uncompetitive NMDA receptor antagonist with an approved indication for the neurodegenerative disease Alzheimer's disease. As of 2015, memantine was still in clinical trials for additional neurological diseases.
Receptor modulation
The NMDA receptor is a non-specific cation channel that can allow the passage of Ca2+ and Na+ into the cell and K+ out of the cell. The excitatory postsynaptic potential (EPSP) produced by activation of an NMDA receptor increases the concentration of Ca2+ in the cell. The Ca2+ can in turn function as a second messenger in various signaling pathways. However, the NMDA receptor cation channel is blocked by Mg2+ at resting membrane potential. Magnesium unblock is not instantaneous; to unblock all available channels, the postsynaptic cell must be depolarized for a sufficiently long period of time (on the scale of milliseconds).
Therefore, the NMDA receptor functions as a "molecular coincidence detector". Its ion channel opens only when the following two conditions are met: glutamate is bound to the receptor, and the postsynaptic cell is depolarized (which removes the Mg2+ blocking the channel). This property of the NMDA receptor explains many aspects of long-term potentiation (LTP) and synaptic plasticity.
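This two-condition logic can be illustrated with a toy current model in which appreciable current flows only when glutamate is bound and the membrane is depolarized; the conductance, reversal potential, and Mg2+-unblock parameters below are illustrative assumptions, not values from this article.

```python
import math

def nmda_current_pa(glutamate_bound, v_mv, g_ns=0.05, e_rev_mv=0.0, mg_mm=1.0):
    """Toy NMDAR current, appreciable only when glutamate is bound AND the
    membrane is depolarized enough to relieve the Mg2+ block.

    I = g * P_glu * B(V) * (V - E_rev); B(V) is a Jahr & Stevens-style
    empirical Mg2+ unblock term. All parameter values are illustrative.
    """
    p_glu = 1.0 if glutamate_bound else 0.0
    b = 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))  # Mg2+ unblock
    return g_ns * p_glu * b * (v_mv - e_rev_mv)  # nS * mV = pA

for label, glu, v in [
    ("no glutamate, resting",     False, -70.0),
    ("glutamate, resting",        True,  -70.0),
    ("no glutamate, depolarized", False, -20.0),
    ("glutamate, depolarized",    True,  -20.0),
]:
    print(f"{label:27s} I = {nmda_current_pa(glu, v):6.2f} pA")
```

Only the final case, with both conditions met, yields a substantial inward current; with glutamate bound at resting potential the Mg2+ block keeps the current small, and with no glutamate the current is zero regardless of voltage.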
At resting membrane potential, external magnesium ions enter the open NMDA receptor pore and bind within it, preventing further ion permeation. External magnesium is in the millimolar range while intracellular magnesium is in the micromolar range, so at negative membrane potentials the channel is blocked from the outside. NMDA receptors are modulated by a number of endogenous and exogenous compounds and play a key role in a wide range of physiological (e.g., memory) and pathological processes (e.g., excitotoxicity). Magnesium blocks the NMDA channel at negative membrane potentials but potentiates NMDA-induced responses at positive membrane potentials. Calcium, potassium, and sodium ions not only pass through the channel but also modulate the activity of NMDARs. Changes in H+ concentration can partially inhibit the activity of NMDA receptors under different physiological conditions.
Clinical significance
NMDAR antagonists like ketamine, esketamine, tiletamine, phencyclidine, nitrous oxide, and xenon are used as general anesthetics. These and similar drugs like dextromethorphan and methoxetamine also produce dissociative, hallucinogenic, and euphoriant effects and are used as recreational drugs.
NMDAR-targeted compounds, including ketamine, esketamine (JNJ-54135419), rapastinel (GLYX-13), apimostinel (NRX-1074), zelquistinel (AGN-241751), 4-chlorokynurenine (AV-101), and rislenemdaz (CERC-301, MK-0657), are under development for the treatment of mood disorders, including major depressive disorder and treatment-resistant depression. In addition, ketamine is already employed for this purpose as an off-label therapy in some clinics.
Research suggests that tianeptine produces antidepressant effects through indirect alteration and inhibition of glutamate receptor activity and release of BDNF, in turn affecting neural plasticity. Tianeptine also acts on the NMDA and AMPA receptors. In animal models, tianeptine inhibits the pathological stress-induced changes in glutamatergic neurotransmission in the amygdala and hippocampus.
Memantine, a low-trapping NMDAR antagonist, is approved in the United States and Europe for the treatment of moderate-to-severe Alzheimer's disease, and has now received a limited recommendation by the UK's National Institute for Health and Care Excellence for patients who fail other treatment options.
Cochlear NMDARs are the target of intense research to find pharmacological solutions to treat tinnitus. NMDARs are associated with a rare autoimmune disease, anti-NMDA receptor encephalitis (also known as NMDAR encephalitis), that usually occurs due to cross-reactivity of antibodies produced by the immune system against ectopic brain tissues, such as those found in teratoma. These are known as anti-glutamate receptor antibodies.
Compared to dopaminergic stimulants like methamphetamine, the NMDAR antagonist phencyclidine can produce a wider range of symptoms that resemble schizophrenia in healthy volunteers, which has led to the glutamate hypothesis of schizophrenia. Experiments in which rodents are treated with NMDA receptor antagonists are today the most common model for testing novel schizophrenia therapies or exploring the exact mechanism of drugs already approved for treatment of schizophrenia.
NMDAR antagonists, for instance eliprodil, gavestinel, licostinel, and selfotel, have been extensively investigated for the treatment of excitotoxicity-mediated neurotoxicity in situations like ischemic stroke and traumatic brain injury, but were unsuccessful in clinical trials when used in small doses to avoid sedation. NMDAR antagonists can, however, block spreading depolarizations in animals and in patients with brain injury; this use has not yet been tested in clinical trials.
See also
Calcium/calmodulin-dependent protein kinases
References
External links
NMDA receptor pharmacology
Motor Discoordination Results from Combined Gene Disruption of the NMDA Receptor NR2A and NR2C Subunits, But Not from Single Disruption of the NR2A or NR2C Subunit
Drosophila NMDA receptor 1 - The Interactive Fly
Cell signaling
Glutamate (neurotransmitter)
Ion channels
Ionotropic glutamate receptors
Molecular neuroscience
NMDA receptor antagonists | NMDA receptor | Chemistry | 11,113 |
57,853,016 | https://en.wikipedia.org/wiki/Oxiranol | Oxiranol is an organic chemical that is an alcohol derivative of oxirane: it consists of a hydroxy group as a substituent on ethylene oxide. It can exist in two enantiomeric forms. The compound has been proposed as an intermediate in the interstellar formation of glycolaldehyde (a constitutional isomer of oxiranol) and in the oxidation of acrolein in the environment.
References
Secondary alcohols
Epoxides | Oxiranol | Chemistry | 97 |
54,964,079 | https://en.wikipedia.org/wiki/Leucine-rich%20repeat%20receptor%20like%20protein%20kinase | Leucine-rich repeat receptor-like protein kinases (PEPR1 and PEPR2 in Arabidopsis thaliana and Xa21 in rice) are plant cell-membrane-localized leucine-rich repeat (LRR) receptor kinases that play critical roles in plant innate immunity. Plants have evolved intricate immune mechanisms to combat pathogen infection by recognizing pathogen-associated molecular patterns (PAMPs) and endogenous damage-associated molecular patterns (DAMPs). PEPR1 is considered the first known DAMP receptor of Arabidopsis.
Discovery
AtPEPR1 was first isolated from the surface of Arabidopsis suspension-cultured cells. An I-125-labeled azido-Cys-AtPep1 photoaffinity analog specifically interacted with PEPR1 when incubated with Arabidopsis cells. Separation of this labeled protein using SDS-PAGE led to the identification of the 170 kDa PEPR1 protein. Further characterization identified a gene, At1g73080, which encodes the 1,124-amino-acid PEPR1.
Function in plant innate immunity
Plasma-membrane-localized pattern recognition receptors (PRRs) that recognize pathogen-associated molecular patterns provide the first line of defense in plant innate immunity. Recent studies in Arabidopsis have provided important details on plant innate immunity. Plant membrane PRRs mainly consist of receptor-like kinases and receptor-like proteins. They sense PAMPs such as chitin from fungal cell walls, sulfated peptides, flagellin, and elongation factors. In addition to PAMPs, PRRs also recognize DAMP molecules that appear in the intercellular space in response to damage caused by pathogens, e.g. cell wall fragments and cytoplasmic proteins.
AtPep1, a 23-amino-acid peptide encoded by the C-terminal region of the PROPEP1 gene, is considered to be a DAMP molecule in Arabidopsis. A later study using alanine-scanning analysis showed that AtPep1 is derived by removal of the N-terminal portion of the precursor protein, PROPEP1. AtPeps are functionally similar to systemin, an 18-residue peptide that plays a critical role in defense signaling and is induced in response to wounding, jasmonate and ethylene. PEPR1 is a receptor kinase with an extracellular leucine-rich repeat motif and functions as a receptor for AtPeps. In addition, the Arabidopsis genome encodes a close homologue named PEPR2, but PEPR1 and PEPR2 have different preferences among the AtPeps.
AtPep1 interaction with PEPR1 activates defense genes that regulate the jasmonate/ethylene and salicylate defense-hormone pathways and induces expression of the PDF1.2 (defensin) gene, a component of the plant innate immune system. Expression of these defense genes results in further production of PROPEP1 through a feedback mechanism. This amplifies the danger signal during pathogen infection and confers resistance against pathogens.
Moreover, PEPR1 specifically interacts with the receptor-like cytoplasmic kinase Botrytis-Induced Kinase 1 (BIK1) to mediate Pep1-induced defense. The C-terminus of the PEPR1 kinase domain shows a strong interaction with BIK1 and phosphorylates BIK1 on the serine 236 and threonine 237 residues. BIK1 phosphorylation by PEPRs is thus important for amplifying ethylene-induced signaling, which is known to play an important role in the plant innate immune system; furthermore, ethylene can enhance DAMP-triggered immunity. It was later found that AtPeps also help to transmit danger signals to the cell interior by activating cell-membrane Ca2+ channels to elevate innate immune defenses. This activity depends on AtPEPR1 and cyclic nucleotide-gated channel 2 (CNGC2). Activation of CNGC2 occurs through cGMP produced by the AtPEPR1 guanylyl cyclase domain upon AtPep1 binding. The resulting elevation of cytosolic Ca2+ causes expression of the PDF1.2 gene and basal defense in plants.
Structure
The first crystal structure of the LRR domain of PEPR1 in complex with AtPep1 (residues 1–23) was solved by Jiao Tang in 2015. This helped to reveal the molecular mechanism of AtPep1 recognition by PEPR1. PEPR1 receptors are receptor kinases with extracellular LRR motifs. AtPep1 binds to the inner side of the PEPR1 LRR helical structure. PEPR1 contains 27 canonical LRRs, and AtPep1 contacts LRRs 4 to 18. Many amino acids are highly conserved among these LRRs, and AtPep1 interacts only with the 3rd, 5th, 7th and 8th positions of each LRR motif.
The C-terminal residues of AtPep1 interact more strongly with the PEPR1 LRR domain than the N-terminal residues do. However, the N-terminal segment of AtPep1 is also important in DAMP signaling, and the N- and C-termini of AtPep1 act cooperatively in signaling. Moreover, the AtPep1-interacting residues of PEPR1 are also highly conserved in PEPR2; however, PEPR2 lacks the residues that interact with the N-terminus of AtPep1. Consequently, PEPR1 has the higher affinity for AtPep1.
References
LRR proteins
Plant immunity
Membrane proteins | Leucine-rich repeat receptor like protein kinase | Biology | 1,101 |
27,239,299 | https://en.wikipedia.org/wiki/Polflucht | Polflucht (from German, flight from the poles) is a geophysical concept invoked in 1922 by Alfred Wegener to explain his ideas of continental drift.
The pole-flight force is that component of the centrifugal force during the rotation of the Earth that acts tangentially to the Earth's surface.
The daily rotation of the Earth (more precisely: one rotation per sidereal day of 23.93447 hours) around its axis causes every body on Earth to experience a centrifugal force that points perpendicularly away from the Earth's axis, i.e. obliquely to the Earth's surface, at an angle depending on the latitude. The centrifugal force contains a component tangential to the surface of the Earth, directed away from the pole; this component is called the Polfluchtkraft, or pole-flight force.
Mathematics
The magnitude of the centrifugal force is:

F = m ω² r

where

m is the mass of the body (not the Earth's mass!)
r is its distance from the Earth's axis
ω is the angular velocity of the Earth's rotation in radians per second
The distance r depends on the geographical latitude φ and the mean Earth radius R = 6371 km, resulting in:

r = R cos φ
Only at the equator is the centrifugal force directed exactly opposite to the gravitational force. At all other latitudes it acts at an angle α = 90° – φ to the horizontal.
The following now applies to the pole-flight force:

F_P = F sin φ = m ω² R cos φ sin φ = ½ m ω² R sin 2φ
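A short numerical sketch (added here for illustration, using standard constants rather than figures from the article) evaluates the pole-flight force F_P = ½ m ω² R sin 2φ for a 1 kg body; the force is largest at 45° latitude, where sin 2φ = 1:

```python
import math

# Sketch: pole-flight force F_P = 0.5 * m * omega^2 * R * sin(2*phi)
# for a 1 kg test body. Constants: sidereal day ~86164 s, mean Earth
# radius 6371 km (standard values, not taken from the article).
omega = 2.0 * math.pi / 86164.0  # Earth's angular velocity, rad/s
R = 6.371e6                      # mean Earth radius, m
m = 1.0                          # test mass, kg

def pole_flight_force(lat_deg):
    """Tangential (equatorward) component of the centrifugal force, in newtons."""
    phi = math.radians(lat_deg)
    return 0.5 * m * omega**2 * R * math.sin(2.0 * phi)

# Maximum at 45 degrees latitude; zero at the equator and at the poles.
print(pole_flight_force(45.0))  # ~0.017 N
```

At its maximum the force is only about 0.017 N per kilogram, some 600,000 times weaker than gravity, which illustrates why it is far too weak to drive plate tectonics.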
Effect
If one considers only that component of the force which acts parallel to the Earth's surface, it is directed south in the northern hemisphere and north in the southern hemisphere. The constant action of this force is why the Earth is not a perfect sphere but is flattened at the poles. The somewhat elastic body of the Earth adjusts to the prevailing rotation, so that its mass distribution yields to the pole-flight force and the equatorial radius increases at the expense of the polar radius. Today the flattening at the poles is 0.3353%, or 21 km.
Isaac Newton formulated this deformation mathematically for the first time. The resulting Polfluchtkraft was postulated by the German geologist Damian Kreichgauer in 1902 and the Hungarian physicist Loránd Eötvös in 1912. Around 1920 Alfred Wegener postulated the pole flight of the continents and suspected that the centrifugal force was the cause of continental drift hypothesized by him and others. This was refuted a few years later, but the terms Polflucht and Polfluchtkraft found their way into the scientific literature.
Wegener suggested that the differential gravitational force resulting from the horizontal component of the centrifugal force could cause continental masses to drift slowly towards the equator.
Wegener's hypothesis was expanded by Paul Sophus Epstein in 1920, but the force is now known to be far too weak to cause plate tectonics. The strength of the layers of the Earth's crust is much stronger than assumed by Wegener and Epstein.
Literature
The Concise Oxford Dictionary of Earth Sciences (topic 'Polflucht'), Oxford 1990
Laszlo Egyed: Physik der festen Erde (Physics of solid Earth), 368p. Akadémiai Kiadó, Budapest 1969
Über die Polflucht der Kontinente, F.Nölke 1921
Damian Kreichgauer, Die Äquatorfrage in der Geologie, Missionsdruckerei in Steyl, Steyl (1902)
Loránd Eötvös, Verhandlungen der 17. Allgemeinen Conferenz der Internationalen Erdmessung, Volume 1, Georg Reimer, Berlin (1913)
Alfred Wegener, Die Entstehung der Kontinente und Ozeane, Second Edition, Friedrich Vieweg & Sohn, Braunschweig (1920)
Alfred Wegener, The Origin of Continents and Oceans, Translated from the Third German Edition by John George Anthony Skerl, E P Dutton and Company, New York (1924)
Geophysics
Plate tectonics
Obsolete geology theories | Polflucht | Physics | 852 |
18,894,568 | https://en.wikipedia.org/wiki/Mathematical%20programming%20with%20equilibrium%20constraints | Mathematical programming with equilibrium constraints (MPEC) is the study of
constrained optimization problems where the constraints include variational inequalities or complementarities. MPEC is related to the Stackelberg game.
MPEC is used in the study of engineering design, economic equilibrium, and multilevel games.
MPEC is difficult to deal with because its feasible region is not necessarily convex or even connected.
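To make this concrete, here is a toy sketch (illustrative only, not taken from the references below): a single complementarity constraint already makes the feasible region a non-convex union of branches, and the problem can be solved by enumerating those branches.

```python
# Toy MPEC sketch (illustrative, not from the cited literature):
#   minimize (x - 1)^2 + (y - 1)^2
#   subject to x >= 0, y >= 0, x * y = 0   (a complementarity constraint)
# The feasible set is the union of the two nonnegative axes -- connected
# but not convex, which is exactly what makes MPECs hard in general.

def objective(x, y):
    return (x - 1.0) ** 2 + (y - 1.0) ** 2

def solve_by_branching():
    # Enumerate the two complementarity branches (x = 0 or y = 0).
    # On each branch the problem reduces to a 1-D convex minimization
    # whose solution can be written down in closed form.
    candidates = [
        (0.0, 1.0),  # branch x = 0: minimizing over y >= 0 gives y = 1
        (1.0, 0.0),  # branch y = 0: minimizing over x >= 0 gives x = 1
    ]
    return min(candidates, key=lambda p: objective(*p))

x, y = solve_by_branching()
print(x, y, objective(x, y))  # two symmetric global minima, both with value 1.0
```

Branch enumeration works here because there is one complementarity pair; with n pairs the number of branches grows as 2^n, which is why practical MPEC solvers rely on relaxation or penalty reformulations instead.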
References
Z.-Q. Luo, J.-S. Pang and D. Ralph: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, 1996, .
B. Baumrucker, J. Renfro, L. T. Biegler, MPEC problem formulations and solution strategies with chemical engineering applications, Computers & Chemical Engineering, 32 (12) (2008) 2903-2913.
A. U. Raghunathan, M. S. Diaz, L. T. Biegler, An MPEC formulation for dynamic optimization of distillation operations, Computers & Chemical Engineering, 28 (10) (2004) 2037-2052.
External links
MPEC examples such as SIGN, ABS, MIN, and MAX
Formulating logical statements as continuously differentiable nonlinear programming problems
Mathematical optimization | Mathematical programming with equilibrium constraints | Mathematics | 249 |
48,574,442 | https://en.wikipedia.org/wiki/Redmi%20Note%202 | The Xiaomi Redmi Note 2 was a mid-range Android smartphone by Xiaomi. It came in two variants. It had a 13 MP rear camera and a 5 MP front camera.
The company discontinued the smartphone's sale in favor of the successors Redmi 3 and Redmi Note 3.
Specifications
Hardware
The low-end variant Redmi Note 2 was powered by a MediaTek MT6795 Helio X10 octa-core 2.0 GHz Cortex-A53 processor coupled with 2 GB of RAM and 16 GB of internal storage. The high-end variant, the Redmi Note 2 Prime, was powered by a MediaTek MT6795 Helio X10 octa-core 2.2 GHz Cortex-A53 processor coupled with 2 GB of RAM and 32 GB of internal storage.
Software
The Xiaomi Redmi Note 2 runs MIUI 7 on top of Android 5.0 Lollipop and can be upgraded to MIUI 9. It is also possible to flash a custom ROM with Android 5, 6, 7, 8 or 9, but there is no official LineageOS support.
A well-known problem of the device was fast battery drain.
References
Mobile phones introduced in 2015
Note 2
Phablets
Discontinued smartphones
Mobile phones with user-replaceable battery | Redmi Note 2 | Technology | 260 |
607,942 | https://en.wikipedia.org/wiki/Blown%20flap | Blown flaps, blown wing or jet flaps are powered aerodynamic high-lift devices used on the wings of certain aircraft to improve their low-speed flight characteristics. They use air blown through nozzles to shape the airflow over the rear edge of the wing, directing the flow downward to increase the lift coefficient. There are a variety of methods to achieve this airflow, most of which use jet exhaust or high-pressure air bled off of a jet engine's compressor and then redirected to follow the line of trailing-edge flaps.
Blown flaps may refer specifically to those systems that use internal ductwork within the wing to direct the airflow, or more broadly to systems like upper surface blowing or nozzle systems on conventional underwing engines that direct air through the flaps. Blown flaps are one solution among a broader category known as powered lift, which also includes various boundary layer control systems, systems using directed prop wash, and circulation control wings.
Internal blown flaps were used on some land and carrier-based fast jets in the 1960s, including the Lockheed F-104, Blackburn Buccaneer and certain versions of the Mikoyan-Gurevich MiG-21. They generally fell from favour because they imposed a significant maintenance overhead in keeping the ductwork clean and various valve systems working properly, along with the disadvantage that an engine failure reduced lift in precisely the situation where it is most desired. The concept reappeared in the form of upper and lower blowing in several transport aircraft, both turboprop and turbofan.
Mechanism
In a conventional blown flap, a small amount of the compressed air produced by the jet engine is "bled" off at the compressor stage and piped to channels running along the rear of the wing. There, it is forced through slots in the wing flaps of the aircraft when the flaps reach certain angles. Injecting high energy air into the boundary layer produces an increase in the stalling angle of attack and maximum lift coefficient by delaying boundary layer separation from the airfoil. Boundary layer control by mass injecting (blowing) prevents boundary layer separation by supplying additional energy to the particles of fluid which are being retarded in the boundary layer. Therefore, injecting a high velocity air mass into the air stream essentially tangent to the wall surface of the airfoil reverses the boundary layer friction deceleration; thus, the boundary layer separation is delayed.
The lift of a wing can be greatly increased with blowing flow control. With mechanical slots, the natural boundary layer limits the boundary layer control pressure to the freestream total head. Blowing with a small proportion of engine airflow (internal blown flap) increases the lift. Using much higher quantities of gas from the engine exhaust, which increases the effective chord of the flap (the jet flap), produces supercirculation, or forced circulation up to the theoretical potential flow maximum. Surpassing this limit requires the addition of direct thrust.
Development of the general concept continued at NASA in the 1950s and 1960s, leading to simplified systems with similar performance. The externally blown flap arranges the engine to blow across the flaps at the rear of the wing. Some of the jet exhaust is deflected downward directly by the flap, while additional air travels through the slots in the flap and follows the outer edge due to the Coandă effect. The similar upper-surface blowing system arranges the engines over the wing and relies completely on the Coandă effect to redirect the airflow. Although not as effective as direct blowing, these "powered lift" systems are nevertheless quite powerful and much simpler to build and maintain.
A more recent and promising blow-type flow control concept is the counter-flow fluid injection which is able to exert high-authority control to global flows using low energy modifications to key flow regions. In this case, the air blow slit is located at the pressure side near the leading edge stagnation point location and the control air-flow is directed tangentially to the surface but with a forward direction. During the operation of such a flow control system two different effects are present. One effect, boundary layer enhancement, is caused by the increased turbulence levels away from the wall region thus transporting higher-energy outer flow into the wall region. In addition to that another effect, the virtual shaping effect, is utilized to aerodynamically thicken the airfoil at high angles of attack. Both these effects help to delay or eliminate flow separation.
In general, blown flaps can improve the lift of a wing by two to three times. Whereas a complex triple-slotted flap system on a Boeing 747 produces a coefficient of lift of about 2.45, external blowing (upper surface blowing on a Boeing YC-14) improves this to about 7, and internal blowing (jet flap on Hunting H.126) to 9.
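The effect of these lift coefficients on landing speed can be sketched with the standard stall-speed relation V = √(2(W/S)/(ρ·C_Lmax)). The C_L values below are those quoted above; the wing loading is an illustrative assumption, not a figure from the article.

```python
import math

# Sketch: stall speed scales as 1/sqrt(C_Lmax) for a fixed wing loading.
# The C_L values are those quoted in the text; the wing loading (W/S) of
# 5000 N/m^2 is an illustrative assumption, not a figure from the article.
RHO = 1.225            # sea-level air density, kg/m^3
WING_LOADING = 5000.0  # W/S in N/m^2 (assumed)

def stall_speed(cl_max):
    """Stall speed in m/s from V = sqrt(2 * (W/S) / (rho * C_Lmax))."""
    return math.sqrt(2.0 * WING_LOADING / (RHO * cl_max))

for name, cl in [("triple-slotted flaps (747)", 2.45),
                 ("upper surface blowing (YC-14)", 7.0),
                 ("jet flap (Hunting H.126)", 9.0)]:
    print(f"{name}: C_L = {cl}, stall speed ~ {stall_speed(cl):.0f} m/s")
```

Relative to the C_L ≈ 2.45 baseline, blowing to C_L ≈ 7 cuts the stall speed by a factor of about √(2.45/7) ≈ 0.59, independent of the assumed wing loading.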
History
Williams states some flap blowing tests were done at the Royal Aircraft Establishment before the Second World War, and that extensive tests were done during the war in Germany including flight tests with Arado Ar 232, Dornier Do 24 and Messerschmitt Bf 109 aircraft. Lachmann states the Arado and Dornier aircraft used an ejector-driven single flow of air which was sucked over part of the trailing edge span and blown over the remainder. The ejector was chemically powered using high pressure vapour. The Bf 109 used engine-driven blowers for flap blowing.
Rebuffet and Poisson-Quinton describe tests in France at O.N.E.R.A. after the war with combined suction at the leading edge of the first flap section and blowing at the second flap section, using a jet engine compressor-bleed ejector to provide both suction and blowing. Flight testing was done on a Breguet Vultur aircraft.
Tests were also done at Westland Aircraft by W.H. Paine after the war with reports dated 1950 and 1951.
In the United States, a Grumman F9F Panther was modified with flap blowing based on work done by John Attinello in 1951. Engine compressor bleed was used. The system was known as "Supercirculation Boundary Layer Control" or BLC for short.
Between 1951 and 1955, Cessna did flap blowing tests on Cessna 309 and 319 aircraft using the Arado system.
During the 1950s and 60s, fighter aircraft generally evolved towards smaller wings in order to reduce drag at high speeds. Compared to the fighters of a generation earlier, they had wing loadings about four times as high; for instance the Supermarine Spitfire had a wing loading of and the Messerschmitt Bf 109 had the "very high" loading of , whereas the 1950s-era Lockheed F-104 Starfighter had .
One serious downside to these higher wing loadings is at low speed, when there is not enough wing left to provide lift to keep the plane flying. Even huge flaps could not offset this to any large degree, and as a result many aircraft landed at fairly high speeds, and were noted for accidents as a result.
The major reason flaps were not effective is that the airflow over the wing could only be "bent so much" before it stopped following the wing profile, a condition known as flow separation. There is a limit to how much air the flaps can deflect overall. There are ways to improve this, through better flap design; modern airliners use complex multi-part flaps for instance. However, large flaps tend to add considerable complexity, and take up room on the outside of the wing, which makes them unsuitable for use on a fighter.
The principle of the jet flap, a type of internally blown flap, was proposed and patented in 1952 by the British National Gas Turbine Establishment (NGTE) and thereafter investigated by the NGTE and the Royal Aircraft Establishment.
The concept was first tested at full-scale on the experimental Hunting H.126. It reduced the stall speed to only , a number most light aircraft cannot match. The jet flap used a large percentage of the engine exhaust, rather than compressor bleed air, for blowing.
One of the first production aircraft with blown flaps was the Lockheed F-104 Starfighter, which entered service in January 1958. After prolonged development problems, the BLCS proved to be enormously useful in compensating for the Starfighter's tiny wing surface. The Lockheed T2V SeaStar, with blown flaps, had entered service in May 1957 but was to have persistent maintenance problems with the BLCS which led to its early retirement. In June 1958, the Supermarine Scimitar with blown flaps entered service. Blown flaps were used on the North American Aviation A-5 Vigilante, the Vought F-8 Crusader variants E(FN) and J, the McDonnell Douglas F-4 Phantom II and the Blackburn Buccaneer. The Mikoyan-Gurevich MiG-21 and Mikoyan-Gurevich MiG-23 had blown flaps. Petrov states long-term operation of these aircraft showed high reliability of the BLC systems. The TSR-2, which was cancelled before it entered service, had full-span blown flaps.
Starting in the 1970s, the lessons of air combat over Vietnam changed thinking considerably. Instead of aircraft designed for outright speed, general maneuverability and load capacity became more important in most designs. The result is an evolution back to larger planforms to provide more lift. For instance the General Dynamics F-16 Fighting Falcon has a wing loading of , and uses leading edge extensions to provide considerably more lift at higher angles of attack, including approach and landing. Some later combat aircraft achieved the required low-speed characteristics using swing-wings. Internal flap blowing is still used to supplement externally blown flaps on the Shin Meiwa US-1A.
Some aircraft currently (2015) in service that require STOL performance use external flap blowing and, in some cases, also use internal blowing on flaps as well as on control surfaces such as the rudder to ensure adequate control and stability at low speeds. External blowing concepts are known as the "externally blown flap" (used on the Boeing C-17 Globemaster), "upper surface blowing" (used on the Antonov An-72 and Antonov An-74) and "vectored slipstream", or "over the wing blowing", used on the Antonov An-70 and the Shin Meiwa US-1A and ShinMaywa US-2.
Powered high-lift systems, such as externally blown flaps, are not used for civil transport aircraft for reasons given by Reckzeh, which include complexity, weight, cost, sufficient existing runway lengths and certification rules.
See also
Boundary layer
Boundary layer control
Coandă effect
Circulation control wing
Thrust vectoring
References
Aircraft controls
Boundary layers
Aircraft wing design | Blown flap | Chemistry | 2,165 |
75,200,703 | https://en.wikipedia.org/wiki/Emma%20P%C3%A9rez%20Ferreira | Emma Victoria Pérez Ferreira (2 April 1925 – 28 June 2005) was an Argentine physicist who contributed immensely to the advancement of science in Argentina. She was the first female president of the country's National Atomic Energy Commission (CNEA).
Life and work
Pérez Ferreira was born in Buenos Aires on 2 April 1925 and studied Physico-Mathematical Sciences at the University of Buenos Aires (UBA), earning her bachelor's degree in 1952 and becoming a teacher at the university's Faculty of Exact and Natural Sciences. She received her doctorate in Physics from UBA in 1960 with her dissertation titled, The production of pions by pions at energies of around 1 Bev. She then took postgraduate courses at the University of Durham in England and University of Bologna in Italy, receiving a scholarship from the National Atomic Energy Commission (CNEA). She was one of the first scientists at this institution and held positions of high responsibility, initially dedicating herself to scientific research in high energy nuclear physics. She served as head of the Department of Nuclear Physics for ten years and then Director of research and development.
In 1976, she was appointed head of the project called TANDAR (for TANDem ARgentino), to build a 20 MeV tandem-type accelerator for heavy ions. From 1985 to 1989, she was a member of the Council for the Consolidation of Democracy created by President Raúl Alfonsín. In the period from 1987 to 1989 she was president of the country's National Atomic Energy Commission.
In 1990 she began serving as director of the RETINA (Red Teleinformática Académica or Academic Teleinformatics Network) project, an academic network developed before commercial Internet networks existed, to link computers between universities and facilitate communications among them. When the speed of this network was no longer sufficient, she participated in the implementation of advanced academic networks to integrate Argentina into the newer Internet, which at that time was an advanced North American academic network that allowed large volumes of data to be transferred at higher speeds. In December 2001, Argentina was integrated under a program known as RETINA2.
At the Konex 2003 Science and Technology Awards, Pérez Ferreira received a special mention for her role in Argentine science and technology. At the Constituent Atomic Center of the National Atomic Energy Commission, a public space is named after her: the Emma Pérez Ferreira Auditorium.
She died at the age of 80 in Buenos Aires on 28 June 2005.
Bibliography
Pérez Ferreira E. & Waloscheck P. J. (1956). Medición de intensidades totales de neutrones rápidos con placas nucleares y su aplicación a la determinación de la distribución angular de los neutrones de la reacción Li(d,n).
References
1925 births
2005 deaths
Argentine physicists
University of Buenos Aires alumni
20th-century Argentine physicists
20th-century Argentine women scientists
20th-century women physicists
Particle physicists | Emma Pérez Ferreira | Physics | 595 |
2,737,686 | https://en.wikipedia.org/wiki/Kralk%C4%B1z%C4%B1%20Dam | Kralkızı Dam is one of the 21 dams of the Southeastern Anatolia Project of Turkey in Batman. The hydroelectric power plant has a total installed power capacity of 94 MW. The dam was constructed between 1985 and 1997.
The amount of water in the Kralkızı Dam, whose reservoir needs 2 billion cubic metres of water for electricity generation, dropped to 520 million cubic metres due to drought. Because of the falling water level, electricity production was suspended for 46 days in January 2007, and it was reported that the reservoir must hold at least 700 million cubic metres for energy production. These facilities are located on the Maden Stream, one of the important tributaries of the Tigris River, 81 kilometres from Diyarbakır and 6 kilometres from the township of Dicle.
Notes
References
www.un.org.tr/undp/Gap.htm - United Nations Southeast Anatolia Sustainable Human Development Program (GAP)
www.gapturkiye.gen.tr/english/current.html Current status of GAP as of June 2000
www.ecgd.gov.uk Data sheet
External links
www.gap.gov.tr - Official GAP web site
Dams in Diyarbakır Province
Southeastern Anatolia Project
Dams completed in 1997
Rock-filled dams
Dams in the Tigris River basin | Kralkızı Dam | Engineering | 279 |