Dataset columns: id (int64, 580 to 79M), url (string, 31 to 175 chars), text (string, 9 to 245k chars), source (string, 1 to 109 chars), categories (string, 160 classes), token_count (int64, 3 to 51.8k).
25,819,145
https://en.wikipedia.org/wiki/HD%2013931
HD 13931 is a Sun-like star in the northern constellation of Andromeda. It can be viewed with binoculars or a small telescope but is too faint to be seen with the naked eye, having an apparent visual magnitude of 7.60. This object is located at a distance of 154 light years from the Sun, as determined from its parallax, and is drifting further away with a radial velocity of +31 km/s. This is an ordinary G-type main-sequence star with a stellar classification of G0V, which indicates that, like the Sun, it is generating energy through core hydrogen fusion. It is slightly larger, hotter, brighter, and more massive than the Sun. Its metal content is about 8% greater than the Sun's, and it has a quiet (magnetically inactive) chromosphere. The star is an estimated 6.8 billion years old and is spinning with a rotation period of about 26 days. In 2009, a very long-period giant planet, more massive than Jupiter, was found in orbit around the star by measuring changes in the star's radial velocity. This planet takes to orbit the star at the typical distance of . The planet's eccentricity (0.02) is about the same as Earth's. In 2023, the inclination and true mass of HD 13931 b were measured via astrometry. According to a 2018 study, HD 13931 is the most promising Solar System analogue known, since it has a star similar to the Sun and a planet with mass and semi-major axis similar to Jupiter's. Those characteristics yield a probability of almost 75% for the existence of a dynamically stable habitable zone, where an Earth-like planet may exist and sustain life. See also List of extrasolar planets
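The distance figure above follows directly from the parallax relation d [pc] = 1 / p [arcsec]. Below is a minimal sketch of that conversion; the 21.2 mas parallax is an assumed illustrative value chosen to reproduce the roughly 154 light-year distance quoted in the text, not a number taken from the article.

```python
# Parallax-to-distance conversion, a minimal sketch.
# The 21.2 mas parallax below is an assumed example value, not from the text.

LY_PER_PARSEC = 3.26156  # light years in one parsec

def parallax_to_light_years(parallax_mas: float) -> float:
    """Distance in light years from an annual parallax in milliarcseconds,
    using the small-angle relation d [pc] = 1 / p [arcsec]."""
    distance_pc = 1.0 / (parallax_mas / 1000.0)
    return distance_pc * LY_PER_PARSEC

print(f"{parallax_to_light_years(21.2):.0f} light years")  # ~154
```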
HD 13931
Astronomy
391
37,942,128
https://en.wikipedia.org/wiki/Alipogene%20tiparvovec
Alipogene tiparvovec, sold under the brand name Glybera, is a gene therapy treatment designed to reverse lipoprotein lipase deficiency (LPLD), a rare recessive disorder caused by mutations in LPL that can lead to severe pancreatitis. It was recommended for approval by the European Medicines Agency in July 2012, and approved by the European Commission in November of the same year. It was the first marketing authorisation for a gene therapy treatment in either the European Union or the United States. The medication is administered via a series of injections into the leg muscles. Glybera gained infamy as the "million-dollar drug" and proved commercially unsuccessful for a number of reasons. Its cost to patients and payers, together with the rarity of LPLD, high maintenance costs to its manufacturer, and failure to achieve approval in the US, led to the drug being withdrawn after two years on the EU market. As of 2018, only 31 people worldwide had ever been administered Glybera, and there are no plans to sell the drug in the US or Canada. History Glybera was developed over a period of decades by researchers at the University of British Columbia (UBC). In 1986, Michael R. Hayden and John Kastelein began research at UBC, confirming the hypothesis that LPLD was caused by a gene mutation. Years later, in 2002, Hayden and Colin Ross successfully performed gene therapy on test mice to treat LPLD; their findings were featured on the September 2004 cover of Human Gene Therapy. Ross and Hayden next succeeded in treating cats in the same manner, with the help of Boyce Jones. Trials and approval Meanwhile, Kastelein, who had by 1998 become an international expert in lipid disorders, co-founded Amsterdam Molecular Therapeutics (AMT), which acquired rights to Hayden's research with the aim of releasing the drug in Europe. Since LPLD is a rare condition (worldwide prevalence 1–2 per million), related clinical tests and trials have involved unusually small cohort sizes. The first main trial (CT-AMT-011-01) involved just 14 subjects, and by 2015, a total of 27 individuals had been involved in phase III testing. The second phase of testing focused on subjects living along the Saguenay River in Quebec, where LPLD affects people at the highest rate in the world (up to 200 per million) due to the founder effect. Price After over two years of testing, Glybera was approved in the European Union in 2012. However, after spending millions of euros on Glybera's approval, AMT went bankrupt and its assets were acquired. Alipogene tiparvovec was expected to cost around per treatment in 2012 (revised to $1 million in 2015), making it the most expensive medicine in the world at the time. However, replacement therapy, a similar treatment, can cost over $300,000 per year, for life. In 2015, the manufacturer dropped its plans for approval in the US and exclusively licensed the rights to sell the drug in Europe to Chiesi Farmaceutici. As of 2016, only one person had received the drug outside of a clinical trial. In April 2017, Chiesi stopped selling Glybera and announced that, due to lack of demand, it would not pursue renewal of the marketing authorisation in the European Union when it was scheduled to expire that October. Afterwards, the three remaining doses in Chiesi's inventory were given away to two patients in Germany and one patient in Italy. Mechanism The adeno-associated virus serotype 1 (AAV1) viral vector delivers an intact copy of the human lipoprotein lipase (LPL) gene to muscle cells.
The LPL gene is not inserted into the cell's chromosomes but remains as free-floating DNA in the nucleus. The injection is followed by immunosuppressive therapy to prevent immune reactions to the virus. Data from the clinical trials indicate that fat concentrations in the blood were reduced between 3 and 12 weeks after injection in nearly all patients. The advantages of AAV include an apparent lack of pathogenicity, delivery to non-dividing cells, and a much smaller risk of insertion compared to retroviruses, which show random insertion with an accompanying risk of cancer. AAV also presents very low immunogenicity, mainly restricted to generating neutralising antibodies, and little well-defined cytotoxic response. The cloning capacity of the vector is limited to replacement of the virus's 4.8 kilobase genome. See also List of gene therapies Health care costs
Alipogene tiparvovec
Chemistry,Biology
976
61,344
https://en.wikipedia.org/wiki/Lightning
Lightning is a natural phenomenon, more specifically an atmospheric electrical phenomenon. It consists of electrostatic discharges occurring through the atmosphere between two electrically charged regions, either both existing within the atmosphere or one within the atmosphere and one on the ground, with these regions then becoming partially or wholly electrically neutralized. Lightning involves a near-instantaneous release of energy on a scale averaging between 200 megajoules and 7 gigajoules. This discharge may produce a wide range of electromagnetic radiation, from heat created by the rapid movement of electrons, to brilliant flashes of visible light in the form of black-body radiation. Lightning also causes thunder, a sound from the shock wave which develops as gases in the vicinity of the discharge experience a sudden increase in pressure. Lightning occurs most commonly during thunderstorms, though it can also occur in other types of energetic weather systems. Lightning influences the global atmospheric electrical circuit and atmospheric chemistry, and is a natural ignition source of wildfires. The scientific study of lightning is called fulminology. Forms Three primary forms of lightning are distinguished by where they occur: intra-cloud (IC), within a single thundercloud; cloud-to-cloud (CC), between two clouds; and cloud-to-ground (CG), between a cloud and the ground, in which case it is referred to as a lightning strike. Many other observational variants are recognized, including: volcanic lightning, which can occur during volcanic eruptions; "heat lightning", which can be seen from a great distance but not heard; dry lightning, which can cause forest fires; and ball lightning, which is rarely observed scientifically. The most direct effects of lightning on humans occur as a result of cloud-to-ground lightning, even though intra-cloud and cloud-to-cloud flashes are more common. Intra-cloud and cloud-to-cloud lightning indirectly affect humans through their influence on atmospheric chemistry. There are variations of each type, such as "positive" versus "negative" CG flashes, that have different measurable physical characteristics. Cloud to ground (CG) Cloud-to-ground (CG) lightning is a lightning discharge between a thundercloud and the ground. It is initiated by a stepped leader moving down from the cloud, which is met by a streamer moving up from the ground. CG is the least common, but best understood, of all types of lightning. It is easier to study scientifically because it terminates on a physical object, namely the ground, and lends itself to being measured by instruments on the ground. Of the three primary types of lightning, it poses the greatest threat to life and property, since it terminates on the ground or "strikes". The overall discharge, termed a flash, is composed of a number of processes such as preliminary breakdown, stepped leaders, connecting leaders, return strokes, dart leaders, and subsequent return strokes. The conductivity of the electrical ground, be it soil, fresh water, or salt water, may affect the lightning discharge rate and thus the visible characteristics. Positive and negative lightning Cloud-to-ground (CG) lightning is either positive or negative, as defined by the direction of the conventional electric current between cloud and ground. Most CG lightning is negative, meaning that a negative charge is transferred to the ground as electrons flow downward along the lightning channel (in conventional-current terms, the current flows from the ground up to the cloud).
The reverse happens in a positive CG flash, where electrons travel upward along the lightning channel and a positive charge is transferred downward to the ground (in conventional-current terms, the current flows from the cloud to the ground). Positive lightning is less common than negative lightning and on average makes up less than 5% of all lightning strikes. There are a number of mechanisms theorized to result in the formation of positive lightning. These are mainly based on movement or intensification of charge centres in the cloud. Such changes in cloud charging may come about as a result of variations in vertical wind shear or precipitation, or dissipation of the storm. Positive flashes may also result from certain behaviour of in-cloud discharges, e.g. breaking off or branching from existing flashes. Positive lightning strikes tend to be much more intense than their negative counterparts. An average bolt of negative lightning creates an electric current of 30,000 amperes (30 kA), transferring a total of 15 coulombs (C) of electric charge and 1 gigajoule of energy. Large bolts of positive lightning can create up to 120 kA and transfer 350 C. The average positive ground flash has roughly double the peak current of a typical negative flash, and can produce peak currents up to 400 kA and charges of several hundred coulombs. Furthermore, positive ground flashes with high peak currents are commonly followed by long continuing currents, a correlation not seen in negative ground flashes. As a result of their greater power, positive lightning strikes are considerably more dangerous than negative strikes. Positive lightning produces both higher peak currents and longer continuing currents, making it capable of heating surfaces to much higher temperatures, which increases the likelihood of a fire being ignited. The long distances positive lightning can propagate through clear air explain why such strikes are known as "bolts from the blue", giving no warning to observers. Positive lightning has also been shown to trigger the occurrence of upward lightning flashes from the tops of tall structures and is largely responsible for the initiation of sprites several tens of kilometers above ground level. Positive lightning tends to occur more frequently in winter storms, as with thundersnow, during intense tornadoes, and in the dissipation stage of a thunderstorm. Huge quantities of extremely low frequency (ELF) and very low frequency (VLF) radio waves are also generated. Contrary to popular belief, positive lightning flashes do not necessarily originate from the anvil or the upper positive charge region and strike a rain-free area outside of the thunderstorm. This belief is based on the outdated idea that lightning leaders are unipolar and originate from their respective charge region. Despite the popular misconception that flashes originating from the anvil are positive, because they seem to originate from the positive charge region, observations have shown that these are in fact negative flashes. They begin as IC flashes within the cloud; the negative leader then exits the cloud from the positive charge region before propagating through clear air and striking the ground some distance away. Cloud to cloud (CC) and intra-cloud (IC) Lightning discharges may occur between areas of cloud without contacting the ground. When it occurs between two separate clouds, it is known as cloud-to-cloud (CC) or inter-cloud lightning; when it occurs between areas of differing electric potential within a single cloud, it is known as intra-cloud (IC) lightning.
IC lightning is the most frequently occurring type. IC lightning most commonly occurs between the upper anvil portion and lower reaches of a given thunderstorm. This lightning can sometimes be observed at great distances at night as so-called "sheet lightning". In such instances, the observer may see only a flash of light without hearing any thunder. Another term used for cloud–cloud or cloud–cloud–ground lightning is "anvil crawler", due to the habit of the charge, typically originating beneath or within the anvil, of scrambling through the upper cloud layers of a thunderstorm, often generating dramatic multiple branch strokes. These are usually seen as a thunderstorm passes over the observer or begins to decay. The most vivid crawler behavior occurs in well-developed thunderstorms that feature extensive rear anvil shearing. Formation The processes involved in lightning formation fall into the following categories: Large-scale atmospheric phenomena in which charge separation can occur (e.g. a storm) Microscopic physical processes that result in charge separation Large-scale separation of charge and establishment of an electric field Discharge through a lightning channel Atmospheric phenomena in which lightning occurs Lightning primarily occurs when warm air is mixed with colder air masses, resulting in the atmospheric disturbances necessary for polarizing the atmosphere. The disturbances result in storms, and when such a storm also produces lightning and thunder, it is called a thunderstorm. Lightning can also occur during dust storms, forest fires, tornadoes, volcanic eruptions, and even in the cold of winter, where the lightning is known as thundersnow. Hurricanes typically generate some lightning, mainly in the rainbands as much as from the center. Intense forest fires, such as those seen in the 2019–20 Australian bushfire season, can create their own weather systems that can produce lightning (also called fire lightning) and other weather phenomena. Intense heat from a fire causes air to rapidly rise within the smoke plume, causing the formation of pyrocumulonimbus clouds. Cooler air is drawn in by this turbulent, rising air, helping to cool the plume. The rising plume is further cooled by the lower atmospheric pressure at high altitude, allowing the moisture in it to condense into cloud. Pyrocumulonimbus clouds form in an unstable atmosphere. These weather systems can produce dry lightning, fire tornadoes, intense winds, and dirty hail. Airplane contrails have also been observed to influence lightning to a small degree. The water-vapor-dense contrails of airplanes may provide a lower-resistance pathway through the atmosphere, having some influence upon the establishment of an ionic pathway for a lightning flash to follow. Rocket exhaust plumes provided a pathway for lightning when it was witnessed striking the Apollo 12 rocket shortly after takeoff. Thermonuclear explosions, by providing extra material for electrical conduction and a very turbulent localized atmosphere, have been seen to trigger lightning flashes within the mushroom cloud. In addition, intense gamma radiation from large nuclear explosions may develop intensely charged regions in the surrounding air through Compton scattering. The intensely charged space-charge regions create multiple clear-air lightning discharges shortly after the device detonates.
Some high-energy cosmic rays produced by supernovas, as well as solar particles from the solar wind, enter the atmosphere and electrify the air, which may create pathways for lightning channels. Charge separation Charge separation in thunderstorms The details of the charging process are still being studied by scientists, but there is general agreement on some of the basic concepts of thunderstorm charge separation, also known as electrification. Electrification can occur through the triboelectric effect, with electron or ion transfer between colliding bodies. Uncharged, colliding water-drops can become charged because of charge transfer between them (as aqueous ions) in an electric field such as would exist in a thunderstorm. The main charging area in a thunderstorm occurs in the central part of the storm where air is moving upward rapidly (updraft) and temperatures range from . In that area, the combination of temperature and rapid upward air movement produces a mixture of super-cooled cloud droplets (small water droplets below freezing), small ice crystals, and graupel (soft hail). The updraft carries the super-cooled cloud droplets and very small ice crystals upward. At the same time, the graupel, which is considerably larger and denser, tends to fall or be suspended in the rising air. The differences in the movement of the precipitation cause collisions to occur. When the rising ice crystals collide with graupel, the ice crystals become positively charged and the graupel becomes negatively charged. The updraft carries the positively charged ice crystals upward toward the top of the storm cloud. The larger and denser graupel is either suspended in the middle of the thunderstorm cloud or falls toward the lower part of the storm. The result is that the upper part of the thunderstorm cloud becomes positively charged while the middle to lower part of the thunderstorm cloud becomes negatively charged. The upward motions within the storm and winds at higher levels in the atmosphere tend to cause the small ice crystals (and positive charge) in the upper part of the thunderstorm cloud to spread out horizontally some distance from the thunderstorm cloud base. This part of the thunderstorm cloud is called the anvil. While this is the main charging process for the thunderstorm cloud, some of these charges can be redistributed by air movements within the storm (updrafts and downdrafts). In addition, there is a small but important positive charge buildup near the bottom of the thunderstorm cloud due to the precipitation and warmer temperatures. Charge separation in different phases of water The induced separation of charge in pure liquid water has been known since the 1840s, as has the electrification of pure liquid water by the triboelectric effect. William Thomson (Lord Kelvin) demonstrated that charge separation in water occurs in the usual electric fields at the Earth's surface, and developed a continuous electric field measuring device using that knowledge. The physical separation of charge into different regions using liquid water was demonstrated by Kelvin with the Kelvin water dropper. The most likely charge-carrying species were considered to be the aqueous hydrogen ion and the aqueous hydroxide ion. An electron is not stable in liquid water, with respect to a hydroxide ion plus dissolved hydrogen, on the time scales involved in thunderstorms. The electrical charging of solid water ice has also been considered.
The charged species were again considered to be the hydrogen ion and the hydroxide ion. The charge carrier in lightning is mainly electrons in a plasma. The process of going from charge as ions (positive hydrogen ion and negative hydroxide ion) associated with liquid water or solid water to charge as electrons associated with lightning must involve some form of electrochemistry, that is, the oxidation and/or the reduction of chemical species. As hydroxide functions as a base and carbon dioxide is an acidic gas, it is possible that charged water clouds, in which the negative charge is in the form of the aqueous hydroxide ion, interact with atmospheric carbon dioxide to form aqueous carbonate ions and aqueous hydrogen carbonate ions. Establishing an electric field In order for an electrostatic discharge to occur, two preconditions are necessary: first, a sufficiently high potential difference between two regions of space must exist, and second, a high-resistance medium must obstruct the free, unimpeded equalization of the opposite charges. The atmosphere provides the electrical insulation, or barrier, that prevents free equalization between charged regions of opposite polarity. Meanwhile, a thunderstorm can provide the charge separation and aggregation in certain regions of the cloud. When the local electric field exceeds the dielectric strength of damp air (about 3 MV/m), electrical discharge results in a strike, often followed by commensurate discharges branching from the same path. The mechanisms that cause the charges to build up to lightning are still a matter of scientific investigation. A 2016 study confirmed that dielectric breakdown is involved. Lightning may be caused by the circulation of warm, moisture-filled air through electric fields. Ice or water particles then accumulate charge as in a Van de Graaff generator. As a thundercloud moves over the surface of the Earth, an equal electric charge, but of opposite polarity, is induced on the Earth's surface underneath the cloud. The induced positive surface charge, when measured against a fixed point, will be small as the thundercloud approaches, increasing as the center of the storm arrives and dropping as the thundercloud passes. The referential value of the induced surface charge could be roughly represented as a bell curve. The oppositely charged regions create an electric field within the air between them. This electric field varies in relation to the strength of the surface charge on the base of the thundercloud: the greater the accumulated charge, the higher the electric field. Electrical discharge as flashes and strikes The best-studied and best-understood form of lightning is cloud-to-ground (CG) lightning. Although more common, intra-cloud (IC) and cloud-to-cloud (CC) flashes are very difficult to study, given that there are no "physical" points to monitor inside the clouds. Also, given the very low probability of lightning striking the same point repeatedly and consistently, scientific inquiry is difficult even in areas of high CG frequency. Lightning leaders In a process not well understood, a bidirectional channel of ionized air, called a "leader", is initiated between oppositely charged regions in a thundercloud. Leaders are electrically conductive channels of ionized gas that propagate through, or are otherwise attracted to, regions with a charge opposite to that of the leader tip.
The negative end of the bidirectional leader fills a positive charge region, also called a well, inside the cloud, while the positive end fills a negative charge well. Leaders often split, forming branches in a tree-like pattern. In addition, negative and some positive leaders travel in a discontinuous fashion, in a process called "stepping". The resulting jerky movement of the leaders can be readily observed in slow-motion videos of lightning flashes. It is possible for one end of the leader to fill the oppositely-charged well entirely while the other end is still active. When this happens, the leader end which filled the well may propagate outside of the thundercloud and result in either a cloud-to-air flash or a cloud-to-ground flash. In a typical cloud-to-ground flash, a bidirectional leader initiates between the main negative and lower positive charge regions in a thundercloud. The weaker positive charge region is filled quickly by the negative leader, which then propagates toward the inductively charged ground. The positively and negatively charged leaders proceed in opposite directions, positive upwards within the cloud and negative towards the earth. Both ionic channels proceed, in their respective directions, in a number of successive spurts. Each leader "pools" ions at the leading tips, shooting out one or more new leaders, momentarily pooling again to concentrate charged ions, then shooting out another leader. The negative leader continues to propagate and split as it heads downward, often speeding up as it gets closer to the Earth's surface. About 90% of ionic channel lengths between "pools" are approximately in length. The establishment of the ionic channel takes a comparatively long time (hundreds of milliseconds) compared with the resulting discharge, which occurs within a few dozen microseconds. The electric current needed to establish the channel, measured in the tens or hundreds of amperes, is dwarfed by subsequent currents during the actual discharge. Initiation of the lightning leader is not well understood. The electric field strength within the thundercloud is not typically large enough to initiate this process by itself. Many hypotheses have been proposed. One hypothesis postulates that showers of relativistic electrons are created by cosmic rays and are then accelerated to higher velocities via a process called runaway breakdown. As these relativistic electrons collide with and ionize neutral air molecules, they initiate leader formation. Another hypothesis involves locally enhanced electric fields being formed near elongated water droplets or ice crystals. Percolation theory, especially for the case of biased percolation, describes random connectivity phenomena, which produce an evolution of connected structures similar to that of lightning strikes. A streamer avalanche model has recently been favored by observational data taken by LOFAR during storms. Upward streamers When a stepped leader approaches the ground, the presence of opposite charges on the ground enhances the strength of the electric field. The electric field is strongest on grounded objects whose tops are closest to the base of the thundercloud, such as trees and tall buildings. If the electric field is strong enough, a positively charged ionic channel, called a positive or upward streamer, can develop from these points. This was first theorized by Heinz Kasemir.
As negatively charged leaders approach, increasing the localized electric field strength, grounded objects already experiencing corona discharge exceed a threshold and form upward streamers. Attachment Once a downward leader connects to an available upward leader, a process referred to as attachment, a low-resistance path is formed and discharge may occur. Photographs have been taken in which unattached streamers are clearly visible. The unattached downward leaders are also visible in branched lightning, none of which are connected to the earth, although it may appear they are. High-speed videos can show the attachment process in progress. Discharge – Return stroke Once a conductive channel bridges the air gap between the negative charge excess in the cloud and the positive surface charge excess below, there is a large drop in resistance across the lightning channel. Electrons accelerate rapidly as a result, in a zone beginning at the point of attachment that expands across the entire leader network at up to one-third of the speed of light. This is the "return stroke" and it is the most luminous and noticeable part of the lightning discharge. A large electric charge flows along the plasma channel, from the cloud to the ground, neutralising the positive ground charge as electrons flow away from the strike point to the surrounding area. This huge surge of current creates large radial voltage differences along the surface of the ground. Called step potentials, they are responsible for more injuries and deaths among groups of people or other animals than the strike itself. Electricity takes every path available to it. Such step potentials will often cause current to flow through one leg and out another, electrocuting an unlucky human or animal standing near the point where the lightning strikes. The electric current of the return stroke averages 30 kiloamperes for a typical negative CG flash, often referred to as "negative CG" lightning. In some cases, a ground-to-cloud (GC) lightning flash may originate from a positively charged region on the ground below a storm. These discharges normally originate from the tops of very tall structures, such as communications antennas. The rate at which the return stroke current travels has been found to be around 100,000 km/s (one-third of the speed of light). A typical cloud-to-ground lightning flash culminates in the formation of an electrically conducting plasma channel through the air in excess of tall, from within the cloud to the ground's surface. The massive flow of electric current occurring during the return stroke, combined with the rate at which it occurs (measured in microseconds), rapidly superheats the completed leader channel, forming a highly electrically conductive plasma channel. The core temperature of the plasma during the return stroke may exceed , causing it to radiate with a brilliant, blue-white color. Once the electric current stops flowing, the channel cools and dissipates over tens or hundreds of milliseconds, often disappearing as fragmented patches of glowing gas. The nearly instantaneous heating during the return stroke causes the air to expand explosively, producing a powerful shock wave which is heard as thunder. Discharge – Re-strike High-speed videos (examined frame-by-frame) show that most negative CG lightning flashes are made up of 3 or 4 individual strokes, though there may be as many as 30.
Each re-strike is separated by a relatively large amount of time, typically 40 to 50 milliseconds, as other charged regions in the cloud are discharged in subsequent strokes. Re-strikes often cause a noticeable "strobe light" effect. To understand why multiple return strokes utilize the same lightning channel, one needs to understand the behavior of positive leaders, which a typical ground flash effectively becomes following the negative leader's connection with the ground. Positive leaders decay more rapidly than negative leaders do. For reasons not well understood, bidirectional leaders tend to initiate on the tips of the decayed positive leaders, in which the negative end attempts to re-ionize the leader network. These leaders, also called recoil leaders, usually decay shortly after their formation. When they do manage to make contact with a conductive portion of the main leader network, a return-stroke-like process occurs and a dart leader travels across all or a portion of the length of the original leader. The dart leaders making connections with the ground are what cause a majority of subsequent return strokes. Each successive stroke is preceded by intermediate dart leader strokes that have a faster rise time but lower amplitude than the initial return stroke. Each subsequent stroke usually re-uses the discharge channel taken by the previous one, but the channel may be offset from its previous position as wind displaces the hot channel. Since recoil and dart leader processes do not occur on negative leaders, subsequent return strokes very seldom utilize the same channel on positive ground flashes, which are explained later in the article. Discharge – Transient currents during flash The electric current within a typical negative CG lightning discharge rises very quickly to its peak value in 1–10 microseconds, then decays more slowly over 50–200 microseconds. The transient nature of the current within a lightning flash results in several phenomena that need to be addressed in the effective protection of ground-based structures. Rapidly changing (alternating) currents tend to travel on the surface of a conductor, in what is called the skin effect, unlike direct currents, which "flow through" the entire conductor like water through a hose. Hence, conductors used in the protection of facilities tend to be multi-stranded, with small wires woven together. This increases the total bundle surface area in inverse proportion to the individual strand radius, for a fixed total cross-sectional area, as the numerical sketch below illustrates. The rapidly changing currents also create electromagnetic pulses (EMPs) that radiate outward from the ionic channel. This is a characteristic of all electrical discharges. The radiated pulses rapidly weaken as their distance from the origin increases. However, if they pass over conductive elements such as power lines, communication lines, or metallic pipes, they may induce a current which travels outward to its termination. The surge current is inversely related to the surge impedance: the higher the impedance, the lower the current. This is the surge that, more often than not, results in the destruction of delicate electronics, electrical appliances, or electric motors. Devices known as surge protection devices (SPDs) or transient voltage surge suppressors (TVSSs) attached in parallel with these lines can detect the lightning flash's transient irregular current and, through alteration of its physical properties, route the spike to an attached earthing ground, thereby protecting the equipment from damage.
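The stranding claim above is simple geometry: splitting a fixed cross-sectional area A into strands of radius r gives a combined surface area per unit length of 2A/r, so halving the strand radius doubles the surface available to a skin-effect current. A minimal numerical sketch follows; the cross-section and radii are illustrative values, not taken from any cited standard.

```python
import math

# Surface area of a stranded conductor bundle of fixed total cross-section.
# The area and radii below are assumed example values.

TOTAL_AREA = 50e-6  # total cross-sectional area in m^2 (assumed)

for strand_radius in (2e-3, 1e-3, 0.5e-3, 0.25e-3):  # strand radii in metres
    n_strands = TOTAL_AREA / (math.pi * strand_radius ** 2)
    # Surface per metre of bundle = strand count x strand circumference = 2A/r.
    surface_per_m = n_strands * 2 * math.pi * strand_radius
    print(f"r = {strand_radius * 1e3:.2f} mm: {n_strands:6.1f} strands, "
          f"surface = {surface_per_m:.2f} m^2 per metre")
```

Each halving of the strand radius doubles the bundle's surface, matching the inverse proportionality stated above.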
Distribution, frequency and properties Global monitoring indicates that lightning on Earth occurs at an average frequency of approximately 44 (± 5) times per second, equating to nearly 1.4 billion flashes per year. The median flash duration is 0.52 seconds, made up of a number of much shorter strokes of around 60 to 70 microseconds each. Occurrences are distributed unevenly across the planet, with about 70% occurring over land in the tropics, where atmospheric convection is the greatest. Many factors affect the frequency, distribution, strength, and physical properties of a typical lightning flash in a particular region of the world. These factors include ground elevation, latitude, prevailing wind currents, relative humidity, and proximity to warm and cold bodies of water. To a certain degree, the proportions of intra-cloud, cloud-to-cloud, and cloud-to-ground lightning may also vary by season in middle latitudes. This occurs from both the mixture of warmer and colder air masses and differences in moisture concentrations, and it generally happens at the boundaries between them. The flow of warm ocean currents past drier land masses, such as the Gulf Stream, partially explains the elevated frequency of lightning in the Southeast United States. Because large bodies of water lack the topographic variation that would result in atmospheric mixing, lightning is notably less frequent over the world's oceans than over land. The North and South Poles see few thunderstorms and therefore have the least lightning. In general, cloud-to-ground (CG) lightning flashes account for only 25% of all lightning flashes worldwide. Since the base of a thunderstorm is usually negatively charged, this is where most CG lightning originates. This region is typically at the elevation where freezing occurs within the cloud. Freezing, combined with collisions between ice and water, appears to be a critical part of the initial charge development and separation process. During wind-driven collisions, ice crystals tend to develop a positive charge, while a heavier, slushy mixture of ice and water (called graupel) develops a negative charge. Updrafts within a storm cloud separate the lighter ice crystals from the heavier graupel, causing the top region of the cloud to accumulate a positive space charge while the lower level accumulates a negative space charge. Because the concentrated charge within the cloud must exceed the insulating properties of air, and this increases proportionally to the distance between the cloud and the ground, the proportion of CG strikes (versus CC or IC discharges) becomes greater when the cloud is closer to the ground. In the tropics, where the freezing level is generally higher in the atmosphere, only 10% of lightning flashes are CG. At the latitude of Norway (around 60° North), where the freezing elevation is lower, 50% of lightning is CG. Lightning is usually produced by cumulonimbus clouds, which have bases that are typically above the ground and tops up to in height. The place on Earth where lightning occurs most often is over Lake Maracaibo, where the Catatumbo lightning phenomenon produces 250 bolts of lightning a day. This activity occurs, on average, 297 days a year. The second-highest lightning density is near the village of Kifuka in the mountains of the eastern Democratic Republic of the Congo, where the elevation is around . On average, this region receives . Other lightning hotspots include Singapore and Lightning Alley in Central Florida.
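The headline rate and the annual total quoted above are mutually consistent, as a quick arithmetic check shows (a sketch using the 44 flashes-per-second figure given in the text):

```python
# Sanity check: ~44 flashes per second should give ~1.4 billion per year.
FLASHES_PER_SECOND = 44
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

flashes_per_year = FLASHES_PER_SECOND * SECONDS_PER_YEAR
print(f"{flashes_per_year:.2e}")  # ~1.39e+09, i.e. nearly 1.4 billion
```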
According to the World Meteorological Organization, on April 29, 2020, a bolt 768 km (477.2 mi) long was observed in the southern U.S., sixty km (37 mi) longer than the previous distance record (southern Brazil, October 31, 2018). A single flash in Uruguay and northern Argentina on June 18, 2020, lasted for 17.1 seconds, 0.37 seconds longer than the previous record (March 4, 2019, also in northern Argentina). Researchers at the University of Florida found that the final one-dimensional speeds of 10 flashes observed were between 1.0×10^5 and 1.4×10^6 m/s, with an average of 4.4×10^5 m/s. Effects A lightning strike can unleash a variety of effects, some temporary, including very brief emission of light, sound and electromagnetic radiation, and some long-lasting, such as death, damage, and atmospheric and environmental changes. Injury, damage and destruction The immense amount of energy transferred in a lightning strike can have potentially devastating effects in a multitude of areas. To nature Objects struck by lightning experience heat and magnetic forces of great magnitude. Consequently: The heat created by lightning currents travelling through a tree may vaporize its sap, causing a steam explosion that rips off bark or even bursts the trunk. Similarly, water in a fractured rock may be rapidly heated such that the rock splits further apart. A struck tree may catch fire, or a forest fire may be started. See also fire lightning below. As lightning travels through sandy soil, the soil surrounding the plasma channel may melt, forming tubular structures called fulgurites. To man-made structures and their contents Buildings or tall structures hit by lightning may be damaged as the lightning seeks unimpeded paths to the ground. By safely conducting a lightning strike to the ground, a lightning protection system, usually incorporating at least one lightning rod, can greatly reduce the probability of severe property damage. Surge protection devices (SPDs) can additionally or alternatively be used to help protect electrical installations from lightning-induced electrical surges that risk damaging or destroying electrical equipment or starting a fire. Electrical fires threaten not only structures but all assets, personal possessions, and living beings (people, pets and livestock) within. What, if any, protection system a building or structure requires is determined through a risk assessment. Threats to structures come not only from direct strikes to the structure itself, but also from direct or indirect strikes to connected electrically conductive services (electrical power lines; communication lines; water/gas pipes), or even to the surrounding area, from which a surge may reach a service connection as it spreads out into the ground. To aircraft Aircraft are highly susceptible to being struck due to their metallic fuselages, but lightning strikes are generally not dangerous to them. Due to the conductive properties of aluminium alloy, the fuselage acts as a Faraday cage. Present-day aircraft are built to be safe from a lightning strike, and passengers will generally not even know that one has happened. However, there have been suspicions that lightning strikes can ignite fuel vapor and cause explosions, and nearby lightning can momentarily blind the pilot and cause permanent errors in magnetic compasses. To living beings Although 90 percent of people struck by lightning survive, humans and other animals struck by lightning may suffer severe injury due to internal organ and nervous system damage.
Noise (Thunder) Because the electrostatic discharge of terrestrial lightning superheats the air to plasma temperatures along the length of the discharge channel in a short duration, kinetic theory dictates that gaseous molecules undergo a rapid increase in pressure and thus expand outward from the lightning, creating a shock wave audible as thunder. Since the sound waves propagate not from a single point source but along the length of the lightning's path, the sound origin's varying distances from the observer can generate a rolling or rumbling effect. Perception of the sonic characteristics is further complicated by factors such as the irregular and possibly branching geometry of the lightning channel, by acoustic echoing from terrain, and by the usually multiple-stroke characteristic of the lightning strike. Thunder is heard as a rolling, gradually dissipating rumble because the sound from different portions of a long stroke arrives at slightly different times. Lightning at a sufficient distance may be seen and not heard; there is data that a lightning storm can be seen at over whereas the thunder travels about . Anecdotally, there are many examples of people describing a "storm directly overhead" or "all around" and yet "no thunder". Since thunderclouds can be up to high, lightning occurring high up in the cloud may appear close but is actually too far away to produce noticeable thunder. The distance approximation trick Light travels at about , while sound only travels through air at about . An observer can approximate the distance to the strike by timing the interval between the visible lightning and the audible thunder it generates (a short calculation sketch follows this passage). A lightning flash preceding its thunder by one second would be approximately away; thus a delay of three seconds would indicate a distance of about , while a flash preceding thunder by five seconds would indicate a distance of roughly . Consequently, a lightning strike observed at a very close distance will be accompanied by a sudden clap of thunder, with almost no perceptible time lapse, possibly accompanied by the smell of ozone (O3). Electromagnetic radiation and interference Electromagnetic waves are emitted in a variety of wavelengths, most obviously as visible light, the bright flash itself. Radio frequency radiation Lightning discharges generate radio-frequency electromagnetic waves which can be received thousands of kilometers from their source. The discharge by itself is a relatively simple, short-lived dipole source that creates a single electromagnetic pulse with a duration of about 1 ms and a wide spectral density. In the absence of materials with magnetic or electrical interaction properties in the nearby environment, at large distances in the far-field zone the electromagnetic wave is proportional to the second derivative of the discharge current. This is what happens with high-altitude discharges or discharges over areas of dry land. In other cases, the surrounding environment will change the shape of the source signal by absorbing some of its spectrum and converting it into heat, or by re-transmitting it back as modified electromagnetic waves. High-energy radiation The production of X-rays by a bolt of lightning was predicted as early as 1925 by C.T.R. Wilson, but no evidence was found until 2001/2002, when researchers at the New Mexico Institute of Mining and Technology detected X-ray emissions from an induced lightning strike along a grounded wire trailed behind a rocket shot into a storm cloud.
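Here is a minimal sketch of the flash-to-bang estimate from the distance approximation trick above. The speed figures were elided in the text, so the standard values used here (sound in air near 343 m/s; light treated as arriving instantly) are assumptions, not quotes from the article.

```python
# Flash-to-bang distance estimate. Assumes a sound speed of ~343 m/s
# (dry air, ~20 C) and treats the light as instantaneous; both are
# standard assumptions, not values quoted in the article.

SPEED_OF_SOUND = 343.0  # m/s

def strike_distance_km(delay_seconds: float) -> float:
    """Approximate distance to a strike from the flash-to-thunder delay."""
    return delay_seconds * SPEED_OF_SOUND / 1000.0

for delay in (1, 3, 5):
    print(f"{delay} s -> ~{strike_distance_km(delay):.1f} km")
# 1 s -> ~0.3 km, 3 s -> ~1.0 km, 5 s -> ~1.7 km
```

The familiar rule of thumb that a three-second delay corresponds to roughly one kilometre drops out directly.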
In the same year, University of Florida and Florida Tech researchers used an array of electric field and X-ray detectors at a lightning research facility in North Florida to confirm that natural lightning makes X-rays in large quantities during the propagation of stepped leaders. The cause of the X-ray emissions is still a matter for research, as the temperature of lightning is too low to account for the X-rays observed. A number of observations by space-based telescopes have revealed even higher-energy gamma ray emissions, the so-called terrestrial gamma-ray flashes (TGFs). These observations pose a challenge to current theories of lightning, especially with the recent discovery of the clear signatures of antimatter produced in lightning. Recent research has shown that secondary species produced by these TGFs, such as electrons, positrons, neutrons, or protons, can gain energies of up to several tens of MeV. Environmental changes More permanent or longer-lasting environmental changes include the following. Ozone and nitrogen oxides (atmospheric) The very high temperatures generated by lightning lead to significant local increases in ozone and oxides of nitrogen. Each lightning flash in temperate and sub-tropical areas produces 7 kg of on average. In the troposphere the effect of lightning can increase by 90% and ozone by 30%. Ground fertilisation Lightning serves an important role in the nitrogen cycle by oxidizing diatomic nitrogen in the air into nitrates, which are deposited by rain and can fertilize the growth of plants and other organisms. Induced permanent magnetism The movement of electrical charges produces a magnetic field (see electromagnetism). The intense currents of a lightning discharge create a fleeting but very strong magnetic field. Where the lightning current path passes through rock, soil, or metal, these materials can become permanently magnetized. This effect is known as lightning-induced remanent magnetism, or LIRM. These currents follow the least resistive path, often horizontally near the surface but sometimes vertically, where faults, ore bodies, or ground water offer a less resistive path. One theory suggests that lodestones, natural magnets encountered in ancient times, were created in this manner. Lightning-induced magnetic anomalies can be mapped in the ground, and analysis of magnetized materials can confirm that lightning was the source of the magnetization and provide an estimate of the peak current of the lightning discharge. Magnetic hallucinations Research at the University of Innsbruck has calculated that magnetic fields generated by plasma may induce hallucinations in subjects located within of a severe lightning storm, similar to what happens in transcranial magnetic stimulation (TMS). Extraterrestrial Lightning has been observed within the atmospheres of planets other than Earth, such as Jupiter, Saturn, and probably Uranus and Neptune. Lightning on Jupiter is far more energetic than on Earth, despite seeming to be generated via the same mechanism. Recently, a new type of lightning was detected on Jupiter, thought to originate from ammonia-containing "mushballs". On Saturn, lightning, initially referred to as "Saturn Electrostatic Discharge", was discovered by the Voyager 1 mission. Lightning on Venus has remained a controversial subject after decades of study. During the Soviet Venera and U.S. Pioneer missions of the 1970s and 1980s, signals suggesting lightning may be present in the upper atmosphere were detected.
The short Cassini–Huygens mission fly-by of Venus in 1999 detected no signs of lightning, but radio pulses recorded by the spacecraft Venus Express (which began orbiting Venus in April 2006) may originate from lightning on Venus. Detection and monitoring The earliest detector invented to warn of the approach of a thunderstorm was the lightning bell. Benjamin Franklin installed one such device in his house. The detector was based on an electrostatic device called the "electric chimes" invented by Andrew Gordon in 1742. Lightning discharges generate a wide range of electromagnetic radiation, including radio-frequency pulses. The times at which a pulse from a given lightning discharge arrives at several receivers can be used to locate the source of the discharge with a precision on the order of metres (a toy sketch of this time-of-arrival method follows this passage). The United States federal government has constructed a nationwide grid of such lightning detectors, allowing lightning discharges to be tracked in real time throughout the continental U.S. In addition, Blitzortung (a private global detection system that consists of over 500 detection stations owned and operated by hobbyists/volunteers) provides near real-time lightning maps at . The Earth-ionosphere waveguide traps electromagnetic VLF and ELF waves. Electromagnetic pulses transmitted by lightning strikes propagate within that waveguide. The waveguide is dispersive, which means that the group velocity of these signals depends on frequency. The difference in the group time delay of a lightning pulse at adjacent frequencies is proportional to the distance between transmitter and receiver. Together with direction-finding methods, this allows locating lightning strikes up to distances of 10,000 km from their origin. Moreover, the eigenfrequencies of the Earth-ionospheric waveguide, the Schumann resonances at about 7.5 Hz, are used to determine global thunderstorm activity. In addition to ground-based lightning detection, several instruments aboard satellites have been constructed to observe lightning distribution. These include the Optical Transient Detector (OTD), aboard the OrbView-1 satellite launched on April 3, 1995, and the subsequent Lightning Imaging Sensor (LIS) aboard TRMM, launched on November 28, 1997. Starting in 2016, the National Oceanic and Atmospheric Administration launched Geostationary Operational Environmental Satellite–R Series (GOES-R) weather satellites outfitted with Geostationary Lightning Mapper (GLM) instruments, which are near-infrared optical transient detectors that can detect the momentary changes in an optical scene, indicating the presence of lightning. The lightning detection data can be converted into a real-time map of lightning activity across the Western Hemisphere; this mapping technique has been implemented by the United States National Weather Service. In 2022, EUMETSAT plans to launch the Lightning Imager (MTG-I LI) on board Meteosat Third Generation. This will complement NOAA's GLM. MTG-I LI will cover Europe and Africa and will include products on events, groups and flashes. Artificial triggering Rocket-triggered Lightning can be "triggered" by launching specially designed rockets trailing spools of wire into thunderstorms. The wire unwinds as the rocket ascends, creating an elevated ground that can attract descending leaders. If a leader attaches, the wire provides a low-resistance pathway for a lightning flash to occur. The wire is vaporized by the return current flow, creating a straight lightning plasma channel in its place.
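Below is a toy sketch of the time-of-arrival location idea mentioned in the detection discussion above. Real networks solve the hyperbolic arrival-time-difference equations directly and achieve metre-scale precision; this illustrative version, with made-up station and source coordinates, simply grid-searches for the point whose predicted arrival-time differences best match the observations.

```python
import itertools
import math

# Toy time-difference-of-arrival (TDOA) lightning location on a flat plane.
# Station and source coordinates are invented example values.

C = 3.0e8  # radio pulse propagation speed, m/s

stations = [(0.0, 0.0), (100e3, 0.0), (0.0, 120e3), (90e3, 110e3)]  # metres
true_source = (40e3, 70e3)  # the location the search should recover

# Simulated noise-free arrival times. A common unknown origin time cancels
# out below, because only pairwise differences are compared.
arrivals = [math.dist(true_source, s) / C for s in stations]

def residual(x: float, y: float) -> float:
    """RMS mismatch between predicted and observed arrival-time differences."""
    pred = [math.dist((x, y), s) / C for s in stations]
    pairs = itertools.combinations(range(len(stations)), 2)
    diffs = [(pred[i] - pred[j]) - (arrivals[i] - arrivals[j]) for i, j in pairs]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Brute-force search on a 1 km grid; a real solver would use least squares.
best = min(((x * 1e3, y * 1e3) for x in range(121) for y in range(121)),
           key=lambda p: residual(*p))
print(f"estimated source: ({best[0] / 1e3:.0f} km, {best[1] / 1e3:.0f} km)")
# -> estimated source: (40 km, 70 km)
```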
This method allows scientific research of lightning to occur in a more controlled and predictable manner. The International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, Florida, typically uses rocket-triggered lightning in its research studies. Laser-triggered Since the 1970s, researchers have attempted to trigger lightning strikes by means of infrared or ultraviolet lasers, which create a channel of ionized gas through which the lightning would be conducted to ground. Such triggering of lightning is intended to protect rocket launching pads, electric power facilities, and other sensitive targets. In New Mexico, U.S., scientists tested a new terawatt laser which provoked lightning. Scientists fired ultra-fast pulses from an extremely powerful laser, sending several terawatts into the clouds to call down electrical discharges in storm clouds over the region. The laser pulses create channels of ionized molecules known as filaments. Before the lightning strikes earth, the filaments lead electricity through the clouds, playing the role of lightning rods. Researchers generated filaments that lived too short a time to trigger a real lightning strike. Nevertheless, a boost in electrical activity within the clouds was registered. According to the French and German scientists who ran the experiment, the fast pulses sent from the laser will be able to provoke lightning strikes on demand. Statistical analysis showed that their laser pulses indeed enhanced the electrical activity in the thundercloud where it was aimed; in effect they generated small local discharges located at the position of the plasma channels. Impact of climate change and air pollution Due to the low resolution of global climate models, accurately representing lightning in these climate models is difficult, largely due to their inability to simulate the convection and cloud ice fundamental to lightning formation. Research from the Future Climate for Africa programme demonstrates that using a convection-permitting model over Africa can more accurately capture convective thunderstorms and the distribution of ice particles. This research indicates that climate change may increase the total amount of lightning only slightly: the total number of lightning days per year decreases, while more cloud ice and stronger convection lead to more lightning strikes occurring on days when lightning does occur. A study from the University of Washington looked at lightning activity in the Arctic from 2010 to 2020. The ratio of Arctic summertime strokes to total global strokes was observed to be increasing with time, indicating that the region is becoming more influenced by lightning. The fraction of strokes above 65 degrees north was found to be increasing linearly with the NOAA global temperature anomaly, and grew by a factor of 3 as the anomaly increased from 0.65 to 0.95 °C. There is growing evidence that lightning activity is increased by particulate emissions (a form of air pollution). However, lightning may also improve air quality and clean greenhouse gases such as methane from the atmosphere, while creating nitrogen oxide and ozone at the same time. Lightning is also a major cause of wildfire, and wildfire can contribute to climate change as well. More studies are warranted to clarify their relationship. In culture and religion Humans have deified lightning for millennia.
Idiomatic expressions derived from lightning, such as the English expression "bolt from the blue", are common across languages. At all times people have been fascinated by the sight of lightning. The fear of lightning is called astraphobia. The first known photograph of lightning is from 1847, by Thomas Martin Easterly. The first surviving photograph is from 1882, by William Nicholson Jennings, a photographer who spent half his life capturing pictures of lightning and proving its diversity. Religion and mythology In many cultures, lightning has been viewed as a sign or part of a deity or a deity in and of itself. These include the Greek god Zeus, the Aztec god Tlaloc, the Mayan God K, Slavic mythology's Perun, the Baltic Pērkons/Perkūnas, Thor in Norse mythology, Ukko in Finnish mythology, the Hindu god Indra, the Yoruba god Sango, Illapa in Inca mythology, and the Shinto god Raijin. The ancient Etruscans produced guides to brontoscopic and fulgural divination of the future based on the omens supposedly displayed by thunder or lightning occurring on particular days of the year or in particular places. Such use of thunder and lightning in divination is also known as ceraunoscopy, a kind of aeromancy. In the traditional religion of the African Bantu tribes, lightning is a sign of the ire of the gods. Scriptures in Judaism, Islam, and Christianity also ascribe supernatural importance to lightning. In Christianity, the Second Coming of Jesus is compared to lightning. In popular culture Although sometimes used figuratively, the idea that lightning never strikes the same place twice is a common myth. In fact, lightning can, and often does, strike the same place more than once. Lightning in a thunderstorm is more likely to strike objects and spots that are more prominent or conductive. For instance, lightning strikes the Empire State Building in New York City on average 23 times per year. In French and Italian, the expression for "love at first sight" is coup de foudre and colpo di fulmine, respectively, which literally translated mean "lightning strike". Some European languages have a separate word for lightning which strikes the ground (as opposed to lightning in general); often it is a cognate of the English word "rays". The name of Australia's most celebrated thoroughbred horse, Phar Lap, derives from the shared Zhuang and Thai word for lightning. Political and military culture The bolt of lightning in heraldry is called a thunderbolt and is shown as a zigzag with non-pointed ends. This symbol usually represents power and speed. Some political parties use lightning flashes as a symbol of power, such as the People's Action Party in Singapore, the British Union of Fascists during the 1930s, and the National States' Rights Party in the United States during the 1950s. The Schutzstaffel, the paramilitary wing of the Nazi Party, used the Sig rune, which symbolizes lightning, in its logo. The German word Blitzkrieg, which means "lightning war", was a major offensive strategy of the German army during World War II. The lightning bolt is a common insignia for military communications units throughout the world. A lightning bolt is also the NATO symbol for a signal asset. See also Lightning strike Volcanic lightning Paleolightning Apollo 12 – a Saturn V rocket that was struck by lightning shortly after liftoff.
Harvesting lightning energy Keraunography Keraunomedicine – medical study of lightning casualties Lichtenberg figure Lightning injury Lightning-prediction system Roy Sullivan – recognized by Guinness World Records as the person struck by lightning more recorded times than any other human St. Elmo's fire Upper-atmospheric lightning Vela satellites – satellites which could record lightning superbolts References Citations Sources Further reading Early lightning research. External links World Wide Lightning Location Network Feynman's lecture on lightning Articles containing video clips Atmospheric electricity Electric arcs Electrical breakdown Electrical phenomena Terrestrial plasmas Space plasmas Storm Weather hazards Hazards of outdoor recreation
Lightning
Physics
10,150
2,288,308
https://en.wikipedia.org/wiki/White%20box%20%28software%20engineering%29
A white box (or glass box, clear box, or open box) is a subsystem whose internals can be viewed but usually not altered. The term is used in systems engineering, software engineering, and in intelligent user interface design, where it is closely related to recent interest in explainable artificial intelligence. Having access to the subsystem internals in general makes the subsystem easier to understand, but also easier to hack; for example, if a programmer can examine source code, weaknesses in an algorithm are much easier to discover. That makes white-box testing much more effective than black-box testing, but considerably more difficult because of the sophistication needed on the part of the tester to understand the subsystem. The notion of a "Black Box in a Glass Box" was originally used as a metaphor for teaching complex topics to computing novices. See also Black box Gray-box testing References Software testing
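To make the contrast concrete, here is a minimal illustrative sketch (not part of the original article); the function and all names are hypothetical, but it shows how a tester with white-box access can target an internal branch directly, while a black-box tester can only probe inputs and outputs.

```python
# Hypothetical function with an internal branch at weight_kg == 10.
def shipping_cost(weight_kg: float) -> float:
    """Flat rate up to 10 kg, per-kg surcharge above that."""
    if weight_kg <= 10:  # branch visible only to a white-box tester
        return 5.0
    return 5.0 + (weight_kg - 10) * 0.8

# A black-box test probes behaviour without seeing the source,
# and can easily miss the boundary entirely:
assert shipping_cost(2) == 5.0

# A white-box test, written with knowledge of the branch,
# deliberately exercises both sides of the boundary:
assert shipping_cost(10) == 5.0   # exactly on the branch boundary
assert shipping_cost(11) == 5.8   # just past it: 5.0 + 1 * 0.8
```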
White box (software engineering)
Engineering
193
2,589,050
https://en.wikipedia.org/wiki/Roleplay%20simulation
Roleplay simulation is an experiential learning method in which either amateur or professional roleplayers (also called interactors) improvise with learners as part of a simulated scenario. Roleplay is designed primarily to build first-person experience in a safe and supportive environment. Roleplay is widely acknowledged as a powerful technique across multiple avenues of training and education. History Howard Barrows invented the model for medical patient role-playing in 1963 at the University of Southern California. This program allowed doctors to practice taking medical histories and conducting physical examinations by participating in a one-on-one scenario with a role-player. The role-players (called Standardized Patients or SPs) were also trained to provide performance evaluations after the fiction of the scenario was complete. Barrows continued to evolve this model, eventually bringing it to other physicians in the 1970s, and into the academic world in the 1980s. Today, many hospitals and medical universities have their own standardized patient programs that employ part-time role-players trained to specific standards of interaction. The Association of Standardized Patient Educators has members from six different continents. An industry of professional skills training emerged in the late 1990s, primarily in the United Kingdom. Companies began hiring acting professionals to create situational dramas to be overcome by learners as part of an experiential learning methodology. Today, there are more than twenty companies in the UK that specialize in providing role-players for workplace simulations. Professional military role-players have been employed by the US Military since 2001, primarily as a response to the September 11, 2001 attacks in the United States. Preparation requirements for the resulting War in Afghanistan created a need for cultural role-players skilled in languages and customs of current theaters of war to populate simulated villages and urban environments. Use in experiential learning Medical training Medical role-players typically fall under the category of Standardized Patients (SP). SPs are extensively used in medical and nursing education to allow students to practice and improve their clinical and conversational skills for an actual patient encounter. SPs commonly provide feedback after such encounters. They are also useful for training students in professional conduct in potentially embarrassing situations such as pelvic or breast exams. SPs are also used extensively in testing the clinical skills of students, usually as a part of an objective structured clinical examination. Typically, the SP will use a checklist to record the details of the encounter. Role-players can engage with medical learners in one of two ways: As part of a simulation wherein both learner and role-player are aware of the fiction, and have established rules and boundaries (i.e. the "fiction contract"). Surreptitiously, for purposes of healthcare provider evaluation or health informatics research. Medical role-players can also be used to portray distraught or bereaved family members of patients in emergency medicine scenarios, giving the learners practice in handling emotionally difficult or distracting situations. Military training Role-players in military simulations can portray various types of interactive characters: Opposing force (OPFOR) Role-players are trained to accurately emulate real-life enemies in order to provide a more realistic experience for military personnel. 
To avoid the diplomatic ramifications of naming a real nation as a likely enemy, training scenarios often use fictional countries with similar military characteristics to the expected real-world foes. Civilians on the battlefield (COB) Some COB role-players are expatriates of foreign countries who have the looks, language skills, and cultural familiarity needed to accurately portray key points of interaction in a military scenario. Others are locals who may be unskilled as actors, and primarily serve to populate a particular area of operations within a military scenario so that soldiers can be challenged with problems of crowd control or situational awareness. Tactical Combat Casualty Care (TC3) Field medical training, or Tactical Combat Casualty Care training, utilizes role-players to portray wounded soldiers and civilians on the battlefield. Role-players will often scream in pain, convulse, and panic to create extreme emotional conditions under which battle medics must operate proficiently. It is not uncommon for TC3 scenarios to employ amputees as role-players. These role-players are fitted with realistic prosthetic wounds that can gush synthetic blood or other bodily fluids in order to heighten the emotional intensity of a simulation. It is expected that trainees who are routinely exposed to such intense situations in simulations will eventually experience a level of "stress-inoculation" that will provide life-saving advantages in real battle situations. Law Enforcement Training Role-players are often hired by law enforcement agencies to portray criminals or victims of crimes in scenarios that simulate typical law enforcement situations. These can range from a response to a domestic violence call to an "active shooter" scenario. Role-players are advantageous over video-based police simulations in that they can escalate or de-escalate a confrontational situation in response to the words, body language, and tone of voice of the trainee. This becomes key in effective use-of-force training. Role-players are also used for scenarios such as interrogation, hostage negotiation, and witness interviews. Recently, law enforcement agencies have begun to introduce the identification of human trafficking victims into their role-player curriculum. The Federal Law Enforcement Training Center at Glynco, Georgia, is the largest employer of non-military role-players in the United States. Business Leadership Training Role-players are used by businesses to equip their leadership with experience in handling interpersonal conflict, negotiations, interviews, performance reviews, customer service, workplace safety, and ethical dilemmas. A role-player may also simulate difficult and sensitive conversations such as layoffs or reports of sexual harassment. This gives leaders a chance to make mistakes in a safe environment, rather than learn from a mistake in the real world, which could lead to costly litigation. Mediation and Facilitation Training Role-playing is used to equip future practitioners with experience in using diverse skills, structures, and methods to handle various mediation and facilitation scenarios. These roleplays usually have students roleplaying both the mediator-facilitator and client sides of the interactions; however, more intense or complicated scenarios can be explored with more experienced or professional role-players. The interactions are usually scaffolded, with various key features of the participants and situation defined, but much of the roleplay improvised. 
The practice of roleplay in this context promotes several important factors beyond basic skill-building. It fosters the capacity for multiperspectival thinking. It helps mediators and facilitators cultivate empathy and compassion for their clients; this cultivation can be critical for achieving better outcomes. Forecasting Role-play also has applications in forecasting. One forecasting method is to simulate the condition(s) being studied. Some experts in forecasting have found that role-thinking produces less accurate forecasts than having groups act as protagonists in their interactions with one another. Learning advantages The use of skilled role-players in a simulation has several benefits over using unskilled confederates: When untrained fellow learners are asked to serve as role-players in a simulation, the resulting learning experience tends to be ineffective due to embarrassment, intimidation, or unrealistic performances. Skilled role-players also help ensure the conditions for an effective simulation are in place. These conditions include maintaining a safe environment, and dynamically adjusting difficulty, complexity, and intensity to the capabilities and experience level of the learner. Since role-players improvise each interaction, predictability is taken out of the simulation. Predictable scenarios limit the development of decision-making skills. Role-player providers can typically offer broader coverage of demographic representation than is possible by using in-house staff to portray characters. Limitations Role-players can be expensive for organizations with limited training resources. Role-player fees are typically contingent upon skill and level of specialized knowledge, and can range from minimum wage to more than US$100 per hour. Certain types of training that require objectively quantifiable measures of success can be incompatible with the variability of using a human-in-the-loop. In fiction The Diamond Age (novel) Interactors feature prominently in Neal Stephenson's novel, The Diamond Age: Or, A Young Lady's Illustrated Primer. Set in the near future, the novel depicts artificial intelligence as having failed in its goal of creating software capable of passing the Turing Test; it is therefore renamed "pseudo-intelligence". As a result, virtual reality entertainments are augmented by role-players skilled in the use of digital puppetry who don motion capture suits and perform as interactive avatars within virtual environments. These human-in-the-loop simulations are known as "ractives" (an abbreviation of "interactives"), and the performers who drive them are called "ractors" (an abbreviation of "interactors"). The Game (film) Ubiquitous and surreptitious role-players are the primary plot drivers of the 1997 film The Game, directed by David Fincher. The protagonist agrees to participate in a vaguely defined game hosted by a high-end entertainment company called Consumer Recreation Services. He later ends up being manipulated by dozens of CRS role-players who psychologically torment him to the brink of suicide. The Magus (novel) In John Fowles' novel The Magus, an eccentric and wealthy recluse uses surreptitious role-players to manipulate the protagonist. The novel never fully clarifies which characters are "real" and which are being portrayed by actors. Additionally, there are role-players who engage with the protagonist as multiple different characters. 
He eventually loses the ability to distinguish artifice from reality, and realizes that he has become a fictionalized version of himself in the simulation of his own life. See also Business game Game (simulation) Hyperdrama Interactive theater Military simulation Presentational acting Serious game Simulation References Social learning theory Role-playing Simulation video games
Roleplay simulation
Biology
2,040
59,161,710
https://en.wikipedia.org/wiki/BIM%20Collaboration%20Format
The BIM Collaboration Format (BCF) is a structured file format suited to issue tracking with a building information model. The BCF is designed primarily for defining views of a building model and associated information on collisions and errors connected with specific objects in the view. The BCF allows users of different BIM software and/or different disciplines to collaborate on issues with the project. The use of the BCF to coordinate changes to a BIM is an important aspect of OpenBIM. The format was developed by Tekla and Solibri and later adopted as a standard by buildingSMART. Most major BIM modelling software platforms support some integration with BCF, typically through plug-ins provided by the BCF server vendor. Although the BCF was originally conceived as a file-based format, there are now many implementations using the cloud-based collaborative workflow described in the BCF API, including an open-source implementation from the Open Source BIM collective. Research work has been done in Denmark looking into using the BCF for a broader range of information management and exchange in the architecture, engineering and construction (AEC) sector. Supporting software There are two main categories of support for the BCF: authoring software and coordination software. Authoring software can generate and share BCF issues. Coordination software specializes in coordinating issues and presenting a user interface for the management and tracking of issues. Coordination software is typically a web-based service which allows for real-time coordination across multiple authoring software platforms and geographies. Most BIM software has a mix of these functions. The BCF is supported natively by authoring software such as Vectorworks, ArchiCAD, Tekla Structures, Quadri, DDS CAD, BIMcollab Zoom, BIMsight, Solibri, Navisworks, and Simplebim. Standalone BCF plugins include BCF Manager and BCFier. Coordination software as cloud services offering BCF-based issue tracking includes BIMcollab, Newforma Konekt, Vrex, Catenda's Bimsync, Bricks app, ACCA software's usBIM.platform, and OpenProject. See also Industry Foundation Classes aecXML BuildingSMART References External links buildingSMART standards library Building information modeling Data modeling Building engineering
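As a rough illustration of the file-based workflow described above, the sketch below lists issue topics from a BCF 2.x-style archive. It assumes the published layout (a zip containing one folder per topic GUID, each holding a markup.bcf XML file whose Topic element carries a Title); treat the element names and the helper itself as illustrative rather than as any vendor's implementation.

```python
# Minimal sketch: enumerate topics in a BCF 2.x-style zip archive.
# Assumes markup.bcf files whose root <Markup> element contains a
# <Topic Guid="..."> child with a <Title> element, per the BCF schema.
import zipfile
import xml.etree.ElementTree as ET

def list_bcf_topics(path: str) -> list[tuple[str, str]]:
    topics = []
    with zipfile.ZipFile(path) as bcf:
        for name in bcf.namelist():
            if name.endswith("markup.bcf"):
                root = ET.fromstring(bcf.read(name))
                topic = root.find("Topic")
                if topic is not None:
                    guid = topic.get("Guid", "")
                    title = topic.findtext("Title", default="")
                    topics.append((guid, title))
    return topics

# Usage (hypothetical file name):
# for guid, title in list_bcf_topics("issues.bcf"):
#     print(guid, title)
```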
BIM Collaboration Format
Engineering
468
72,358,762
https://en.wikipedia.org/wiki/Adherent%20culture
Adherent cell cultures are a type of cell culture that requires cells to be attached to a surface in order for growth to occur. Most vertebrate-derived cells (with the exception of hematopoietic cells) can be cultured this way and require a two-dimensional surface to facilitate cell adhesion and spreading. Cell samples can be taken from tissue explants or cell suspension cultures. Adherent cell cultures with an excess of nutrient-containing growth medium will continue to grow until they cover the available surface area. Proteases like trypsin are most commonly used to break the adhesion of the cells to the flask. Alternatively, cell scrapers can be used to mechanically break the adhesion if introducing proteases could damage the cell cultures. Unlike suspension cultures, the other main type of cell culture, adherent cultures require regular passaging performed using mechanical or enzymatic dissociation. The culture can be visualized using an inverted microscope; however, the growth of adherent cultures is dependent on the available surface area. For this reason, adherent cell cultures are not commonly used to obtain a high yield of cells; instead, the use of suspension cultures is preferred. Methods and Maintenance Isolating Cells Primary cells used for adherent cultures must be isolated from a subject and treated, or may be transferred from pre-existing cell lines. Adherent cells must first be transferred to a monolayer attached to a surface, and are categorized by their morphological differences. Fibroblast-like adherent cells have a linear and stretched shape, and migrate when attached to the monolayer. Epithelial-like adherent cells have a wider and polygonal shape, and do not migrate when attached to the monolayer. Once cells are properly isolated from their source and are transferred to the media, cell passaging can be conducted. Adherent Cell Culture Maintenance for Laboratories While passaging adherent cell cultures, spent media must be repeatedly pipetted out and replaced with fresh media. The culture vessel can also be repeatedly tapped, which should be combined with either mechanical or enzymatic methods to facilitate cell detachment. The culture vessel can also be centrifuged, forming a supernatant that can be extracted using a pipette. Cells must be fed 2 to 3 times per week, and must be cultured at an appropriate temperature, humidity, light, and pH in order to ensure optimal cell proliferation. Passaging (subculturing) Cells While adherent cultures share similarities with suspension cultures, there are many key differences in how they are cultured and passaged. For adherent culture passaging, the spent media is first pipetted out of the flask containing the cells as a waste product. The cells remain adhered to the surface of the culture vessel, and a series of wash and incubation steps are then necessary to detach them. For the wash steps, a balanced salt solution is poured to the side opposite the cell culture, and the culture vessel is then shaken before draining the balanced salt solution. Heat is applied to the culture vessel for the incubation steps, causing protein denaturation and the gradual separation of the cells from the vessel. Similarly to suspension cultures, the total number of cells can be calculated using a hemocytometer and trypan blue. Commercial Applications and Limitations Adherent cultures are most commonly used for cytology and for harvesting cellular products on a small scale. 
Since their growth is limited to 2D, it is difficult to use adherent cultures to study in-vivo cell structure and function. Research is being done to grow adherent cell cultures using 3D microcarriers in order to avoid this limitation and to use adherent cell cultures for drug testing. Commercial applications of adherent cultures include: Producing adherent cells that create proteins of interest used for vaccine development. Adherent cells used in conjunction with viral vectors for cell and gene therapy. Delivering micro and nanotechnology to adherent cells in vitro. Adjusting adherent cell morphology for cancer cell screening. References Cell culture
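The hemocytometer counting mentioned under maintenance reduces to a standard formula: concentration = (cells counted / squares counted) × dilution factor × 10^4 cells/mL, where the 10^4 factor converts the 0.1 µL volume above one large counting square to a millilitre. The sketch below works through that arithmetic; the formula is standard laboratory practice rather than something specified in this article, and the example numbers are invented.

```python
# Standard hemocytometer calculation (not specific to this article).
# The 1e4 factor comes from the volume over one large counting square:
# 0.1 mm depth x 1 mm^2 area = 0.1 microliters = 1e-4 mL.
def cells_per_ml(total_counted: int, squares_counted: int,
                 dilution_factor: float = 1.0) -> float:
    return (total_counted / squares_counted) * dilution_factor * 1e4

# Hypothetical count: 220 live (trypan-blue-excluding) cells over
# 4 large squares, sample diluted 1:2 before loading.
print(cells_per_ml(220, 4, dilution_factor=2))  # 1,100,000 cells/mL
```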
Adherent culture
Biology
814
66,823,220
https://en.wikipedia.org/wiki/Timeline%20of%20Mars%202020
The Mars 2020 mission, consisting of the rover Perseverance and helicopter Ingenuity, was launched on July 30, 2020, and landed in Jezero crater on Mars on February 18, 2021. Perseverance remains in operation on the planet. Ingenuity operated until its rotor blades, possibly all four, were damaged during the landing of flight 72 on January 18, 2024, causing NASA to retire the craft. Current weather data on Mars is being monitored by the Curiosity rover and had previously been monitored by the InSight lander. The Perseverance rover is also collecting weather data. (See the External links section) Overview of mission Prelaunch (2012–2020) The Mars 2020 mission was announced by NASA on December 4, 2012. In 2017, three sites (Jezero crater, Northeastern Syrtis Major Planum, and Columbia Hills) were chosen as potential landing locations; Jezero crater was ultimately selected, and the mission launched on July 30, 2020, from Cape Canaveral. Landing and initial tests (February–May 2021) After arriving on February 18, Perseverance focused on validating its systems. During this phase, it used its science instruments for the first time, generated oxygen on Mars with MOXIE, and deployed Ingenuity. Ingenuity began the technology demonstration phase of its mission, completing five flights before transitioning to the operations demonstration phase of its mission. Cratered floor campaign (June 2021-April 2022) The Cratered Floor Campaign was the first science campaign. It began on June 1, 2021, with the goal of exploring the Crater Floor Fractured Rough and Séítah geologic units. To avoid the sand dunes of the Séítah unit, Perseverance mostly traveled within the Crater Floor Fractured Rough geologic unit or along the boundary between the two units. The first nine of Perseverance's sample tubes were to be filled during this expedition, including the first three 'witness tubes'. After collecting the samples, Perseverance returned to its landing site, before continuing to the delta for its second science campaign. Some of the sample tubes filled during this campaign were later stored in a designated area for the upcoming NASA-ESA Mars Sample Return mission, during the Delta Front Campaign. While Perseverance embarked on its first science campaign, Ingenuity continued to travel alongside the rover as part of its operations demonstration campaign. Ingenuity's sixth through twenty-fifth flights were completed during this phase, achieving an at-the-time speed record of 5.5 meters per second. Delta front campaign (April 2022 - January 2023) The Delta Front Campaign was the second science campaign of the Mars 2020 mission. The campaign began with Ingenuity continuing to travel alongside the rover as part of its operations demonstration campaign, and Perseverance leaving the rapid traverse mode it had entered at the end of the previous campaign to rapidly reach the delta. During the campaign, Perseverance would take a further nine samples, in addition to two further witness tubes. Ingenuity would make its 41st flight during this campaign. An incident occurred in which Ingenuity was unable to sufficiently charge during the night, leading to a change in how Ingenuity manages its heaters. The MOXIE experiment continued to run, generating a record amount of oxygen per hour on Mars. The campaign concluded with Perseverance reaching the top of the delta and the completion of its first sample depot. 
Upper fan campaign (January 2023 - September 2023) The Upper Fan Campaign, also called the Delta Top Campaign, was the third science campaign of the Mars 2020 mission. Whereas prior campaigns investigated areas that are believed to have been submerged in an ancient lake, this campaign investigated one of the riverbeds that used to feed into the lake. The MOXIE experiment completed its 16th, and final, oxygen generation test during this campaign. Ingenuity completed its 54th flight during this campaign. The helicopter experienced an anomaly that caused it to land outside the range of the rover, but this was ultimately resolved when the rover moved into a position that allowed contact to be restored. The campaign ended with Perseverance reaching the margin carbonate geologic unit, after having taken three further rock samples (and 21 overall). Margin campaign (September 2023 - August 2024) The Margin Campaign was the fourth science campaign of the Mars 2020 mission. The campaign was expected to last around 8 months, although it lasted closer to a year, after which point Perseverance began the Crater Rim Campaign. The campaign gets its name from the geological unit it aims to explore, the margin carbonate unit. Rocks in this unit are capable of containing traces of life, and their formation is tied to the presence of liquid water. During the campaign, Ingenuity achieved several records, including a maximum altitude of 24 meters (flight 61) and a maximum groundspeed of 10 meters per second (flight 62). Due to a failure on the 72nd flight, however, the helicopter's blades became too damaged to fly. On January 25, 2024, NASA declared the end of Ingenuity's mission; the helicopter's final resting place was named Valinor Hills Station, after a location in the Lord of the Rings franchise. Despite the loss of Ingenuity's blades, the core of the helicopter remained intact; it will continue to monitor atmospheric conditions for as long as it is able. Perseverance took four further rock samples during this campaign (25 overall). The campaign overlapped with solar conjunction, interfering with the ability to communicate with the rover from Earth. Engineers from NASA’s Jet Propulsion Laboratory in Southern California and AeroVironment completed a detailed assessment of the Ingenuity Mars Helicopter’s final flight on January 18, 2024 (the first investigation of its kind on an extraterrestrial planet), concluding that the inability of Ingenuity’s navigation system to provide accurate data during the flight likely caused a chain of events that ended the mission. The helicopter’s vision navigation system was designed to track visual features on the surface using a downward-looking camera over well-textured (pebbly) but flat terrain. This limited tracking capability was more than sufficient for carrying out Ingenuity’s first five flights, but by Flight 72 the helicopter was in a region of Jezero Crater filled with steep, relatively featureless sand ripples. One of the navigation system’s main requirements was to provide velocity estimates that would enable the helicopter to land within a small envelope of vertical and horizontal velocities. Data sent down during Flight 72 shows that, around 20 seconds after takeoff, the navigation system couldn’t find enough surface features to track. Photographs taken after the flight indicate the navigation errors created high horizontal velocities at touchdown. In the most likely scenario, the hard impact on the sand ripple’s slope caused Ingenuity to pitch and roll. 
The rapid attitude change resulted in loads on the fast-rotating rotor blades beyond their design limits, snapping all four of them off at their weakest point — about a third of the way from the tip. The damaged blades caused excessive vibration in the rotor system, ripping the remainder of one blade from its root and generating an excessive power demand that resulted in loss of communications. Crater rim campaign (August 2024 - present) The Crater Rim Campaign is the fifth, currently ongoing science campaign, and the first new science campaign since the loss of the Ingenuity helicopter. It is expected to last until the end of 2024, and will include a total elevation change of over 1000 feet (~300 meters). The main focuses of the campaign are expected to be the regions "Pico Turquino" and "Witch Hazel Hill". It is expected to encounter rocks as old as 4 billion years. Samples cached for the Mars sample-return mission As part of the NASA-ESA Mars Sample Return campaign, soil samples along with some Martian gas samples from the atmosphere will be cached. Currently, samples are being cached by the Mars 2020 Perseverance rover on the surface of Mars. Out of 43 sample tubes, 8 are igneous rock sample tubes, 12 are sedimentary rock sample tubes, 1 is a silica-cemented carbonate rock sample tube, 1 is a gas sample tube, 2 are regolith sample tubes, and 3 are "witness tubes", with 16 tubes remaining unused as of August 2024. Before launch, 5 of the 43 tubes were designated "witness tubes" and filled with materials that would capture particulates in the ambient environment of Mars. See also Astrobiology Composition of Mars Curiosity rover Exploration of Mars Geography of Mars Geology of Mars InSight lander List of missions to Mars List of rocks on Mars Mars Exploration Rover Mars Express orbiter Mars Odyssey Orbiter Mars Orbiter Mission Mars Pathfinder (Sojourner rover) Mars Reconnaissance Orbiter Mars 2020 rover mission MAVEN orbiter Moons of Mars Phoenix lander Robotic spacecraft Scientific information from the Mars Exploration Rover mission Space exploration Timeline of Mars Science Laboratory U.S. Space Exploration History on U.S. Stamps Viking program Water on Mars Notes References External links Current Weather Report on Mars by the Perseverance rover – MEDA (MarsWxReport/temp; 1st Report/6apr2021; NASA-1; NASA-2) Mars Weather: Perseverance*Curiosity*InSight Current Weather Report on Mars by the Curiosity rover Current Weather Report on Mars by the InSight lander Perseverance rover: Official website Mars 2020: Official website Mars 2020: Location Maps (related site; 2GB PNG-image) Video (60:00) – Minerals and the Origins of Life – (Robert Hazen; NASA; April 2014) Video (86:49) – Search for Life in the Universe – (NASA; July 2014) Video (13:33) – Mars Perseverance rover/Ingenuity helicopter report (9 May 2021; CBS-TV, 60 Minutes) Video (03:04) − Exploring Jezero Crater − (NASA; December 2021) Astrobiology Exploration of Mars Mars 2020 Articles containing video clips 2021 on Mars Mars 2020
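The sample-tube accounting quoted above can be checked with a quick tally; the figures come straight from the text, and the code only verifies that they sum to the stated total of 43.

```python
# Tube counts as of August 2024, taken directly from the text above.
tubes = {
    "igneous rock": 8,
    "sedimentary rock": 12,
    "silica-cemented carbonate rock": 1,
    "atmospheric gas": 1,
    "regolith": 2,
    "witness": 3,
    "unused": 16,
}
assert sum(tubes.values()) == 43  # matches the stated total of 43 tubes
```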
Timeline of Mars 2020
Astronomy,Biology
2,076
24,202,448
https://en.wikipedia.org/wiki/C12H18O2
The molecular formula C12H18O2 (molar mass: 194.27 g/mol, exact mass: 194.1307 u) may refer to: 2,5-Dimethoxy-p-cymene, or thymohydroquinone dimethyl ether Hexylresorcinol Sedanolide
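The stated molar mass can be verified from standard IUPAC atomic weights, which are general chemistry facts rather than figures taken from this entry:

```python
# Molar mass of C12H18O2 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 12, "H": 18, "O": 2}
molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(round(molar_mass, 2))  # 194.27 g/mol, matching the figure above
```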
C12H18O2
Chemistry
87
72,582,556
https://en.wikipedia.org/wiki/Azurite%20%28pigment%29
Azurite is an inorganic pigment derived from the mineral of the same name. It was likely used by artists as early as the Fourth Dynasty in Egypt, but it was less frequently employed than synthetically produced copper pigments such as Egyptian Blue. In the Middle Ages and Renaissance, it was the most prevalent blue pigment in European paintings, appearing more commonly than the more expensive ultramarine. Azurite's derivation from copper mines tends to give it a greenish hue, in contrast with the more violet tone of ultramarine. Azurite is also less stable than ultramarine, and notable paintings such as Michelangelo's The Entombment have seen their azure blues turn to olive green over time. Azurite pigment typically includes traces of malachite and cuprite; both minerals are found alongside azurite in nature, and they may account for some of the green discoloration of the pigment. The particle size of azurite pigment has been shown to have a significant effect on its chromatic intensity, and the manner of grinding and preparing the pigment therefore has a major impact on its appearance. History Azurite is a naturally occurring mineral found particularly in copper-mining areas of the world. It is often found with malachite, a green basic carbonate of copper. There is evidence that azurite has been used since the dawn of modern civilization, dating back to the Fourth Dynasty in Egypt. For much of its history, azurite was used more frequently than ultramarine, despite ultramarine being held in higher esteem. During the Middle Ages and Renaissance, the two were closely related as azurite would often be used as an under-paint for ultramarine, possibly to lower costs as ultramarine was the more expensive pigment of the two. Hungary was the main supplier of European azurite until the mid-17th century, when it was invaded by the Ottoman Empire, but now Hungary is again the most popular source of the pigment. Azurite was frequently used in East Asia, but is less commonly found in Pre-Columbian indigenous and later Spanish Mission Church paintings. With the invention of Prussian blue in 1704, azurite was displaced as the most commonly used blue pigment in European paintings, but it remained popular, possibly because of its simple preparation. Blue verditer, which is chemically similar to azurite but synthetically produced, was commonly used to paint houses in the 17th century. Chemical composition Azurite is a basic copper carbonate, with the formula Cu3(CO3)2(OH)2. Azurite was popular due to its stability in various light and atmospheric conditions, making it easy to store. Although azurite is permanent in oil and tempera paint, it is darkened when exposed to sulfur; this can be seen in mural paintings that use azurite. Azurite turns green as it degrades into malachite and other products. Azurite is relatively easy to identify in conservation studies because of its characteristic ability to form copper-coordinated compounds, its solubility in acidic solution, and its birefringence interactions with light. It can be identified using various spectroscopic methods such as X-ray diffraction, emission spectroscopy, IR spectroscopy, and Raman spectroscopy. Conservation Due to its association with copper and malachite, a green pigment, the hue of azurite can shift to a greenish blue over time. 
Conservation studies of a 14th–15th century wall painting of San Antonio Abate in the church of San Pietro near Florence, Italy, revealed that the azurite degradation product, once thought to be malachite, is actually paratacamite. Paratacamite and atacamite are two different phases of a basic copper chloride that are both formed through the degradation of azurite; they can be distinguished using FTIR techniques. There is controversy over how to restore azurite degradation because the typical technique of applying ammonium carbonate and barium hydroxide does produce a dark blue hue, but it is not azurite. Rather, the dark blue compound is produced due to the action of barium hydroxide, and not ammonium carbonate, although both are present in the typical conservation technique used to restore azurite. Moreover, the blue color is not stable; the San Antonio Abate church wall painting changed color two years after its restoration. Grinding A finer grind makes azurite appear paler whereas a coarser grind deepens its color. For use in early modern paintings, azurite was ground by hand. Artists employed special techniques, which required training, to grind the pigment in order to achieve different intensities. Azurite grinding therefore varied across workshops. Different grinding styles are characterized by both the pigment-medium ratio and the particle size distribution of the pigment. Azurite particles are irregular in size and often contain impurities such as malachite and cuprite due to the mineral's close association with these compounds. The pigment-to-volume concentration of azurite is difficult to study because azurite was often mixed with varying amounts of lead white, especially in early Netherlandish paintings. Association with ultramarine Azurite was often used with ultramarine as a cost-saving measure. The two can be distinguished by contrasting the blue-green degradation of azurite with the blue-violet degradation of ultramarine. Ultramarine is often more finely ground than azurite; because azurite is a strong pigment if left coarsely ground, artists took care not to grind it too finely. In the painting The Mystic Lamb, ultramarine and azurite were used in nearly the same areas and in similar particle size distributions. Both pigments are finely ground. However, The Mystic Lamb alone should not be used to generalize the style of azurite in early Netherlandish paintings. In a different early Netherlandish painting from the workshop of Dieric Bouts, azurite and ultramarine are used together, but azurite is more coarsely ground. In paintings Azurite was frequently used in European Renaissance painting. It appears, for example, in the dark blue sky of a Spanish altarpiece painting by Bartolome Bermejo. In this painting, azurite is also combined with lead white to paint the green robe of the Saint. During this time, azurite was a common pigment used to paint a blue sky. In the 1520 painting titled Christ Taking Leave of His Mother by Albrecht Altdorfer, azurite is used in the blue garments of the figures. In addition, azurite is mixed with lead white to paint the sky. Azurite has been used to produce greens for foliage and landscapes and mixed with red pigments to produce violet. References Inorganic pigments Pigments Shades of blue
Azurite (pigment)
Chemistry
1,362
600,373
https://en.wikipedia.org/wiki/Gradient%20conjecture
In mathematics, the gradient conjecture, due to René Thom (1989), was proved in 2000 by three Polish mathematicians, Krzysztof Kurdyka (University of Savoie, France), Tadeusz Mostowski (Warsaw University, Poland) and Adam Parusiński (University of Angers, France). The conjecture states that given a real-valued analytic function f defined on R^n and a trajectory x(t) of the gradient vector field of f having a limit point x0 ∈ R^n, where f has an isolated critical point at x0, there exists a limit (in the projective space PR^(n-1)) for the secant lines from x(t) to x0, as t tends to zero. The proof depends on a theorem due to Stanisław Łojasiewicz. References R. Thom (1989) "Problèmes rencontrés dans mon parcours mathématique: un bilan", Publications Mathématiques de l'IHÉS 70: 200 to 214. (This gradient conjecture due to René Thom was in fact well-known among specialists by the early 70's, having been often discussed during that period by Thom during his weekly seminar on singularities at the IHES.) In 2000 the conjecture was proven correct in Annals of Mathematics 152: 763 to 792. Theorems in analysis
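For readers who prefer symbols, the statement can be put compactly; the following LaTeX block is a transcription of the article's own wording (including its convention that t tends to zero), not new mathematics:

```latex
% Gradient conjecture (Thom), as stated above.
% f : R^n -> R real-analytic, x_0 an isolated critical point of f,
% x(t) a trajectory of the gradient field with limit point x_0.
\[
  \dot{x}(t) = \nabla f\bigl(x(t)\bigr), \qquad \lim_{t \to 0} x(t) = x_0 .
\]
% Claim: the secant directions from x(t) to x_0 converge, i.e.
\[
  \lim_{t \to 0} \frac{x(t) - x_0}{\lVert x(t) - x_0 \rVert}
  \quad \text{exists as a point of } \mathbb{P}^{\,n-1}(\mathbb{R}).
\]
```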
Gradient conjecture
Mathematics
304
3,037,149
https://en.wikipedia.org/wiki/Nanocar
The nanocar is a molecule designed in 2005 at Rice University by a group headed by Professor James Tour. Despite the name, the original nanocar does not contain a molecular motor, hence it is not really a car. Rather, it was designed to answer the question of how fullerenes move about on metal surfaces; specifically, whether they roll or slide (they roll). The molecule consists of an H-shaped 'chassis' with fullerene groups attached at the four corners to act as wheels. When dispersed on a gold surface, the molecules attach themselves to the surface via their fullerene groups and are detected via scanning tunneling microscopy. One can deduce their orientation as the frame length is a little shorter than its width. Upon heating the surface to 200 °C the molecules move forward and back as they roll on their fullerene "wheels". The nanocar is able to roll about because the fullerene wheel is fitted to the alkyne "axle" through a carbon-carbon single bond. The hydrogen on the neighboring carbon is no great obstacle to free rotation. When the temperature is high enough, the four carbon-carbon bonds rotate and the car rolls about. Occasionally the direction of movement changes as the molecule pivots. The rolling action was confirmed by Professor Kevin Kelly, also at Rice, by pulling the molecule with the tip of the STM. Independent early conceptual contribution The concept of a nanocar built out of molecular "tinkertoys" was first hypothesized by M.T. Michalewicz at the Fifth Foresight Conference on Molecular Nanotechnology (November 1997). Subsequently, an expanded version was published in Annals of Improbable Research. These papers were supposed to be a not-so-serious contribution to a fundamental debate on the limits of bottom-up Drexlerian nanotechnology and the conceptual limits of how far mechanistic analogies advanced by Eric Drexler could be carried out. The important feature of this nanocar concept was the fact that all molecular component tinkertoys were known and synthesized molecules (alas, some very exotic and only recently discovered, e.g. staffanes, and notably the ferric wheel, 1995), in contrast to some Drexlerian diamondoid structures that were only postulated and never synthesized; and the drive system that was embedded in a ferric wheel and driven by an inhomogeneous or time-dependent magnetic field of a substrate – an "engine in a wheel" concept. Nanodragster The Nanodragster, dubbed the world's smallest hot rod, is a molecular nanocar. The design improves on previous nanocar designs and is a step towards creating molecular machines. The name comes from the nanocar's resemblance to a dragster, as its staggered wheel fitment has a shorter axle with smaller wheels in the front and a larger axle with larger wheels in the back. The nanocar was developed at Rice University's Richard E. Smalley Institute for Nanoscale Science and Technology by the team of James Tour, Kevin Kelly and other colleagues involved in its research. The previous nanocar developed was 3 to 4 nanometers long, a little over the width of a strand of DNA and around 20,000 times thinner than a human hair. These nanocars were built with carbon buckyballs as their four wheels, and the surface on which they were placed required a temperature of about 200 °C to get them moving. On the other hand, a nanocar which utilized p-carborane wheels moves as if sliding on ice, rather than rolling. Such observations led to the production of nanocars which had both wheel designs. 
The nanodragster is 50,000 times thinner than a human hair and has a top speed of 0.014 millimeters per hour (0.0006 in/h or 3.89×10−9 m/s). The rear wheels are spherical fullerene molecules, or buckyballs, composed of sixty carbon atoms each, which are attracted to a dragstrip that is made up of a very fine layer of gold. This design also enabled Tour’s team to operate the device at lower temperatures. The nanodragster and other nano-machines are designed for use in transporting items. The technology can be used in manufacturing computer circuits and electronic components, or in conjunction with pharmaceuticals inside the human body. Tour also speculated that the knowledge gained from the nanocar research would help build efficient catalytic systems in the future. Electrically driven directional motion of a four-wheel molecule on a metal surface Kudernac et al. described a specially designed molecule that has four motorized "wheels". By depositing the molecule on a copper surface and providing it with sufficient energy from electrons of a scanning tunnelling microscope, they were able to drive some of the molecules in a specific direction, much like a car; this was the first single molecule capable of continuing to move in the same direction across a surface. Inelastic electron tunnelling induces conformational changes in the rotors and propels the molecule across a copper surface. By changing the direction of the rotary motion of individual motor units, the self-propelling molecular 'four-wheeler' structure can follow random or preferentially linear trajectories. This design provides a starting point for the exploration of more sophisticated molecular mechanical systems, perhaps with complete control over their direction of motion. This electrically driven nanocar was built under the supervision of University of Groningen chemist Bernard L. Feringa, who was awarded the Nobel Prize for Chemistry in 2016 for his pioneering work on nanomotors, together with Jean-Pierre Sauvage and J. Fraser Stoddart. Motor nanocar A nanocar with a synthetic molecular motor has been developed by Jean-Francois Morin et al. It is fitted with carborane wheels and a light-powered helicene synthetic molecular motor. Although the motor moiety displayed unidirectional rotation in solution, light-driven motion on a surface has yet to be observed. Mobility in water and other liquids may also be realized by a molecular propeller in the future. See also NanoPutian Nanocar race References Molecular machines Nanotechnology
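The nanodragster's quoted top speed is internally consistent, as a quick unit conversion shows; this is pure arithmetic on the figures above, with nothing new assumed:

```python
# Convert the stated 0.014 mm/h into the other quoted units.
mm_per_hour = 0.014
m_per_s = mm_per_hour * 1e-3 / 3600    # mm -> m, hours -> seconds
in_per_hour = mm_per_hour / 25.4       # 25.4 mm per inch
print(f"{m_per_s:.2e} m/s")      # ~3.89e-09 m/s, as stated
print(f"{in_per_hour:.4f} in/h") # ~0.0006 in/h, as stated
```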
Nanocar
Physics,Chemistry,Materials_science,Technology,Engineering
1,263
11,421,894
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD103
In molecular biology, snoRNA U103 (also known as SNORD103) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA U103 belongs to the C/D box class of snoRNAs, which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. U103 was identified by computational screening of the introns of ribosomal protein genes for conserved C/D box sequence motifs, and its expression was experimentally verified by northern blotting. U103 is predicted to guide the 2'-O-ribose methylation of 18S ribosomal RNA (rRNA) residue G601. In both the human and mouse genomes there are two U103 gene copies (called U103A or SNORD103A and U103B or SNORD103B) located within introns 17 and 21 of the PUM1 gene. References External links Small nuclear RNA
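The computational screening described above amounts, at its simplest, to scanning sequences for the conserved motifs. The toy example below searches for a C box upstream of a D box; the sequence is invented for demonstration, and real snoRNA discovery pipelines additionally score flanking structure and guide complementarity rather than relying on motifs alone.

```python
# Illustrative C/D box motif scan; not a full snoRNA detection pipeline.
import re

def find_cd_box_pairs(rna: str):
    """Yield (c_box_start, d_box_start) index pairs, C box before D box."""
    c_positions = [m.start() for m in re.finditer("UGAUGA", rna)]
    d_positions = [m.start() for m in re.finditer("CUGA", rna)]
    for c in c_positions:
        for d in d_positions:
            if d > c + 6:  # D box must lie downstream of the C box
                yield (c, d)

toy = "GGUGAUGAACCGUAAGGAACUUCGACUGAUU"  # made-up sequence
print(list(find_cd_box_pairs(toy)))      # [(2, 25)]
```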
Small nucleolar RNA SNORD103
Chemistry
308
11,378,941
https://en.wikipedia.org/wiki/Articulated%20hauler
An articulated hauler, articulated dump truck (ADT), or sometimes a dump hauler, is a very large heavy-duty type of dump truck used to transport loads over rough terrain, and occasionally on public roads. The vehicle usually has all-wheel drive and consists of two basic units: the front section, generally called the tractor, and the rear section that contains the dump body, called the hauler or trailer section. Steering is achieved by pivoting the front section in relation to the rear by means of hydraulic rams. This way, all wheels follow the same path, making it an excellent off-road vehicle. Manufacturers include Volvo CE, Caterpillar, Terex, John Deere/Bell Equipment, Moxy/Doosan, Astra and Komatsu Limited. With half of global sales, Volvo is the market leader in the segment; it is also the prime pioneer of the vehicle, having introduced it to the market in 1966. Although first envisioned as a soil and aggregate transporter (dumper), the chassis has since been used for many other applications including agriculture, mining, construction and highway maintenance. These range from concrete mixers, water tankers and container trucks to oversize off-road semi-trailer haulers (on-road applications), hook loaders and cranes, as well as timber transport and woodchipper platforms. Its chassis has also been used for military purposes, given that it is surpassed only by tracked vehicles in off-road capability. An example is the Archer Artillery System. History In 1955 the Swedish tractor-trailer manufacturer Lihnells Vagn AB (Livab) started to develop a specialized dump vehicle in cooperation with Bolinder-Munktell (BM), which in 1950 had been bought by Volvo but still operated as an independent subsidiary. This was essentially a trailer with a powered axle mated to an agricultural tractor and utilizing its power take-off shaft to drive the trailer's axle. These were not articulated haulers in the modern sense, as the tractor retained its front axle to provide steering. As Livab's cooperation with BM deepened, it started experimenting with getting rid of the front axle by permanently attaching the trailer and instead providing steering through hydraulic cylinders forcing the tractor and trailer to turn in relation to each other. This was done in analogy with systems already developed for use in tandem tractors (see for example Doe Triple-D). The first purpose-built articulated hauler was the DR 631, a 4x4, released in 1966, with a larger 6x6 model, the DR 860, released in 1968. In 1974 Livab was absorbed into Volvo BM. Meanwhile, a very similar vehicle was developed by another Swedish company, Kockum Landsverk AB. Having a similar tractor-derived design, it released its first articulated 4x4 dumper truck in 1967, named KL 411, which was replaced by a similar-sized 6x6 in 1973, named KL 412. This company competed with Volvo BM until 1982, when it was bought by its bigger competitor. The early articulated haulers were rugged, lacked suspension and had manual transmissions. This made them uncomfortable, noisy and demanding to drive and contributed to operator fatigue. The lack of suspension, other than that inherent in the large tires, also put stress on the drive-train and chassis, making them unsuitable for high-speed operation and in need of frequent service. The top speed was a mere 30 km/h. Many of these concerns have been eliminated with development over the years. 
In 1970 a Norwegian company now known as Moxy developed its first 6x6 articulated dump truck, which was put into serial production two years later as the D15. It featured a bogie undercarriage. This combination later became a de facto industry standard. The driver situation was addressed with the introduction of front suspension in the Volvo BM 5350 of 1979. This model also saw the introduction of automatic transmission and, instead of a tractor-derived cab, a new purpose-designed cab. LeTourneau and Athey ADTs According to NZ Contractor magazine, one of R. G. LeTourneau's many design innovations was the ADT. The first ADT, the Tournatailer, was built in 1938 and was based on a Model A Tournapull. Then came the 1940 Model C Tournapull. LeTourneau Australia meanwhile built a scow-shaped dumper body for use behind the Australian Tournapull, and this was to become the Tournarocker. Following the war, LeTourneau set about designing tougher articulated rear dumpers which he christened "Tournarockers". In around 1950 the Athey company introduced their two-axle AT-15 Articulated Hauler. Since then there have been developments in brakes, differentials and other aspects of the drive-train to increase speed, usability and reliability. Full suspension (meaning all three live axles now are fitted with suspension) came with the 2007 Volvo CE A35E/A40E. Volvo's trucks allow each of the three solid axles to move and twist independently of the other two, which Volvo therefore calls "independent suspension" in its marketing materials, although the design still utilizes solid axles, unlike independent suspension in automobiles. Others have since followed suit. Design Features retained from its agricultural tractor heritage are the operator's location and the basic layout. The driver thus sits behind the engine and above the transmission and front drive axle. The permanently attached trailer cannot move vertically in relation to the tractor, but can rotate and swing on the horizontal plane. The operator has a conventional steering wheel that actuates hydraulic cylinders that push and pull the tractor relative to the trailer. The tractor and the trailer sections can move at great angles to each other, making for a small turning radius. The trailer axle(s) are driven by a drive shaft exiting the rear of the transmission, with splines and universal joints to accommodate the movement between them. All axles are portal axles with hub reduction and locking differentials. Initially the axle differentials were permanently locked, but recently some models can run with open but lockable differentials for better high-speed capabilities. The usual twin back axles are combined in a separate frame that can pivot in relation to the "trailer" frame, keeping all wheels on the ground, but until recently always unsprung (called a "bogie"). Likewise the less-common trucks with single rear axles usually are unsprung in the rear, while the front axle has suspension to give the operator a better ride. The reinforced cab is sometimes also sprung, as is done in modern cab-over-engine trucks, as is the driver's seat. The way the sections can twist in relation to each other and the way the vehicle steers, making the back tires follow the same path as the front tires, provide for excellent off-road capabilities in combination with all-wheel drive. 
The top speed is limited to 55–60 km/h (the ungoverned Archer Artillery System has a maximum road speed of "at least 70 km/h"; however, this would be at the expense of fuel economy and mechanical wear, and would exceed the standard speed of military convoys significantly) and the net loading capacity ranges from just below 25 to a little over 40 tonnes. Applications The front section of an articulated hauler can be adapted to many uses, including water tankers, concrete mixers, container trucks and cranes. Comparison to rigid dump trucks Articulated haulers excel in hauling material over rough terrain, e.g. swamps, bogs and marshes. They are rugged and are built to handle great inclines and slippery conditions. This is their main advantage over rigid haulers, which excel in carrying capacity. Where an articulated hauler can take no more than 55 metric tonnes, there are models of rigid haulers (haulers with conventional front steering and rear-wheel drive) that can carry up to 310 tonnes, such as the Belaz 7550. This is also seen in the way they are used. Whereas rigid haulers find their best usage at large surface mines and big quarries, where there is abundant space and hard level surfaces to drive on, articulated haulers are best used at rugged and cramped sites, such as large construction sites. The articulated hauler's relatively small size also makes it able to drive on public roads between different worksites at a large construction project—something that is impossible for the largest haul trucks, which might even have to be disassembled to be moved between different locations. For transportation between different construction projects, articulated haulers usually have to be hauled on flatbed trailers as oversize cargo due to their width and weight, as well as their limited speed. In reality, it is normal for most articulated trucks to be trailered between worksites, as there are few construction sites giving an opportunity to drive on public roads between work zones; depending on the size of the machine (chassis, wheels, etc.) and the local laws, this could also be illegal. For any distance greater than a few miles, it would also be considered uneconomical wear and tear on the hauler trucks, putting hours on them that do not contribute to the paying job. It is far more efficient and quicker to use trucks and trailers and let them do what they specialize in: over-road hauling. See also Volvo Construction Equipment Dump truck Doosan Infracore (formerly Daewoo Heavy Industries & Machinery) - including Solar brand Gama Goat Archer Artillery System References External links Information at theconstructionmachinery.com Volvo Group Dump trucks Mining equipment
Articulated hauler
Engineering
1,953
30,774,748
https://en.wikipedia.org/wiki/Moser%27s%20worm%20problem
Moser's worm problem (also known as mother worm's blanket problem) is an unsolved problem in geometry formulated by the Austrian-Canadian mathematician Leo Moser in 1966. The problem asks for the region of smallest area that can accommodate every plane curve of length 1. Here "accommodate" means that the curve may be rotated and translated to fit inside the region. In some variations of the problem, the region is restricted to be convex. Examples For example, a circular disk of radius 1/2 can accommodate any plane curve of length 1 by placing the midpoint of the curve at the center of the disk. Another possible solution has the shape of a rhombus with vertex angles of 60° and 120° and with a long diagonal of unit length. However, these are not optimal solutions; other shapes are known that solve the problem with smaller areas. Solution properties It is not completely trivial that a minimum-area cover exists. An alternative possibility would be that there is some minimal area that can be approached but not actually attained. However, there does exist a smallest convex cover. Its existence follows from the Blaschke selection theorem. It is also not trivial to determine whether a given shape forms a cover. It has been conjectured that a shape accommodates every unit-length curve if and only if it accommodates every unit-length polygonal chain with three segments, a more easily tested condition, but it has also been shown that no finite bound on the number of segments in a polychain would suffice for this test. Known bounds The problem remains open, but over a sequence of papers researchers have tightened the gap between the known lower and upper bounds. In particular, a (nonconvex) universal cover of area at most 0.260437 has been constructed, improving on earlier, weaker upper bounds. In the convex case, the upper bound has been improved to 0.270911861, while a min-max strategy for the area of a convex set containing a segment, a triangle and a rectangle shows a lower bound of 0.232239 for a convex cover. In the 1970s, John Wetzel conjectured that a 30° circular sector of unit radius is a cover, with area π/12 ≈ 0.2618. Two proofs of the conjecture have been claimed independently. If confirmed, this will reduce the upper bound for the convex cover by about 3%. See also Moving sofa problem, the problem of finding a maximum-area shape that can be rotated and translated through an L-shaped corridor Kakeya set, a set of minimal area that can accommodate every unit-length line segment (with translations allowed, but not rotations) Lebesgue's universal covering problem, find the smallest convex area that can cover any planar set of unit diameter Bellman's lost-in-a-forest problem, find the shortest path to escape from a forest of known size and shape. Notes References Discrete geometry Unsolved problems in geometry Recreational mathematics Eponyms in geometry Curves Area
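Wetzel's conjectured value follows from the standard sector-area formula; the worked equation below just makes the π/12 figure quoted above explicit:

```latex
% Area of the conjectured cover: a circular sector of radius r = 1 and
% central angle theta = 30 degrees = pi/6 radians.
\[
  A \;=\; \tfrac{1}{2}\, r^{2} \theta
    \;=\; \tfrac{1}{2} \cdot 1^{2} \cdot \tfrac{\pi}{6}
    \;=\; \tfrac{\pi}{12} \;\approx\; 0.2618 ,
\]
% which indeed lies between the convex lower bound 0.232239 and the
% convex upper bound 0.270911861 quoted above.
```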
Moser's worm problem
Physics,Mathematics
616
1,648,041
https://en.wikipedia.org/wiki/Vair
Vair (from Latin varius "variegated"), originating as a processed form of squirrel fur, gave its name to a set of different patterns used in heraldry. Heraldic vair represents a kind of fur common in the Middle Ages, made from pieces of the greyish-blue backs of squirrels sewn together with pieces of the animals' white underbellies. Vair is the second-most common fur in heraldry, after ermine. Origins The word vair, with its variant forms veir and vairé, was brought into Middle English from Old French, from Latin varius "variegated", and has also been known by a Latin term meaning "variegated work". The squirrel in question is a variety of the Eurasian red squirrel, Sciurus vulgaris. In the coldest parts of Northern and Central Europe, especially the Baltic region, the winter coat of this squirrel is blue-grey on the back and white on the belly, and was much used for the lining of cloaks called mantles. It was sewn together in alternating cup-shaped pieces of back and belly fur, resulting in a pattern of grey-blue and grey-white which, when simplified in heraldic drawing and painting, became blue and white in alternating pieces. Variations In early heraldry, vair was represented by means of straight horizontal lines alternating with wavy lines. Later it mutated into a pattern of bell- or pot-like shapes, conventionally known as panes or "vair bells", of argent and azure, arranged in horizontal rows, so that the panes of one tincture form the upper part of the row, while those of the opposite tincture are on the bottom. The early form of the fur is still sometimes found, under the name vair ondé (wavy vair) or vair ancien (ancient vair) (Ger. Wolkenfeh, "cloud vair"). The only mandatory rule concerning the choice of tincture is respect for the heraldic rule of tincture, which requires the use of a metal and a colour. When the pattern of vair is used with other colours, the field is termed vairé or vairy of the tinctures used. Normally vairé consists of one metal and one colour, although ermine or one of its variants is sometimes used, with an ermine spot appearing in each pane of that tincture. Vairé of four colours (Ger. Buntfeh, "gay-coloured" or "checked vair") is also known, usually consisting of two metals and two colours. Traditionally vair was produced in three sizes, and each size came to be depicted in armory. A field consisting of only three rows, representing the largest size, was termed gros vair or beffroi (from the same root as the English word belfry); vair of four rows was simply vair, while if there were six rows, representing the smallest size, it was menu-vair (whence the English word miniver). This distinction is not generally observed in English heraldry, and is not strictly observed in continental heraldry, although in French heraldry it is customary to specify the number of rows if there are more than four. Arrangement variants There are also forms of vair in which the arrangement of the rows is changed. The most familiar is counter-vair (Fr. contre vair), in which succeeding rows are reversed instead of staggered, so that the bases of the panes of each tincture are opposite those of the same tincture in adjoining rows. Less common is vair in pale (Fr. vair en pal or vair appointé, Ger. Pfahlfeh), in which the panes of each tincture are arranged in vertical columns. In German heraldry one also finds Stürzpfahlfeh, or reversed vair in pale. Vair in bend (Fr. vair en bande) and vair in bend sinister (Fr. vair en barre), in which the panes are arranged in diagonal rows, are found in continental heraldry. Vair in point (Fr. vair en pointe, Ger. Wogenfeh, "wave vair") is formed by reversing alternate rows, as in counter-vair, and then displacing them by half the width of a pane, forming an undulating pattern across adjoining rows. German heraldry also uses a form called Wechselfeh, or "alternate vair", in which each pane is divided in half along a vertical line, one side being argent and the other azure. Any of these may be combined with size or colour variations, though variants that change several aspects at once are correspondingly rarer. Potent and other shapes Potent (Ger. Sturzkrückenfeh, "upside-down crutch vair") is a similar pattern, consisting of T-shapes. In this form, the familiar "vair bell" is replaced by a T-shaped figure, known as a "potent" due to its resemblance to a crutch. The pattern used with tinctures other than argent and azure is termed potenté or potenty of those colours. The appearance of this shape is thought by some authorities to have originated from crude draftsmanship, although others regard it as an old and perfectly acceptable variation. A regularly encountered variation of potent is counter-potent or potent-counter-potent (Ger. Gegensturzkrückenfeh), which is produced in the same fashion as counter-vair; potent in point (Ger. Verschobenes Gegensturzkrückenfeh, "displaced counter-potent") is also found, and there is no reason why one could not, in principle, have potent in bend, potent of four colours, etc. Three other, rarer furs are also seen in continental heraldry, of unclear derivation but most likely arising from variations on vair made to imitate other types of animals: in plumeté or plumetty, the panes are depicted as feathers; and in papelonné or papellony they are depicted as scales, resembling those of a butterfly's wings, whence the name is derived. In German heraldry there is a fur known as Kürsch, or "vair bellies", consisting of panes depicted hairy and brown. Here the phrase "vair bellies" may be a misnomer, as the belly of the red squirrel is always white, although its summer coat is indeed reddish brown. See also Tincture (heraldry) References Veale, Elspeth M.: The English Fur Trade in the Later Middle Ages, 2nd Edition, London: Folio Society, 2005. Furs Visual motifs
Vair
Mathematics
1,409
24,137,711
https://en.wikipedia.org/wiki/Vertical%E2%80%93horizontal%20illusion
The vertical–horizontal illusion is the tendency for observers to overestimate the length of a vertical line relative to a horizontal line of the same length. This involves a bisecting component that causes the bisecting line to appear longer than the line that is bisected. People often overestimate or underestimate the length of the bisecting line relative to the bisected line of the same length, even when they are aware that the lines are of the same length. Cross-cultural differences in susceptibility to the vertical–horizontal illusion have been noted: people from Western cultures and people living in urban landscapes show more susceptibility than those from Eastern cultures or those living in open landscapes. Types of vertical–horizontal illusions There are several different configurations of the vertical–horizontal illusion. The three configurations which seem to produce the highest illusion magnitude are the L configuration, the plus (+) configuration, and the inverted-T configuration. Of these three, the inverted-T configuration produces the highest illusion magnitude. When the bisecting line of the T illusion is oriented horizontally, the illusion magnitude is lower; when it is oriented vertically, the illusion magnitude is higher. Development A gradual decrease in error on vertical–horizontal illusions occurs as participants' age increases from eight to fourteen years. Gough and Meschieri attribute this decrease in error to the child's improved ability to detect and de-center their attention in a visual display, i.e. to position their body differently to gain other perspectives. Children who showed greater personal independence, verbal articulation, and visual scanning ability were more effective and resourceful in their ability to gauge vertical–horizontal illusions. Cross-cultural differences Cross-cultural differences in susceptibility to the vertical–horizontal illusion have been noted in several studies. People living in developed urban cities show greater susceptibility than people living in rural areas. An explanation could be that those in rural areas are more accustomed to living in round houses on flat plains or scrubland. Rural inhabitants have more exposure to distance and to living on plains than people in highly developed, commercialized cultures. However, differences in the strength of the vertical–horizontal illusion or the related Müller-Lyer illusion for these groups are inconsistent at best. Hemispheric neglect Participants with hemispatial neglect had increased difficulty perceiving the equality of the lines of the vertical–horizontal illusion, in comparison with those in the control group. Montalembert's study, among others, supports the notion that these types of illusions are processed in the left hemisphere of the brain. Gender differences Gender differences have been found with regard to vertical–horizontal illusions. Rasmjou's 1998 study found men to outperform women in perceiving the vertical–horizontal illusion. The results could stem from hemispheric asymmetries and/or biological differences between men's and women's brains. Although women were found to have a higher illusion magnitude on vertical–horizontal illusion tasks, this does not mean men are better judges of distance than women, as there has been minimal research on this topic. Additional research is needed to draw a significant relationship between illusion magnitude and real-world tasks.
These differences could also be due to social learning differences between men and women. Functional applications Functional applications of the vertical–horizontal illusion exist. Elliot et al. studied the effects of the horizontal and vertical illusion and how the perceived illusion can influence visuo-motor coordination, i.e. motor activity dependent on sight. The study specifically focused on how the perceived height of a step, manipulated by the vertical and horizontal illusion, influenced stepping strategy as shown in toe elevation during step clearance. Their results showed increased toe elevation in conditions where an illusion was perceived, leading them to conclude that there is a correlation between visual illusion and visuo-motor coordination. This can be applied in the real world by developing better safety strategies in places such as nursing homes. See also Vertical–horizontal illusion (Wikiversity tutorial) Geometrical-optical illusions References Vision Optical illusions
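In studies of this kind, illusion magnitude is typically quantified as the percentage by which the length judged equal to a standard line deviates from that standard. A minimal Python sketch of that bookkeeping, with hypothetical numbers (no values here are drawn from the studies cited above):

def illusion_magnitude(standard_mm: float, adjusted_mm: float) -> float:
    """Percent overestimation of the vertical standard: positive values
    mean the observer needed a longer horizontal line to see the two
    lines as equal, i.e. the vertical was overestimated."""
    return 100.0 * (adjusted_mm - standard_mm) / standard_mm

# Hypothetical trial: a 100 mm vertical standard is judged equal
# to a 108 mm horizontal comparison line.
print(illusion_magnitude(100.0, 108.0))  # 8.0 -> 8% overestimation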
Vertical–horizontal illusion
Physics
841
34,778,599
https://en.wikipedia.org/wiki/RNA%20CoSSMos
The RNA Characterization of Secondary Structure Motifs database (RNA CoSSMos) is a repository of three-dimensional nucleic acid PDB structures containing secondary structure motifs (such as hairpin loops and other loop motifs). See also Nucleic acid secondary structure References External links https://www.rnacossmos.com/ Biological databases RNA Biophysics Molecular geometry
RNA CoSSMos
Physics,Chemistry,Biology
74
184,072
https://en.wikipedia.org/wiki/Forced%20perspective
Forced perspective is a technique that employs optical illusion to make an object appear farther away, closer, larger or smaller than it actually is. It manipulates human visual perception through the use of scaled objects and the correlation between them and the vantage point of the spectator or camera. It has uses in photography, filmmaking and architecture. In filmmaking Forced perspective had been a feature of German silent films, and Citizen Kane revived the practice. Movies, especially B-movies in the 1950s and 1960s, were produced on limited budgets and often featured forced perspective shots. Forced perspective can be made more believable when environmental conditions obscure the difference in perspective. For example, the final scene of the famous movie Casablanca takes place at an airport in the middle of a storm, although the entire scene was shot in a studio. This was accomplished by using a painted backdrop of an aircraft, which was "serviced" by dwarfs standing next to the backdrop. A downpour (created in the studio) draws much of the viewer's attention away from the backdrop and extras, making the simulated perspective less noticeable. Role of light Early instances of forced perspective in low-budget motion pictures showed objects that were clearly different from their surroundings, often blurred or at a different light level. The principal cause of this was geometric. Light from a point source travels in a spherical wave, decreasing in intensity (or illuminance) as the inverse square of the distance travelled. This means that a light source must be four times as bright to produce the same illuminance at an object twice as far away. Thus, to create the illusion of a distant object being at the same distance as a near object and scaled accordingly, much more light is required. When shooting with forced perspective, it is important to have the aperture stopped down sufficiently to achieve proper depth of field (DOF), so that the foreground object and background are both sharp. Since miniature models need to be subjected to far greater lighting than the area of action, the main focus of the camera, it is important to ensure that they can withstand the significant heat generated by the incandescent light sources typically used in film and TV production. In motion Peter Jackson's film adaptations of The Lord of the Rings make extended use of forced perspective. Characters apparently standing next to each other would be displaced by several feet in depth from the camera. This, in a still shot, makes some characters (Dwarves and Hobbits) appear much smaller than others. If the camera's point of view were moved, then parallax would reveal the true relative positions of the characters in space. Even if the camera is just rotated, its point of view may move accidentally if the camera is not rotated about the correct point. This point is called the 'zero-parallax point' (or front nodal point), and is approximated in practice as the centre of the entrance pupil. An extensively used technique in The Lord of the Rings: The Fellowship of the Ring was an enhancement of this principle, which could be used in moving shots. Portions of sets were mounted on movable platforms which would move precisely according to the movement of the camera, so that the optical illusion would be preserved at all times for the duration of the shot. The same techniques were used in the Harry Potter movies to make the character Rubeus Hagrid look like a giant.
Props around Harry and his friends are of normal size, while seemingly identical props placed around Hagrid are in fact smaller. Comic effects As with many film genres and effects, forced perspective can be used to visual-comedy effect. Typically, when an object or character is portrayed in a scene, its size is defined by its surroundings. A character then interacts with the object or character, in the process revealing that the viewer has been fooled and that forced perspective is in use. The 1930 Laurel and Hardy movie Brats used forced perspective to depict Stan and Ollie simultaneously as adults and as their own sons. An example used for comic effect can be found in the slapstick comedy Top Secret! in a scene which appears to begin as a close-up of a ringing phone with the characters in the distance. However, when the character walks up to the phone (towards the camera) and picks it up, it becomes apparent that the phone is extremely oversized rather than being close to the camera. Another scene in the same movie begins with a close-up of a wristwatch. The next cut shows that the character actually has a gargantuan wristwatch. The same technique is also used in the Dennis Waterman sketch in the British BBC sketch show Little Britain. In the television version, larger-than-life props are used to make the caricatured Waterman look just three feet tall or less. In History of the World, Part I, while escaping the French peasants, Mel Brooks' character, Jacques, who is doubling for King Louis, runs down a hall of the palace, which turns into a ramp, revealing the smaller forced-perspective door at the end. As he backs down into the normal part of the room, he mutters, "Who designed this place?" One of the recurring The Kids in the Hall sketches featured Mr. Tyzik, "The Headcrusher", who used forced perspective (from his own point of view) to "crush" other people's heads between his fingers. This is also done by the character Sheldon Cooper in the TV show The Big Bang Theory to his friends when they displease him. In the making of Season 5 of Red vs. Blue, the creators used forced perspective to make the character of Tucker's baby, Junior, look small. In the game, the alien character used as Junior is the same height as other characters. The short-lived 2013 Internet meme "baby mugging" used forced perspective to make babies look like they were inside items like mugs and teacups. In architecture In architecture, a structure can be made to seem larger, taller, farther away or otherwise by adjusting the scale of objects in relation to the spectator, increasing or decreasing perceived depth. When forced perspective is supposed to make an object appear farther away, the following method can be used: by steadily decreasing the scale of objects from expectancy and convention toward the farthest point from the spectator, an illusion is created that the scale of said objects is decreasing due to their distant location. In contrast, the opposite technique was sometimes used in classical garden designs and other follies to shorten the perceived distances of points of interest along a path. The Statue of Liberty is built with a slight forced perspective so that it appears more correctly proportioned when viewed from its base. When the statue was designed in the late 19th century (before easy air flight), there were few other angles from which to view the statue.
This caused a difficulty for special effects technicians working on the movie Ghostbusters II, who had to back off on the amount of forced perspective used when replicating the statue for the movie so that their model (which was photographed head-on) would not look top-heavy. This effect can also be seen in Michelangelo's statue of David. Through depth perception The technique takes advantage of the visual cues humans use to perceive depth, such as angular size, aerial perspective, shading, and relative size. In film, photography and art, perceived object distance is manipulated by altering fundamental monocular cues used to discern the depth of an object in the scene, such as aerial perspective, blurring, relative size and lighting. Using these monocular cues in concert with angular size, the eyes can perceive the distance of an object. Artists are able to freely move the visual plane of objects by obscuring these cues to their advantage. Increasing an object's distance from the audience makes it appear smaller; its apparent size decreases as its distance from the audience increases. This is the manipulation of angular, and hence apparent, size. A person perceives the size of an object based on the size of the object's image on the retina. This depends solely on the angle created by the rays coming from the topmost and bottommost parts of the object that pass through the center of the lens of the eye. The larger the angle an object subtends, the larger the apparent size of the object. The subtended angle increases as the object moves closer to the lens. Two objects of different actual size have the same apparent size when they subtend the same angle. Similarly, two objects of the same actual size can have drastically different apparent sizes when they are moved to different distances from the lens. Calculating angular size The formula for calculating angular size is θ = 2 arctan(h / (2D)), in which θ is the subtended angle, h is the actual size of the object and D is the distance from the lens to the object. For example, an object 1 m tall viewed from 2 m away subtends the same angle as an object 2 m tall viewed from 4 m away. Techniques employed Solely manipulating angular size by moving objects closer and farther away cannot fully trick the eye. Objects that are farther away from the eye have a lower luminance contrast due to atmospheric scattering of rays: fewer rays of light reach the eye from more distant objects. Using the monocular cue of aerial perspective, the eye uses the relative luminance of objects in a scene to discern relative distance. Filmmakers and photographers counter this cue by manually increasing the luminance of objects farther away to equal that of objects in the desired plane. This effect is achieved by shining more light on the more distant object. Because illuminance decreases as the inverse square of the distance (1/d², where d is the distance from the eye), artists can calculate the exact amount of light needed to counter the cue of aerial perspective. Similarly, blurring can create the opposite effect by giving the impression of depth. Selectively blurring an object moves it out of its original visual plane without having to physically move the object. A related perceptual idea in film is that of Gestalt psychology, which holds that people often perceive the whole of an object rather than the sum of its individual parts. Another monocular cue of depth perception is that of lighting and shading. Shading in a scene or on an object allows the audience to locate the light source relative to the object.
Making two objects at different distances have the same shading gives the impression that they are in similar positions relative to the light source; therefore, they appear closer to each other than they actually are. Artists may also employ the simpler technique of manipulating relative size. Once the audience becomes acquainted with the size of an object in proportion to the rest of the objects in a scene, the photographer or filmmaker can replace the object with a larger or smaller replica to change another part of the scene's apparent size. This is done frequently in movies. For example, to make one person appear as a giant next to a "regular-sized" person, a filmmaker might begin with a shot of two identical glasses together, then follow with the person who is supposed to play the giant holding a much smaller replica of the glass and the person who is playing the regular-sized person holding a much larger replica. Because the audience has seen that the glasses are the same size in the original shot, it perceives the characters as different sizes based on their size relative to the glasses they hold. A painter can give the illusion of distance by adding blue or red tinting to the color of the object being painted. This monocular cue takes advantage of the tendency for the colors of distant objects to shift towards the blue end of the spectrum, while the colors of closer objects shift toward the red end of the spectrum. The optical phenomenon is known as chromostereopsis. Examples In film Forced perspective has been employed to realize characters in film. One notable example is Rubeus Hagrid, the half-giant in the Harry Potter series. The technique is used in the Lord of the Rings series for depicting the apparent heights of the hobbit characters, such as Frodo, who are supposed to be around half the height or less of the humans and wizards, such as Gandalf. In reality, the difference in height between Elijah Wood, who plays the hobbit Frodo, and Ian McKellen, who plays the wizard Gandalf, is comparatively small. The use of camera angles and trick scenery and props creates the illusion of a much greater difference in size and height. Numerous camera-angle tricks are used in the comedy film Elf (2003) to make the elf characters in the movie appear smaller than the human characters. In art In his painting Still life with a curtain, Paul Cézanne creates the illusion of depth by using brighter colors on objects closer to the viewer and dimmer colors and shading to distance the "light source" from objects that he wanted to appear farther away. His shading technique allows the audience to discern the distance between objects from their relative distances to a stationary light source that illuminates the scene. Furthermore, he uses a blue tint on objects that should appear farther away and a redder tint on objects in the foreground. Full size dioramas Modern museum dioramas may be seen in most major natural history museums. Typically, these displays use a tilted plane to represent what would otherwise be a level surface, incorporate a painted background of distant objects, and often employ false perspective, carefully modifying the scale of objects placed on the plane to reinforce the illusion through depth perception, in which objects of identical real-world size placed farther from the observer appear smaller than those closer.
Often the distant painted background or sky will be painted upon a continuous curved surface so that the viewer is not distracted by corners, seams, or edges. All of these techniques are means of presenting a realistic view of a large scene in a compact space. A photograph or single-eye view of such a diorama can be especially convincing, since in this case there is no distraction by the binocular perception of depth. Carl Akeley, a naturalist, sculptor, and taxidermist, is credited with creating the first habitat diorama, in 1889. Akeley's diorama featured taxidermied beavers in a three-dimensional habitat with a realistic, painted background. With the support of curator Frank M. Chapman, Akeley designed the popular habitat dioramas featured at the American Museum of Natural History. Combining art with science, these exhibitions were intended to educate the public about the growing need for habitat conservation. The modern AMNH Exhibitions Lab is charged with the creation of all dioramas and other immersive environments in the museum. Theme parks Forced perspective is extensively employed at theme parks and in other such architecture as found at Disneyland and in Las Vegas, often to make structures seem larger than they are in reality where physically larger structures would not be feasible or desirable, or to otherwise provide an optical illusion for entertainment value. Most notably, it is used by Walt Disney Imagineering in the Disney theme parks. Some notable examples of forced perspective in the parks, used to make the objects look bigger, are the castles (Sleeping Beauty, Cinderella, Belle, Magical Dreams, and Enchanted Storybook). One of the most notable examples of forced perspective being used to make an object appear smaller is The American Adventure pavilion in Epcot. See also Ames room Anamorphosis Depth perception Perspective distortion (photography) Trompe-l'œil Vista paradox References External links Special effects Photographic techniques Architectural communication Optical illusions
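The angular-size relationship given above lends itself to a quick numerical check. The short Python sketch below (with illustrative values only) computes θ = 2·arctan(h/(2D)) and shows that doubling both an object's size and its distance leaves the subtended angle, and hence the apparent size, unchanged, while the inverse-square law from the "Role of light" section dictates how much extra light the farther object needs:

from math import atan, degrees

def angular_size_deg(h: float, d: float) -> float:
    """Angle subtended at the eye by an object of height h at distance d."""
    return degrees(2 * atan(h / (2 * d)))

# Two objects of different real size subtend the same angle:
print(angular_size_deg(h=1.0, d=2.0))  # 1 m object, 2 m away: ~28.07 degrees
print(angular_size_deg(h=2.0, d=4.0))  # 2 m object, 4 m away: ~28.07 degrees

# Inverse-square law: the object twice as far away needs
# (4/2)^2 = 4 times the light for equal illuminance at the camera.
print((4.0 / 2.0) ** 2)  # 4.0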
Forced perspective
Physics,Engineering
3,148
27,096,032
https://en.wikipedia.org/wiki/Recall%20test
In cognitive psychology, a recall test is a test of memory in which participants are presented with stimuli and then, after a delay, are asked to remember as many of the stimuli as possible. Memory performance can be indicated by measuring the percentage of stimuli the participant was able to recall. An example of this would be studying a list of 10 words and later recalling 5 of them; this is 50 percent recall. Participants' responses also may be analyzed to determine if there is a pattern in the way items are being recalled from memory. For example, if participants are given a list consisting of types of vegetables and types of fruit, their recall can be assessed to determine whether they grouped vegetables together and fruits together. Recall is also involved when a person is asked to recollect life events, such as graduating high school, or to recall facts they have learned, such as the capital of Florida. Measuring recall contrasts with measuring recognition, in which people are asked to pick an item that has previously been seen or heard from a number of other items that have not been previously seen or heard, as occurs, for example, during a typical multiple-choice exam. Types of recall Free recall test Free recall is one of the most commonly used recall tests. In free recall tests participants are asked to study a list of words and then are asked to recall the words in whatever order they choose. The words to be recalled are typically presented one at a time and for a short duration, and recall starts immediately after the final item is shown. The items can be listed through either verbal or written recall. Immediate recall of the items (immediate free recall) is the most common form of free recall test, but recall of the items can also be delayed (delayed free recall). Both immediate free recall and delayed free recall have been used to test the recency and primacy effects. Free recall is most often used to measure the number of items recalled from a list. Murdock, in an experiment on serial position effects, used six groups of 103 participants. Each group was given a different combination of list length and presentation rate. Three of the groups were shown lists of ten, fifteen, and twenty words with a presentation rate of two seconds per word. The other three groups were shown lists of twenty, thirty, and forty words with a one-second presentation rate for each word. There were eighty lists in total, consisting of randomly selected common English words. After the presentation of each list, subjects were asked to recall as many words as possible in any order. Results from the experiment showed that all groups exhibited both primacy effects and recency effects. Recency effects were exhibited regardless of the length of the list, and were strongest for the words in the last eight serial positions. The primacy effect extended over the first four serial positions. The serial recall paradigm is a form of recall in which the participants have to list the items in the order in which they were presented. Research shows that the learning curve for serial recall increases linearly with every trial. Bruno, Miller, and Zimmerman (1955) conducted an experiment to learn why the serial recall learning curve increases linearly.
They were testing to see whether this linear increase is a result of the order in which the participant sees the items, or whether it instead depends on the order in which the participant is told to recall the items. The study involved three different conditions: serial recall, free recall with the items to be recalled randomized before each trial, and free recall with the order of the items kept constant. The experiment tested nine college students on 18 series of words. In addition to the linear serial recall learning curve, it was found that more words are forgotten when recall is free than when it is serial. This study also supported the notion that the difference between the types of recall depends on the order in which the learner must recall the items, and not on the order in which the items are presented. Cued recall test A cued recall test is a procedure for testing memory in which a participant is presented with cues, such as words or phrases, to aid recall of previously experienced stimuli. Endel Tulving and Zena Pearlstone (1966) conducted an experiment in which they presented participants with a list of words to be remembered. The words were from specific categories such as birds (pigeon, sparrow), furniture (chair, dresser), and professions (engineer, lawyer). The categories were not made apparent in the original list. Participants in the free recall group were asked to write down as many words as they could remember from the list. Participants in the cued recall group were also asked to recall the words, but this group was provided with the names of the categories, "birds", "furniture", and "professions". The results of Tulving and Pearlstone's experiment demonstrate that retrieval cues aid memory: participants in the free recall group recalled 40 percent of the words, whereas participants in the cued recall group recalled 75 percent of the words. Factors affecting recall Encoding specificity The principle of encoding specificity states that we encode information along with its context. Memory retrieval makes use of cues both from the context in which the information was encoded and from the environment in which it is retrieved. An experiment demonstrating encoding specificity was conducted by D. R. Godden and Alan Baddeley (1975) in their "diving experiment". During this experiment, one group of participants studied a list of words underwater while another group of participants studied the same list of words on land. These groups were then divided, so half of the participants in the land and water groups were tested for recall on land and half were tested underwater. The participants demonstrated better recall when the context of retrieval matched the context of encoding, for example having studied underwater and being tested underwater. State-dependent learning This is another example of how matching the conditions at encoding and retrieval can influence memory. State-dependent learning is associated with a specific internal state, such as mood or state of awareness. According to the principle of state-dependent learning, memory will be better when a person's internal state during retrieval matches his or her internal state during encoding. Two ways of matching encoding and retrieval include matching the physical situation (encoding specificity) or an internal feeling (state-dependent learning). Transfer-appropriate processing Transfer-appropriate processing (TAP) shows that our ability to recall information well does not depend only on the depth at which we learn it.
It shows that how we connect information and build relationships with other encoded memories is important in being able to recall that information. Schendan and Kutas (2007) performed an experiment in which they confirmed that recall of memories is best when the learning and retrieval situations match. They found that significantly more material can be recalled when what has been learned is grouped together and paired with the sensory conditions under which it was learned. Franks and colleagues performed thirteen experiments on TAP and found that memory is best enhanced when the learning situation is matched to the retrieval situation. Levels of processing theory The idea behind the levels of processing theory is that the depth of processing affects how well you encode the information you are learning. Craik and Tulving performed a study in 1975 in which the participants were presented with a list of 60 words, each word accompanied by three questions. The questions varied from requiring them to think about the word to just remembering what they saw. Craik and Tulving discovered that the words that required deeper processing were remembered best. They also discovered that recall improves the more familiar the stimulus is. The reason for this is that a familiar stimulus already has many connections to memories that have been encoded, and these connections strengthen the memory of the stimulus being presented. Levels of processing theory goes even further to show that recall is increased when we are asked to remember material in the way it was originally presented to us. References Goldstein, E. B. (2015). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience (4th ed.). Stamford, CT: Cengage Learning. Cognitive psychology
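The recall-percentage scoring described at the start of this article is simple to express in code; here is a minimal Python sketch (the word lists and function name are purely illustrative):

def recall_percentage(studied: list[str], recalled: list[str]) -> float:
    """Free-recall score: percentage of studied items correctly recalled."""
    studied_set = set(studied)
    correct = sum(1 for item in recalled if item in studied_set)
    return 100.0 * correct / len(studied)

studied = ["pigeon", "sparrow", "chair", "dresser", "engineer",
           "lawyer", "apple", "sofa", "banana", "judge"]
recalled = ["chair", "pigeon", "apple", "sofa", "lawyer"]
print(recall_percentage(studied, recalled))  # 50.0 -> 50 percent recall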
Recall test
Biology
1,644
43,657,724
https://en.wikipedia.org/wiki/Sogou%20Baike
Sogou Baike (), or Sogou Encyclopedia, formerly Soso Baike (), is a Chinese-language collaborative web-based encyclopedia provided by the Chinese tech company Sogou and formerly by the search engine Soso. Sogou is part of Tencent, China's largest Internet portal. It was officially launched as Soso Baike on 30 March 2009, and was officially renamed Sogou Baike and relaunched in 2013. Like Wikipedia, Sogou Baike is a collaboratively written online encyclopedia with user-generated content, though it is operated by a for-profit company rather than a non-profit organization. Existing problems and defects Although Sogou Baike has a review mechanism similar to Baidu Baike's, it is relatively immature. On comparable sites, most content is discussed and modified by ordinary users in a broadly democratic manner; Sogou Baike instead places more emphasis on the opinions of its administrators, with the result that many malicious entries are not deleted in time, while some original entries that attract much attention cannot be promoted. Some parody entries are suspected of infringement and have caused dissatisfaction among netizens. At the same time, some correct modifications have failed to pass review, and users were unable to appeal. In addition, some entries that were rated as "high-quality versions" with "rich content", such as the expanded version of the "魔法王国" entry, were deleted without reason. In its early development, Sogou Baike imitated other encyclopedia products to a certain extent. For example, the content of its help page is very similar to that of Baidu Baike, and some commentators have pointed out that some entries imitate Hudong Baike and Baidu Baike, with content copied directly from those encyclopedias and from Chinese Wikipedia. Copyright infringement allegations In 2018, the publishers of the Hanyu Da Cidian sued Sogou, the parent company of Sogou Baike, for copyright infringement within the encyclopedia. The People's Court of Haidian District in Beijing took on the case. See also Baidu Baike Chinese Wikipedia Hudong Baike References General references Chinese online encyclopedias Tencent Wiki communities
Sogou Baike
Technology
454
72,430,652
https://en.wikipedia.org/wiki/Novosibirsk%20Institute%20of%20Program%20Systems
Novosibirsk Institute of Program Systems () is a scientific organization in the Sovetsky District of Novosibirsk, Russia. It was founded in 1972. History In 1972, a branch of the Lebedev Institute of Precision Mechanics and Computer Engineering was established in Novosibirsk. In 1992, the branch became an independent organization. In 2002, 240 people worked at the institute. Activity The organization is engaged in the development of automated control systems. It has created automated control systems for the Diamonds of Russia – Sakha company and the Surgut-1 Power Station, among others. References Research institutes in Novosibirsk 1972 establishments in the Soviet Union Research institutes established in 1972 Automation organizations
Novosibirsk Institute of Program Systems
Engineering
140
11,286,678
https://en.wikipedia.org/wiki/Dockmaster
A dockmaster is a person in charge of a dock used for freight, logistics, and the repair or maintenance of ships (a shipyard or drydock). The title is distinct from harbormaster, which is sometimes a higher rank than dockmaster. A dockmaster is assisted by a deputy dockmaster and an assistant dockmaster. For example, in the Port of London in the United Kingdom, shipping movements in dock complexes, and within a short distance of the outer lock gates (i.e. in the tidal river), are under the jurisdiction of a dockmaster and staff. Each assistant dockmaster had a marine staff of about 70; in all, each dock complex employed about 360 marine staff. The Port of London consists of all the tidal portions of the River Thames, from Margate on the south coast and Clacton-on-Sea on the north through to Teddington. Up until the 1980s the Port of London Authority (PLA) dockmasters were responsible for five large enclosed dock systems and miles of quayside isolated from the tides by locks. These systems were the London and St Katharine Docks, Surrey Commercial Docks, West India and Millwall Docks, Royal Docks, and Tilbury Docks. Of these only Tilbury is operational, along with DP World's London Gateway. Eventually Tilbury Docks were privatized and became the Port of Tilbury, with their dockmaster being redesignated as harbourmaster; this led to the PLA harbourmaster being redesignated chief harbourmaster. "Dockmaster" may also refer to: DockMaster (product), a ropeless docking and charging system for electric boats by Lemvos; and DockMaster (software), marina management software. References Marine occupations
Dockmaster
Physics
347
1,041,245
https://en.wikipedia.org/wiki/Wire%20chamber
A wire chamber or multi-wire proportional chamber is a type of proportional counter that detects charged particles and photons and can give positional information on their trajectory by tracking the trails of gaseous ionization. The technique was an improvement over the bubble chamber particle detection method, which used photographic techniques, as it allowed high-speed electronics to track the particle path. Description The multi-wire chamber uses an array of wires held at a positive DC voltage (anodes), which run through a chamber with conductive walls held at a lower potential (cathode). The chamber is filled with gas, such as an argon/methane mix, so that any ionizing particle that passes through the chamber will ionize surrounding gaseous atoms and produce ion pairs, consisting of positive ions and electrons. These are accelerated by the electric field across the chamber, preventing recombination; the electrons are accelerated toward the anode, and the positive ions toward the cathode. At the anode a phenomenon known as a Townsend avalanche occurs. This results in a measurable current flow for each original ionizing event, proportional to the ionization energy deposited by the detected particle. By separately measuring the current pulses from each wire, the particle trajectory can be found. Adaptations of this basic design are the thin gap, resistive plate and drift chambers. The drift chamber has been further specialized in designs known as the time projection chamber and the microstrip gas chamber, and in those types of detectors that use silicon. Development In 1968, Georges Charpak, while at the European Organization for Nuclear Research (CERN), invented and developed the multi-wire proportional chamber (MWPC). This invention earned him the Nobel Prize in Physics in 1992. The chamber improved on the earlier bubble chamber's detection rate of only one or two particles per second, achieving up to 1,000 particle detections per second. The MWPC produced electronic signals from particle detection, allowing scientists to examine data via computers. The multi-wire chamber is a development of the spark chamber. Fill gases In a typical experiment, the chamber contains a mixture of these gases: argon (the main component), isobutane, and freon (0.5%). The chamber could also be filled with: liquid xenon; liquid tetramethylsilane; or tetrakis(dimethylamino)ethylene (TMAE) vapour. Use For high-energy physics experiments, it is used to observe a particle's path. For a long time, bubble chambers were used for this purpose, but with the improvement of electronics, it became desirable to have a detector with fast electronic read-out. (In bubble chambers, photographic exposures were made and the resulting printed photographs were then examined.) A wire chamber is a chamber with many parallel wires, arranged as a grid and held at high voltage, with the metal casing at ground potential. As in the Geiger counter, a particle leaves a trace of ions and electrons, which drift toward the case or the nearest wire, respectively. By marking off the wires which had a pulse of current, one can see the particle's path. The chamber has very good relative time resolution, good positional accuracy, and self-triggered operation (Ferbel 1977). The development of the chamber enabled scientists to study particle trajectories with much-improved precision, and for the first time to observe and study the rarer interactions produced in particle collisions.
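As a toy illustration of the read-out principle just described (not the software of any actual detector), the Python sketch below marks off which anode wires fired in several parallel wire planes and fits a straight track through the implied positions; the wire pitch and plane depths are invented values:

import numpy as np

WIRE_PITCH_MM = 2.0  # spacing between adjacent anode wires (illustrative)

# Wire index that registered a current pulse in each plane,
# keyed by the plane's depth z along the beam axis (mm).
hits = {0.0: 12, 50.0: 15, 100.0: 18, 150.0: 21}

z = np.array(list(hits.keys()))
x = np.array([idx * WIRE_PITCH_MM for idx in hits.values()])

# A least-squares straight line x = a*z + b approximates the track.
a, b = np.polyfit(z, x, 1)
print(f"track slope {a:.3f}, intercept {b:.1f} mm")  # slope 0.120, intercept 24.0 mm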
Drift chambers If one also precisely measures the timing of the current pulses on the wires and takes into account that the electrons and ions need some time to drift to the nearest wire, one can infer the distance at which the particle passed the wire. This greatly increases the accuracy of the path reconstruction, and the resulting detector is known as a drift chamber. A drift chamber functions by balancing the energy that drifting charges lose in collisions with gas molecules against the energy they gain from the strong electric field that accelerates them. The design is similar to that of the multi-wire proportional chamber, but with a greater distance between the central-layer wires. Charged particles are detected within the chamber through the ionization of gas molecules along the particle's path. The Fermilab detector CDF II contains a drift chamber called the Central Outer Tracker. The chamber contains argon and ethane gas, and wires separated by 3.56-millimetre gaps. If two drift chambers are used with the wires of one orthogonal to the wires of the other, both orthogonal to the beam direction, a more precise detection of the position is obtained. If an additional simple detector (like the one used in a veto counter) is used to detect, with poor or null positional resolution, the particle at a fixed distance before or after the wires, a three-dimensional reconstruction can be made and the speed of the particle deduced from the difference in time of the particle's passage through the different parts of the detector. This setup gives a detector called a time projection chamber (TPC). For measuring the velocity of electrons in a gas (the drift velocity), there are special drift chambers, velocity drift chambers, which measure the drift time for a known location of ionization. See also Bubble chamber Gaseous ionization detector Micropattern gaseous detector Particle detector Wilson chamber References External links Heidelberg lecture on research ionisation chambers Astroparticle physics CERN Experimental particle physics Ionising radiation detectors Laboratory equipment Nuclear physics Particle detectors French inventions
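The drift-time measurement reduces to simple arithmetic: for an (assumed constant) drift velocity, the distance from the sense wire is the drift velocity multiplied by the measured drift time. A minimal Python sketch with illustrative numbers (the velocity below is only a typical order of magnitude for argon-based gas mixtures, not a measured value):

DRIFT_VELOCITY_MM_PER_NS = 0.05  # ~5 cm/us, illustrative

def drift_distance_mm(t_hit_ns: float, t0_ns: float) -> float:
    """Distance from the sense wire inferred from the measured drift time."""
    return DRIFT_VELOCITY_MM_PER_NS * (t_hit_ns - t0_ns)

# A pulse arriving 320 ns after the reference time t0:
print(drift_distance_mm(t_hit_ns=420.0, t0_ns=100.0))  # 16.0 mm from the wire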
Wire chamber
Physics,Technology,Engineering
1,130
76,054,877
https://en.wikipedia.org/wiki/Flight%20control%20computer
A flight control computer (FCC) is a primary component of the avionics system found in fly-by-wire aircraft. It is a specialized computer system that can create artificial flight characteristics and improve handling characteristics by automating a variety of in-flight tasks, reducing the workload on the cockpit flight crew. A flight control computer receives and processes data from a multitude of sensors throughout the aircraft. These sensors monitor variables such as airspeed, altitude, and attitude (the aircraft's orientation in three-dimensional space). Embedded within integrated avionics packages, it executes critical functions such as guidance and navigation. It also controls the plane's flight control surfaces, such as the ailerons, elevators, and rudder. A dedicated flight control computer handles high-level computational tasks, including routing, autopilot functions, and flight management. This computer interfaces with the avionics system and is responsible for displaying flight data on the cockpit's flight deck. The flight control system must be fault tolerant; for that purpose there can be several primary flight control computers (PFCC) and secondary flight control computers (SFCC), where the SFCCs monitor the data output from the PFCCs and, in the case of a failure, can take over the flight controls. In the Boeing 777 there are three primary flight control computers located in the aircraft's electronic equipment bay, responsible for computing and transmitting commands for the normal-mode flight control surfaces to maintain normal flight, including the rudder, elevators, ailerons, flaperons, horizontal stabilizer, multi-functional spoilers, and ground spoilers. References Aerospace engineering
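A toy sketch of the primary/secondary failover idea described above, in Python (this is not Boeing's or any vendor's actual logic; the channel names, health flags and command values are invented for illustration):

from dataclasses import dataclass

@dataclass
class ComputerChannel:
    name: str
    healthy: bool
    command_deg: float  # surface command this channel computed, in degrees

def select_command(primaries: list[ComputerChannel],
                   secondaries: list[ComputerChannel]) -> float:
    """Use the first healthy primary channel; otherwise fall back
    to a healthy secondary that has been monitoring the primaries."""
    for channel in primaries + secondaries:
        if channel.healthy:
            return channel.command_deg
    raise RuntimeError("no healthy flight control computer available")

pfccs = [ComputerChannel("PFCC-1", False, 2.4), ComputerChannel("PFCC-2", True, 2.5)]
sfccs = [ComputerChannel("SFCC-1", True, 2.5)]
print(select_command(pfccs, sfccs))  # 2.5 -> PFCC-2's command is used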
Flight control computer
Engineering
330
26,448,345
https://en.wikipedia.org/wiki/Plasma%20actuator
Plasma actuators are a type of actuator currently being developed for active aerodynamic flow control. Plasma actuators impart force in a similar way to ionocraft. Plasma flow control has drawn considerable attention and been used in boundary layer acceleration, airfoil separation control, forebody separation control, turbine blade separation control, axial compressor stability extension, heat transfer and high-speed jet control. Dielectric barrier discharge (DBD) plasma actuators are widely used in airflow control applications. DBD is a type of electrical discharge commonly used in various electrohydrodynamic (EHD) applications. In DBDs, the emitter electrode is connected to a high-voltage source and exposed to the surrounding air, while the collector electrode is grounded and encapsulated within the dielectric material. When activated, they form a low-temperature plasma between the electrodes by application of a high-voltage AC signal across the electrodes. Consequently, air molecules surrounding the emitter electrode are ionized and accelerated towards the collector electrode by the electric field. Introduction Plasma actuators operating at atmospheric conditions are promising for flow control, mainly because of their physical properties, such as the body force induced by a strong electric field and the heat generated during an electric arc, and because of the simplicity of their construction and placement. In particular, the invention of glow discharge plasma actuators by Roth (2003), which can produce sufficient quantities of glow discharge plasma in atmospheric-pressure air, has helped to yield an increase in flow control performance. Power supply and electrode layouts Either a direct current (DC) power supply, an alternating current (AC) power supply, or a microwave microdischarge can be used for different configurations of plasma actuators. The performance of plasma actuators is determined by the dielectric materials and the power input, the latter being limited by the characteristics of the MOSFETs or IGBTs used. The driving waveform can be optimized to achieve better actuation (higher induced flow speed); however, a sinusoidal waveform may be preferable for simplicity of power supply construction, with the additional benefit of relatively low electromagnetic interference. Pulse-width modulation can be adopted to instantaneously adjust the strength of actuation. Manipulating the encapsulated electrode and distributing it throughout the dielectric layer have been shown to alter the performance of the dielectric barrier discharge (DBD) plasma actuator. Locating the initial encapsulated electrode closer to the dielectric surface results in induced velocities higher than the baseline case for a given voltage. In addition, actuators with a shallow initial electrode are able to more efficiently impart momentum and mechanical power into the flow. Regardless of how much funding has been invested and of various private claims of higher induced speeds, the maximum average speed induced by plasma actuators at atmospheric-pressure conditions, without the assistance of a mechanical amplifier (chamber, cavity, etc.), is still less than 10 m/s. Influence of temperature The surface temperature plays an important role in limiting the usefulness of a dielectric barrier discharge plasma actuator.
The thrust produced by an actuator in quiescent air increases with a power law of the applied voltage. For voltages greater than a threshold, the exponent of the power law is reduced, limiting the thrust increase, and the actuator is said to have "saturated". The onset of saturation can be visually correlated with the inception of filamentary discharge events. The saturation effect can be manipulated by changing the local surface temperature of the dielectric. Also, when dealing with real-life aircraft equipped with plasma actuators, it is important to consider the effect of temperature: the temperature variations encountered during a flight envelope may have adverse effects on actuator performance. It has been found that, for a constant peak-to-peak voltage, the maximum velocity produced by the actuator depends directly on the dielectric surface temperature. The findings suggest that by changing the actuator temperature the performance can be maintained or even altered under different environmental conditions. Increasing the dielectric surface temperature can increase plasma actuator performance by increasing the momentum flux, while consuming slightly more energy. Influence of rain Although plasma actuators have been extensively characterized for their performance as flow control devices, the notion that they might fail under adverse conditions such as dew, drizzle or dust makes them less popular in practical applications. Earlier publications have shown the effect of moisture, water adhesion, and even icing. A recent publication simulated light rain by directly spraying water droplets onto a working plasma actuator, using thrust recovery as the performance metric. It was shown that wet actuators quickly recover the plasma glow and gradually regain thrust comparable to that of a dry actuator. Flow control applications Some recent applications of plasma actuation include high-speed flow control using localized arc filament plasma actuators, and low-speed flow control using dielectric barrier discharges for flow separation, replacing mechanical high-lift devices, 3D wake control, sound control, and sliding discharges. Present research on plasma actuators is mainly focused on three directions: (1) various designs of plasma actuators; (2) flow control applications; and (3) control-oriented modeling of flow applications under plasma actuation. In addition, new experimental and numerical methods are being developed to provide physical insights. Vortex generator A plasma actuator induces a local flow-speed perturbation, which develops downstream into a vortex sheet. As a result, plasma actuators can behave as vortex generators. The difference from traditional vortex generation is that there are no mechanical moving parts and no holes drilled in the aerodynamic surfaces, demonstrating an important benefit of plasma actuators. Three-dimensional actuators such as the serpentine geometry plasma actuator generate streamwise-oriented vortices, which are useful for controlling the flow. Recent work showed significant turbulent drag reduction by modifying energetic modes of transitional flow using these actuators. Active noise control Active noise control normally denotes noise cancellation, that is, a noise-cancellation speaker emits a sound wave with the same amplitude but with inverted phase (also known as antiphase) relative to the original sound. However, active noise control with plasma adopts different strategies.
The first strategy uses the discovery that sound pressure can be attenuated when it passes through a plasma sheet. The second, more widely used strategy is to actively suppress the flow field that is responsible for flow-induced noise (also known as aeroacoustics), using plasma actuators. It has been demonstrated that both tonal noise and broadband noise (for the difference, see tonal versus broadband noise) can be actively attenuated by a carefully designed plasma actuator. Supersonic and hypersonic flow control Plasma has been introduced to hypersonic flow control. Firstly, plasma can be generated much more easily for a hypersonic vehicle at high altitude, with its quite low atmospheric pressure and high surface temperature. Secondly, classical aerodynamic surfaces provide little actuation in this regime. Interest in plasma actuators as active flow control devices is growing rapidly due to their lack of mechanical parts, light weight and high response frequency. The characteristics of a dielectric barrier discharge (DBD) plasma actuator when exposed to an unsteady flow generated by a shock tube have been examined. A study shows that not only is the shear layer outside of the shock tube affected by the plasma, but the passage of the shock front and the high-speed flow behind it also greatly influence the properties of the plasma. Flight control Plasma actuators could be mounted on an airfoil to control flight attitude and thereby flight trajectory. The cumbersome design and maintenance efforts of the mechanical and hydraulic transmission systems in a classical rudder could thus be saved. The price to pay is that one must design a suitable high-voltage/high-power electrical system satisfying EMC rules. Hence, in addition to flow control, plasma actuators hold potential in top-level flight control, in particular for UAVs and for investigations of extraterrestrial planets (with suitable atmospheric conditions). On the other hand, the whole flight control strategy should be reconsidered taking account of the characteristics of plasma actuators. In one preliminary roll control system with DBD plasma actuators, actuators are deployed on both sides of an airfoil, and roll is controlled by activating the actuators according to roll-angle feedback. After studying various feedback control methodologies, the bang-bang control method was chosen for the roll control system based on plasma actuators, because bang-bang control is time optimal and insensitive to variations in plasma actuation, which varies quickly under different atmospheric and electrical conditions. Another study of rolling-moment control using three-dimensional actuation has also been reported for an aircraft wing, where actuators were employed as the leading-edge slat, spoiler, flap, and leading-edge aileron. Results show that serpentine plasma actuators may be employed as high-lift devices (as a DBD slat and DBD spoiler) working at low Reynolds numbers, and that they can have the same effect as a conventional aileron for normal flight maneuvering, with low power consumption. Heat transfer Plasma-actuated heat transfer (or plasma-assisted heat transfer) is a method of cooling hot surfaces assisted by an electrostatic fluid accelerator (EFA) such as a dielectric barrier discharge (DBD) plasma actuator or a corona discharge plasma actuator. Plasma-actuated heat transfer is one of the proposed applications of DBD plasma actuators and needle plasma actuators.
Forced cooling All electronic devices generate excess heat, which must be removed to prevent premature failure of the device. Since heating occurs at the device, a common method of thermal management for electronics is to generate a bulk flow (for example with external fans) that brings cooler ambient air into contact with the hot device. A net heat transfer occurs between the hotter electronics and the cooler air, lowering the mean temperature of the electronics. In plasma-actuated heat transfer, EFA plasma actuators generate a secondary flow within the bulk flow, causing local fluid acceleration near the actuator, which may thin the thermal and velocity boundary layers near the electronics. The result is that cooler air is brought closer to the hot electronics, improving the forced-air cooling. Plasma-actuated heat transfer may be used as a thermal management solution for mobile devices, notebooks, ultra-mobile computers, and other electronics, or in other applications that use similar forced-air cooling configurations. Film cooling In engineering applications that experience significantly high-temperature environments, such as those encountered in gas turbine blades, hot structures must be cooled to mitigate thermal stresses and structural failure. In those applications, one of the most common techniques is film cooling, in which a secondary fluid such as air or another coolant is introduced onto a surface in a high-temperature environment. The secondary fluid provides a cooler, insulating layer (or film) along the surface that acts as a heat sink, lowering the mean temperature in the boundary layer. Since the secondary fluid is injected at discrete holes in the surface, a portion of it is blown off the surface (especially at high momentum ratios of injected air to cross flow), decreasing the effectiveness of the film cooling process. In plasma-actuated heat transfer, EFA plasma actuators are used to control the secondary fluid via a dynamic force that promotes attachment of the secondary fluid to the hot surface and improves the effectiveness of the film cooling. Modeling Various numerical models have been proposed to simulate plasma actuation in flow control. Listed by computational cost, from the most expensive to the cheapest, they are: Monte Carlo methods plus particle-in-cell; electrical modeling coupled with the Navier–Stokes equations; lumped-element models coupled with the Navier–Stokes equations; and surrogate models of plasma actuation (a reduced sketch of the cheapest, body-force class of model is given at the end of this entry). The most important potential of plasma actuators is their ability to bridge fluids and electricity: modern closed-loop control systems and information-theoretic methods can thereby be applied to the relatively classical aerodynamic sciences. A control-oriented model for plasma actuation in flow control has been proposed for a cavity flow control case. See also Ion thruster Serpentine geometry plasma actuator Wingless Electromagnetic Air Vehicle Dielectric barrier discharge References Plasma technology and applications Actuators Electrostatics
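As an addendum to the modeling discussion above: the cheapest Navier–Stokes-coupled approaches do not resolve the discharge physics at all, but represent the actuator as a phenomenological body force added to the momentum equation. The sketch below illustrates that idea in a deliberately reduced setting, a single strip of near-wall air accelerated by a Gaussian force distribution; the force amplitude, length scale and damping coefficient are illustrative assumptions, not a calibrated actuator model.

```python
import math

# Reduced sketch of a phenomenological plasma body-force model:
# du/dt = f(x)/rho - k*u, integrated with forward Euler along a 1-D
# strip of near-wall air. All numerical values are assumptions chosen
# only to produce a plausible wall-jet magnitude.

RHO = 1.225                 # air density, kg/m^3
K_DAMP = 50.0               # lumped wall-friction coefficient, 1/s (assumed)
F0 = 40.0                   # peak time-averaged body force, N/m^3 (assumed)
X0, SIGMA = 0.005, 0.003    # force centred ~5 mm downstream of the electrode edge

def body_force(x: float) -> float:
    """Gaussian surrogate for the time-averaged plasma body force."""
    return F0 * math.exp(-((x - X0) / SIGMA) ** 2)

xs = [i * 0.001 for i in range(31)]   # wall coordinate, 0 to 30 mm
u = [0.0] * len(xs)                   # induced velocity, initially at rest

dt, steps = 1e-4, 2000
for _ in range(steps):
    u = [ui + dt * (body_force(x) / RHO - K_DAMP * ui) for ui, x in zip(u, xs)]

print(f"steady induced velocity at the force peak: {max(u):.2f} m/s")
```

At steady state the peak velocity tends to F0/(RHO*K_DAMP), about 0.65 m/s with these assumed values; real DBD actuators induce wall jets of a few metres per second, which in a full simulation would come from a force field calibrated against experiments rather than an assumed Gaussian.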
Plasma actuator
Physics
2,616
2,857,994
https://en.wikipedia.org/wiki/Red%20Hill%20Underground%20Fuel%20Storage%20Facility
The Red Hill Bulk Fuel Storage Facility is a military fuel storage facility in Hawaii. Operated by the United States Navy, Red Hill supports U.S. military operations in the Pacific. On March 7, 2022, the Department of Defense announced the planned closure of the Red Hill facility, due to reduced military need and water contamination issues. Description Unlike any other facility in the United States, the Red Hill Underground Fuel Storage Facility can store up to 250 million gallons of fuel. It consists of 20 steel-lined underground storage tanks encased in concrete and built into cavities that were mined inside Red Hill. Each tank has a storage capacity of approximately 12.5 million gallons. The Red Hill tanks are connected to three gravity-fed pipelines that run 2.5 miles inside a tunnel to fueling piers at Pearl Harbor. Each of the 20 tanks at Red Hill measures 100 feet in diameter and is 250 feet in height. The Joint Task Force Red Hill was established on September 30, 2022, for the purpose of "safely and expeditiously" defueling the Red Hill Bulk Fuel Storage facility. Secretary of Defense Lloyd Austin said: "Defueling and closing Red Hill is the right thing to do – for our service members, our families, the people of Hawaii, the environment, and our nation." The commander of the Joint Task Force is Vice Adm. John F. G. Wade. Red Hill is located under a volcanic mountain ridge near Honolulu. It was declared a Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1995. History Before the United States entered World War II, the Roosevelt Administration became concerned about the vulnerability of the many above-ground fuel storage tanks at Pearl Harbor. In 1940 it decided to build a new underground facility that would store more fuel and be safe from an enemy aerial attack. The Red Hill site would provide unprecedented flow rates of fuel due to its elevation. In addition, the site's unique geological characteristics, including basalt rock, could support such large tanks. Federal and local government engineers and geologists, together with contractors, performed many surveys of the Koolau Range and eventually reached a consensus on Red Hill as the best choice because it is mostly homogeneous basalt. Their original plan was to build four large underground tanks. These would be horizontal, as all underground tanks were at the time. However, during the planning process, the engineers decided to build the tanks vertically because construction and excavation could occur simultaneously. This was possible because a vertical shaft drilled through the centerline of the tank would allow excavated rock to funnel down onto a series of conveyor belts in the lower access tunnel. Planners and engineers began the process by acquiring the real estate, staging equipment and materials, and building a work camp. The 3,900 workers worked around the clock, seven days a week, to complete the project. Construction started by excavating the vertical shafts for all 20 tanks concurrently with mining of the upper access and lower access tunnels. The tunnels were aligned directly in the middle of the parallel rows of shafts. Cross tunnels were then mined perpendicular to the main access tunnels to connect them to the shafts. For constructability and safety reasons (to prevent cave-ins), the upper domes needed to be built first, so the miners excavated individual ring tunnels around the circumference of each of the future upper dome bases.
They then scoped out the area of the domes and proceeded to construct the steel framing, steel liner and rebar. Workers continuously poured concrete that ranged in thickness from two feet at the crown to eight feet at the base. Once the concrete cured, it was pre-stressed by pressure grouting the area between the concrete and the basalt. Upon completion of the excavation, workers erected a steel tower in the center to the full height of 250 feet. The tower served to support concrete chutes, pipes, booms, and other equipment necessary to install the piping, concrete, and steel linings. Workers then began to erect the steel liner and rebar incrementally so that they could pour concrete in stages. Concrete was poured continually, and workers had to remove the wooden shoring as the concrete level rose. They injected pressurized grout to pre-stress the concrete by filling void space between the concrete and the gunite. When finished, the tanks were tested by slowly filling them with water while laborers in boats inspected the entire surface area of the steel liner. The Japanese attack on Pearl Harbor took place during the construction of Red Hill, and eight construction workers were killed by strafing aircraft. A portion of the site was used to bury hundreds of bodies of Navy personnel killed in the attack. Those not claimed by families were moved to the National Memorial Cemetery of the Pacific when it opened after the war. Environmental problems 2014 fuel spill In December 2013, contractors completed a three-year, scheduled, routine maintenance upgrade on Tank 5 at Red Hill. This work included cleaning, inspecting, and repairing anomalies. At the conclusion of the overhaul in January 2014, the Navy initiated a Return to Service evolution, refilling the tank with jet fuel (JP-8). During this process, the inventory management alarms sounded. Red Hill operators first assumed the alarm system was malfunctioning or producing false positives because the tank had recently been overhauled and should not have been leaking. Eventually, the U.S. Navy determined the alarms were not false and reported a 27,000-gallon jet fuel loss to the Hawaii Department of Health and the U.S. Environmental Protection Agency. Subsequent analysis indicated that the leak was the result of faulty work and poor quality control by the Navy's contractor, compounded by a lack of quality assurance oversight by the Navy, as well as operator error. As a consequence, the Navy, in accordance with the Administrative Order on Consent, documented significant improvements to the tank inspection, repair, and maintenance processes in its report submitted on October 11, 2016. Regarding the 27,000 gallons of fuel, the Navy's first concern was the possibility of fuel entering the drinking water supply. The nearest drinking water shaft (operated by the Navy) is 3,000 feet away and is one source that provides water to the military families at Joint Base Pearl Harbor–Hickam. The next closest drinking water shaft, Halawa shaft, which is located approximately one mile away, provides drinking water to the city of Honolulu. While all test results for contamination at the Navy drinking water shaft have come back well within safe drinking water standards, and results from the Halawa shaft have shown no jet-fuel-related contaminants, the Navy, the Environmental Protection Agency, and the Hawaii Department of Health conducted studies to evaluate groundwater conditions and any potential impacts to groundwater resources in the area.
Readings taken in and around Tank 5 indicated a spike in levels of hydrocarbons in soil vapor and groundwater. Drinking water monitoring results confirmed compliance with federal and state safety standards for drinking water both before and after the January 2014 release. Following the Tank 5 release in 2014, the Navy increased the testing frequency for drinking water and groundwater wells. The Navy now sends drinking water samples every quarter to certified independent laboratories that use EPA methods to analyze for contamination. The newest groundwater monitoring well was scheduled to come on line by the end of April 2017. The commander of Navy Region Hawaii and Naval Surface Group Middle Pacific, Admiral Fuller, noted that "there have been detects of trace amounts of fuel constituents near the Navy drinking water shaft." This is in the groundwater, not the drinking water. "We're talking 17 parts per billion, as opposed to zero. The misperception is that there's a spike, but the numbers were small enough that the testing facility had to estimate the amount because the numbers were so low. The 17 parts per billion is below the threshold of something we should be concerned about, which is 100 parts per billion." 2021 water contamination On November 20, 2021, another jet fuel leak at the Red Hill Bulk Fuel Storage Facility contaminated the Joint Base Pearl Harbor–Hickam water system, which supplies about 10,000 civilian and military households and schools. By December 2021, more than 1,000 military families stationed in Hawaii had been forced from their homes after apparent jet fuel contamination in the water supply at Joint Base Pearl Harbor–Hickam. About 2,700 homes in 10 communities were in the affected area. Residents had complained about foul-smelling tap water bearing an oily sheen, as well as symptoms such as nausea, diarrhea and intense headaches. Secretary of the Navy Carlos Del Toro told residents, “I deeply apologize to each and every one of you and to the people of Hawaii that this incident may have been destructive to your lives." It was not until May 2022 that the CDC published a study of self-reported symptoms that lasted more than a month. Creation of Joint Task Force Red Hill On September 30, 2022, Joint Task Force Red Hill was officially stood up for the purpose of safely and expeditiously defueling the facility through coordination with state and federal stakeholders, in order to set conditions for closure while continuing to rebuild trust with the State of Hawaii and the local community of Oahu. The Joint Task Force has created and participated in numerous community forums (such as the defueling information-sharing forum) to keep the public and key stakeholders informed of the repair progress required to defuel the facility. Groundwater Protection Plan Between 2006 and 2017, the Department of Defense spent more than $200 million on continual technological modernization and environmental testing at Red Hill. The facility monitors the fuel level in each tank to one-sixteenth of an inch and controls the movement of fuel throughout the facility. If a tank level decreases by as little as half an inch, alarms will sound in Red Hill's control room, which is continuously staffed (the fuel volume represented by such a level change is worked out below). Many entities, including the U.S. Navy, the University of Hawaii, and the U.S. Geological Survey, study the movement of groundwater in and around Red Hill.
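Because the tanks are 100 feet in diameter, the half-inch alarm threshold mentioned above corresponds to a substantial fuel volume. The back-of-the-envelope Python check below uses only the tank dimensions quoted in this article; the gallons-per-cubic-foot constant is the standard conversion factor.

```python
import math

# Fuel volume represented by a half-inch level change in one Red Hill
# tank (100 ft diameter), using only dimensions quoted in this article.
GALLONS_PER_CUBIC_FOOT = 7.48

radius_ft = 100 / 2
level_change_ft = 0.5 / 12            # half an inch, in feet

volume_ft3 = math.pi * radius_ft**2 * level_change_ft
print(f"{volume_ft3 * GALLONS_PER_CUBIC_FOOT:,.0f} gallons")   # ~2,400 gallons
```

So even the coarsest alarm condition represents roughly 2,400 gallons of fuel, and the 27,000-gallon 2014 loss corresponds to a level drop of only about five and a half inches in a single tank.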
Navy modeling to date indicates any fuel constituents in the groundwater are “not likely” to reach any of Oahu's drinking water sources. Timeline of actions taken by the U.S. Navy: 2005: quarterly groundwater monitoring and sampling at the Red Hill facility. 2007: an environmental investigation to collect additional data for groundwater and contaminant fate (how and where it disperses) and transport modeling, in order to provide better understanding and forecasting of potential impacts; and a contingency plan to protect the drinking water well located closest to Red Hill. 2008: the Groundwater Protection Plan (subsequently updated in 2009 and 2014) to mitigate risk associated with inadvertent fuel releases from the tanks. It included the following provisions: a tank inspection and maintenance program; a soil vapor monitoring program to support primary leak detection processes; a groundwater sampling and risk assessment reporting program; a long-term groundwater monitoring program that provides warning of potential risk to human health; responsibilities and response actions to be performed should groundwater data exceed Hawaii Department of Health environmental action levels; and periodic market surveys to evaluate the best available leak detection technologies for large, field-constructed fuel storage facilities. Administrative Order on Consent The Administrative Order on Consent is a binding legal agreement administered by the Environmental Protection Agency. The order mandates the corrective actions to be taken in the wake of an environmental violation. Representatives from the Environmental Protection Agency, Hawaii Department of Health, U.S. Navy, and Defense Logistics Agency signed the order for Red Hill in September 2015. It acknowledges the shared responsibility to protect Oahu's drinking water supply and maintain Red Hill as a strategically vital resource. The Environmental Protection Agency and Hawaii Department of Health negotiated the administrative order with the U.S. Navy and the Defense Logistics Agency in response to the January 2014 fuel release from the facility. The order required the Navy and Defense Logistics Agency to take actions, subject to approval by the Department of Health and the Environmental Protection Agency, to address fuel releases and implement infrastructure improvements to protect human health and the environment. The order requires the Navy to evaluate and improve procedures and practices to maintain the integrity of the tanks, evaluate and implement structural upgrades to the tanks, and use the best technology available to detect leaks. The order also requires the Navy to determine the overall risk the facility poses to the surrounding environment. The document prescribes actions the U.S. Navy must take, along with deadlines for completing each task. There are eight sections: overall project maintenance guidance; tank inspection, repair, and maintenance; potential tank upgrade review procedures; leak detection and testing; current and future corrosion and metal fatigue; investigating and remediating past fuel releases; development of future groundwater protection and evaluation; and a risk and vulnerability assessment of Red Hill. The Hawaii Department of Health hosted a second meeting to address the document in May 2016.
Participants included the Defense Logistics Agency and the Environmental Protection Agency, as well as subject matter experts from the University of Hawaii, the Honolulu Board of Water Supply and their consultants, the State Department of Land and Natural Resources, and the U.S. Geological Survey. The Environmental Protection Agency and Hawaii Department of Health hired a team of world-renowned subject matter experts to assess the entire Red Hill facility and went on record to say that all aspects—including infrastructure, security measures, and operating practices—currently meet or exceed industry standards. Investigation and remediation The Administrative Order on Consent required the Navy to develop a plan addressing the January 2014 fuel release from Tank 5, as well as plans to address any future releases of fuel. The Navy will evaluate various investigative techniques to determine which are most suitable for determining the extent of contamination in the ground around Red Hill. Each technique will be evaluated in terms of feasibility of implementation and effectiveness in detecting light non-aqueous phase liquid, which refers to fuel floating on top of the groundwater. The Navy will also investigate a number of remediation methods for use in the Red Hill area. The Navy will evaluate methods based on their feasibility of implementation, suitability for use in fractured basalt geology, and effectiveness in reducing contamination. In May 2016 the Navy submitted a scope of work to address the investigation and remediation of contamination, modeling, and the groundwater monitoring network. On September 15, 2016, the regulatory agencies disapproved this scope of work and requested that the Navy revise the document. The Navy submitted a revised scope of work on November 5, 2016, which was conditionally approved by the regulatory agencies on December 2, 2016, subject to changes detailed in the conditional approval letter. On January 5, 2017, the Navy submitted a new version of the Section 6 & 7 Scope of Work that addressed all of the issues noted in the conditional approval letter. Other reactions In 2020, the Honolulu Civil Beat reported that even though Navy and state health officials had known about the groundwater contamination since the late 1990s — and generated thousands of pages of reports on the topic — officials from Honolulu's Board of Water Supply said they were never informed about the problems until after the most recent spill. The Honolulu Board of Water Supply is responsible for purveying the county's drinking water. "I'm very concerned about the situation," said Ernest Lau, manager and chief engineer for the Honolulu Board of Water Supply. He stressed that the nearby aquifers are critical to Oahu's drinking water supply. The Environmental Protection Agency says the millions of gallons of fuel stored in the military facility under Red Hill are unlikely to reach the water supply. "It's very unlikely that that contamination, that mass we're seeing under the tanks, gets anywhere near the Board of Water Supply wells," Steven Linder, of the Environmental Protection Agency, told the state Health Department's Underground Fuel Tank Advisory Committee. The Sierra Club of Hawaii launched a "Fix it up or shut it down" petition and has also demanded that the Hawaii Department of Health, the Environmental Protection Agency, and the U.S.
Navy: install sufficient "sentinel" monitoring wells to guard public drinking water sources from possible contamination already in the aquifer; locate the fuel that has already leaked from the storage facility and clean it up; and install genuine leak prevention systems, not only leak detection systems, that will guarantee there will be no future leaks from this facility. According to military contracting announcements, Joint Base Pearl Harbor–Hickam has contracted with two corporations to address environmental problems at Red Hill: AECOM has been in charge of investigating and remediating releases, as well as protecting and evaluating groundwater, while APTIM (known as CB&I prior to 2017) has been in charge of cleaning, inspecting, and repairing fuel storage tanks. In January 2025, it was reported that the Navy and Defense Logistics Agency were fined $5,000 by the Environmental Protection Agency for not attending a Red Hill Community Representation Initiative meeting, as required by a consent order. References External links U.S. 117th Congress $250M+ appropriation Red Hill water contamination Red Hill weekly – The Museum of Flight Digital Collections Ka hoku o Red Hill – The Museum of Flight Digital Collections Installations of the United States Navy in Hawaii Historic American Engineering Record in Hawaii Historic Civil Engineering Landmarks Buildings and structures in Honolulu County, Hawaii
Red Hill Underground Fuel Storage Facility
Engineering
3,502
29,409
https://en.wikipedia.org/wiki/Spica
Spica is the brightest object in the constellation of Virgo and one of the 20 brightest stars in the night sky. It has the Bayer designation α Virginis, which is Latinised to Alpha Virginis and abbreviated Alpha Vir or α Vir. Analysis of its parallax shows that it is located 250 light-years from the Sun. It is a spectroscopic binary star and rotating ellipsoidal variable: a system whose two stars are so close together they are egg-shaped rather than spherical, and can only be separated by their spectra. The primary is a blue giant and a variable star of the Beta Cephei type. Spica, along with Arcturus and Denebola—or Regulus, depending on the source—forms the Spring Triangle asterism, and, by extension, is also part of the Great Diamond together with the star Cor Caroli. Nomenclature In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Spica for this star. It is now so entered in the IAU Catalog of Star Names. The name is derived from the Latin spīca virginis, "the virgin's ear of [wheat] grain". It was also anglicized as Virgin's Spike. α Virginis (Latinised to Alpha Virginis) is the system's Bayer designation. Johann Bayer cited the name Arista. Other traditional names are Azimech, from Arabic السماك الأعزل al-simāk al-ʼaʽzal 'the unarmed simāk' (the meaning of simāk is unknown; cf. Eta Boötis); Alarph, Arabic for 'the grape-gatherer' or 'gleaner'; and Sumbalet (Sombalet, Sembalet and variants), from Arabic سنبلة sunbulah, "ear of grain". In Chinese, 角宿 (Jiǎo Xiù), meaning Horn (asterism), refers to an asterism consisting of Spica and ζ Virginis. Consequently, the Chinese name for Spica is 角宿一 (Jiǎo Xiù yī, "the First Star of Horn"). In Hindu astronomy, Spica corresponds to the Nakshatra Chitrā. Observational history As one of the nearest massive binary star systems to the Sun, Spica has been the subject of many observational studies. Spica is believed to be the star that gave Hipparchus the data that led him to discover the precession of the equinoxes. A temple to Menat (an early Hathor) at Thebes was oriented with reference to Spica when it was built in 3200 BC, and, over time, precession slowly but noticeably changed Spica's location relative to the temple. Nicolaus Copernicus made many observations of Spica with his home-made triquetrum for his researches on precession. Observation Spica is 2.06 degrees from the ecliptic and can be occulted by the Moon and sometimes by planets. The last planetary occultation of Spica occurred when Venus passed in front of the star (as seen from Earth) on November 10, 1783. The next occultation will occur on September 2, 2197, when Venus again passes in front of Spica. The Sun passes a little more than 2° north of Spica around October 16 every year, and the star's heliacal rising occurs about two weeks later. Every 8 years, Venus passes Spica around the time of the star's heliacal rising, as in 2009 when it passed 3.5° north of the star on November 3. A method of finding Spica is to follow the arc of the handle of the Big Dipper (or Plough) to Arcturus, and then continue on the same angular distance to Spica. This can be recalled by the mnemonic phrase, "arc to Arcturus and spike to Spica."
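The 250-light-year figure quoted in the lead follows from the measured parallax through the standard relation d [pc] = 1/p [arcsec]. The short Python sketch below back-computes an illustrative parallax from that distance; the specific parallax value shown is derived for illustration, not quoted from a catalogue.

```python
# Distance from stellar parallax: d [parsec] = 1 / p [arcsec].
# The parallax below is back-computed from the ~250 ly distance quoted
# in the lead, purely for illustration (it is not a catalogue value).
LY_PER_PARSEC = 3.2616

parallax_arcsec = 0.01306     # about 13.06 milliarcseconds
distance_pc = 1 / parallax_arcsec
print(f"{distance_pc:.1f} pc = {distance_pc * LY_PER_PARSEC:.0f} light-years")
# -> 76.6 pc = 250 light-years
```

The reciprocal relation also explains why small parallax errors translate into proportionally larger distance errors for more remote stars.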
Stars that can set (those not circumpolar for the viewer) culminate at midnight when at opposition, meaning they can be viewed from dusk until dawn; this is noticeable anywhere away from polar regions experiencing midnight sun. In the current astronomical epoch, this applies to α Virginis on 12 April. Physical properties Spica is a close binary star whose components orbit each other every four days. They stay close enough together that they cannot be resolved as two stars through a telescope. The changes in the orbital motion of this pair result in a Doppler shift in the absorption lines of their respective spectra, making them a double-lined spectroscopic binary. Initially, the orbital parameters for this system were inferred using spectroscopic measurements. Between 1966 and 1970, the Narrabri Stellar Intensity Interferometer was used to observe the pair and to directly measure the orbital characteristics and the angular diameter of the primary; the angular size of the semi-major axis of the orbit was found to be only slightly larger than that diameter. Spica is a rotating ellipsoidal variable, which is a non-eclipsing close binary star system where the stars are mutually distorted through their gravitational interaction. This effect causes the apparent magnitude of the star system to vary by 0.03 over an interval that matches the orbital period. This slight dip in magnitude is barely noticeable visually. Both stars rotate faster than their mutual orbital period. This lack of synchronization and the high ellipticity of their orbit may indicate that this is a young star system. Over time, the mutual tidal interaction of the pair may lead to rotational synchronization and orbit circularization. Spica is a polarimetric variable, first discovered to be such in 2016. The majority of the polarimetric signal is the result of the reflection of the light from one star off the other (and vice versa). The two stars in Spica were the first ever to have their reflectivity (or geometric albedo) measured. The geometric albedos of Spica A and B are, respectively, 3.61 percent and 1.36 percent, values that are low compared to planets. The MK spectral classification of Spica is typically considered to be an early B-type main-sequence star. Individual spectral types for the two components are difficult to assign accurately, especially for the secondary due to the Struve–Sahade effect. The Bright Star Catalogue derived a spectral class of B2III-IV for the primary and B4-7V for the secondary, but later studies have given various different values. The primary star has a stellar classification of B2III-IV. The luminosity class matches the spectrum of a star that is midway between a subgiant and a giant star, and it is no longer a main-sequence star. The evolutionary stage has been calculated to be near or slightly past the end of the main-sequence phase. This is a massive star with more than 10 times the mass of the Sun and seven times its radius. The bolometric luminosity of the primary is about 20,500 times that of the Sun, and nine times the luminosity of its companion. The primary is one of the nearest stars to the Sun that has enough mass to end its life in a Type II supernova explosion. However, since Spica has only recently left the main sequence, this event is not likely to occur for several million years. The primary is classified as a Beta Cephei variable star that varies in brightness over a 0.1738-day period.
The spectrum shows a radial velocity variation with the same period, indicating that the surface of the star is regularly pulsating outward and then contracting. This star is rotating rapidly, with a rotational velocity of 199 km/s along the equator. The secondary member of this system is one of the few stars whose spectrum is affected by the Struve–Sahade effect. This is an anomalous change in the strength of the spectral lines over the course of an orbit, where the lines become weaker as the star is moving away from the observer. It may be caused by a strong stellar wind from the primary scattering the light from the secondary when it is receding. This star is smaller than the primary, with about 4 times the mass of the Sun and 3.6 times the Sun's radius. Its stellar classification is B4-7 V, making this a main-sequence star. In culture Both a rocket and a crew capsule designed and under development by Copenhagen Suborbitals, a crowd-funded space program, are named Spica. Spica aims to make Denmark the fourth country, after Russia, the US and China, to launch its own astronaut into space. Spica is one of the Behenian fixed stars. In his Three Books of Occult Philosophy, Cornelius Agrippa attributes Spica's kabbalistic symbol to Hermes Trismegistus. See also Lists of astronomical objects References B-type main-sequence stars B-type subgiants B-type giants Beta Cephei variables Binary stars Rotating ellipsoidal variables Virgo (constellation) Virginis, Alpha 5056 Durchmusterung objects Virginis, 067 116658 065474 Stars with proper names
Spica
Astronomy
1,918
44,249,205
https://en.wikipedia.org/wiki/Thomas%20W.%20Tucker
Thomas William Tucker (born July 15, 1945) is an American mathematician, the Charles Hetherington Professor of Mathematics at Colgate University, and an expert in the area of topological graph theory. Tucker did his undergraduate studies at Harvard University, graduating in 1967, and obtained his Ph.D. from Dartmouth College in 1971, under the supervision of Edward Martin Brown. Tucker's father, Albert W. Tucker, was also a professional mathematician, and his brother, Alan Tucker, and son, Thomas J. Tucker, are also professional mathematicians. References 20th-century American mathematicians 21st-century American mathematicians Harvard University alumni Dartmouth College alumni Colgate University faculty Graph theorists Living people 1945 births Mathematicians from New York (state)
Thomas W. Tucker
Mathematics
145
24,507,886
https://en.wikipedia.org/wiki/Gymnopilus%20spadiceus
Gymnopilus spadiceus is a species of mushroom in the family Hymenogastraceae. See also List of Gymnopilus species External links Gymnopilus spadiceus at Index Fungorum spadiceus Fungus species
Gymnopilus spadiceus
Biology
52
13,629,296
https://en.wikipedia.org/wiki/Pseudomonas%20sRNA%20P24
Pseudomonas sRNA P24 is an ncRNA that was predicted using bioinformatic tools in the genome of the opportunistic pathogen Pseudomonas aeruginosa, and its expression was verified by northern blot analysis. P24 is conserved across several Pseudomonas species and is consistently located between a hypothetical protein gene and a transcriptional regulator gene (AsnC family) in the genomes of these Pseudomonas species. P24 has a predicted Rho-independent terminator at the 3′ end, but the function of P24 is unknown (a toy illustration of this terminator's sequence signature is given at the end of this entry). See also Pseudomonas sRNA P9 Pseudomonas sRNA P11 Pseudomonas sRNA P15 Pseudomonas sRNA P16 Pseudomonas sRNA P26 References External links Non-coding RNA
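Rho-independent terminators like the one predicted at P24's 3′ end have a characteristic sequence signature: a GC-rich inverted repeat that folds into a hairpin, followed immediately by a run of uridines. The toy Python scanner below illustrates that signature only; it is a drastic simplification of the thermodynamics-based tools actually used for such predictions, and the example sequence is invented.

```python
# Toy detector for the Rho-independent terminator signature:
# an inverted repeat (hairpin stem) followed by a U-rich tract.
# This illustrates the motif only; it is not a real prediction tool.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(rna: str) -> str:
    """Reverse complement of an RNA string."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def has_terminator_signature(rna: str, stem=6, loop_max=8, u_run=6) -> bool:
    n = len(rna)
    for i in range(n - (2 * stem + u_run)):
        left = rna[i:i + stem]
        # try every allowed loop size between the two halves of the stem
        for loop in range(3, loop_max + 1):
            j = i + stem + loop
            if j + stem + u_run > n:
                break
            if rna[j:j + stem] == revcomp(left):       # stem halves pair up
                tail = rna[j + stem:j + stem + u_run]  # region after the hairpin
                if tail.count("U") >= u_run - 1:       # tolerate one non-U
                    return True
    return False

# invented example: GC-rich stem, 8-nt loop, matching reverse complement, U-tract
example = "AAGGCCGCGGAUAACCGCGGCCUUUUUUUUAA"
print(has_terminator_signature(example))   # -> True
```

Real predictors score the hairpin's folding free energy and the strength of the U-tract rather than using fixed string matches, but the underlying motif they look for is the one sketched here.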
Pseudomonas sRNA P24
Chemistry
158
5,597,523
https://en.wikipedia.org/wiki/Desert%20exploration
Desert exploration is the deliberate and scientific exploration of deserts, the arid regions of the earth. It is only incidentally concerned with the culture and livelihood of native desert dwellers. People have struggled to live in deserts and the surrounding semi-arid lands for millennia. Nomads have moved their flocks and herds to wherever grazing is available, and oases have provided opportunities for a more settled way of life. Many, such as the Bushmen in the Kalahari, the Aborigines in Australia and various Indigenous peoples of the Americas, were originally hunter-gatherers. Many trade routes have been forged across deserts, especially across the Sahara Desert, and traditionally were used by caravans of camels carrying salt, gold, ivory and other goods. Large numbers of slaves were also taken northwards across the Sahara. Today, some mineral extraction also takes place in deserts, and the uninterrupted sunlight gives potential for the capture of large quantities of solar energy. Many people think of deserts as consisting of extensive areas of billowing sand dunes because that is the way they are often depicted on TV and in films, but deserts do not always look like this. Across the world, around 20% of desert is sand, varying from only 2% in North America to 30% in Australia and over 45% in Central Asia. Where sand does occur, it is usually in large quantities in the form of sand sheets or extensive areas of dunes. The following sections list deserts around the world, and their explorers. Expeditions are listed by their leaders; details of other expedition members may be found via the links. Africa Kalahari Desert Sahara Desert The Romans organized expeditions to cross the Sahara desert by five different routes. All these expeditions were supported by legionaries and had a mainly commercial purpose. One of the main motives for the explorations was to obtain gold, using camels to transport it: through the western Sahara, toward the Niger River and present-day Timbuktu; through the Tibesti mountains, toward Lake Chad and present-day Nigeria; through the Nile river, toward present-day Uganda; along the western coast of Africa, toward the Canary Islands and the Cape Verde islands; and through the Red Sea, toward present-day Somalia and perhaps Tanzania. Michael Asher & Mariantonietta Peru – made the first recorded crossing of the Sahara from west to east, by camel and on foot, from Nouakchott, Mauritania, to Abu Simbel, Egypt, in 1986–87, a distance of 4,500 miles (ref: The Modern Explorers, Thames & Hudson, London, 2013). Asher lived for three years with the Kababish nomads in the Sudan. Eva Dickson – the first woman to cross the Sahara Desert by car. In 1932 she met Baron Bror von Blixen-Finecke, the former spouse of Karen Blixen, in Kenya, and they became lovers; the same year she took a bet and drove by car from Nairobi to Stockholm. Heinrich Barth – crossed the Sahara during his travels in Africa and the Middle East during 1845–1847. James Richardson – explored the Sahara and Sudan; he died in the notorious hamada (a stony desert) in the Western Sahara. Friedrich Gerhard Rohlfs – German geographer; first person to cross Africa from north to south. Named a place Regenfeld near Dakhla Oasis in southern Egypt after experiencing a rare occurrence of desert rain. Karl Alfred von Zittel – German palaeontologist who accompanied Rohlfs.
Henri Duveyrier – undertook a number of fossil-hunting explorations in the Sahara. Albert-Félix de Lapparent – explorer of the northern and western parts of the Sahara. Victor Loche – first identified the sand cat (Felis margarita) while exploring the northern Sahara. Joseph Ritchie – sent to find the course of the River Niger and the location of Timbuktu; he died in Murzuk. Helen Thayer – 20th-century walker and explorer. Michiel Franken – first man to ride a sidecar (BMW i8) through the Sahara. Asia Arabian Desert has been populated since prehistory. Rub' al Khali, or the Empty Quarter, in its remote center is one of the largest continuous bodies of sand in the world. It was recently explored by Europeans: Bertram Thomas in 1931 and St. John Philby in 1932 made the first documented journeys by Westerners. Wilfred Thesiger in 1946–50 crossed it several times and mapped large parts of it. In June 1950, a US Air Force expedition crossed the Rub' al Khali from Dhahran, Saudi Arabia, to central Yemen and back in trucks to collect specimens for the Smithsonian Institution and to test desert survival procedures. Youngho Nam (Korean) in 2013 crossed 1,000 km on foot from Salalah, Oman, to Liwa, United Arab Emirates. Taklamakan Desert Xuanzang, a monk, in the 7th century; the archaeologist Aurel Stein in the 20th century; Charles Blackmore in 1993. Gobi Desert The Gobi has a long history of human habitation, mostly by nomadic peoples. The Gobi Desert as a whole was known only very imperfectly to outsiders, as information was confined to observations by individual travelers engaging in their respective itineraries across the desert. Among the European and American explorers who contributed to the understanding of the Gobi, the most important were the following: Jean-François Gerbillon (1688–1698) Eberhard Isbrand Ides (1692–1694) Lorenz Lange (1727–1728 and 1736) Fuss and Alexander G. von Bunge (1830–1831) Hermann Fritsche (1868–1873) Pavlinov and Z.L. Matusovski (1870) Ney Elias (1872–1873) Nikolai Przhevalsky (1870–1872 and 1876–1877) Zosnovsky (1875) Mikhail V. Pevtsov (1878) Grigory Potanin (1877 and 1884–1886) Béla Széchenyi and Lajos Lóczy (1879–1880) The brothers Grigory (1889–1890) and M. Y. Grum-Grshimailo Pyotr Kuzmich Kozlov (1893–1894 and 1899–1900) Vsevolod I. Roborovsky (1894) Vladimir Obruchev (1894–1896) Karl Josef Futterer and Dr. Holderer (1896) Charles-Etienne Bonin (1896 and 1899) Sven Hedin (1897 and 1900–1901) K. Bogdanovich (1898) Ladyghin (1899–1900) and Katsnakov (1899–1900) Jacques Bouly de Lesdain and Martha Mailey, 1902 Roy Chapman Andrews from the American Museum of Natural History, who led several palaeontological expeditions to the Gobi Desert between 1922 and 1930 Zofia Kielan-Jaworowska, who led Polish-Mongolian palaeontological expeditions in the mid-1960s. Australia Central Australia – general term covering the arid regions in the Australian interior Jon Muir – made the first-ever unassisted crossing of the Australian desert on foot Edward John Eyre – expeditions to Lake Eyre and the Flinders Ranges in the 1830s.
Charles Sturt – expeditions from Adelaide in the 1840s John McDouall Stuart – accompanied Sturt 1844–1845; expeditions 1859 & 1860 (South Australia), 1861–1862 (south–north crossing of Australia) Ludwig Leichhardt – expeditions 1844–1845 Moreton Bay to Port Essington, 1846–1847 and 1848 west from Moreton Bay, where the entire expedition vanished Burke and Wills (Robert O'Hara Burke and William John Wills) – south–north crossing of Australia 1860–1861, during which both died on the return journey Augustus Gregory – searched for Leichhardt in 1858 Ernest Giles – expeditions 1872–1876 William Tietkens – expedition in 1889 Gibson Desert Ernest Giles – crossed the desert in 1874 Great Sandy Desert Great Victoria Desert John McDouall Stuart – skirted the desert in 1858 Ernest Giles – crossed the desert in 1875 Nullarbor Plain – desert plain on the western part of the south coast of Australia Edward John Eyre – expedition 1840–1841 Tanami Desert Simpson Desert and Sturt Stony Desert Charles Sturt – expedition 1844–1845 Cecil Madigan – expedition 1939 across the Simpson Desert Warren Bonython and Charles McCubbin – made the first north-to-south traverse on foot in 1973; they pulled a cart with supplies and used two air drops of water and supplies Louis-Philippe Loncke – unsupported expedition on foot across the Simpson Desert from north to south in 2008 Western Australia – a large and generally arid region Robert Austin – expedition 1854 Alexander Forrest – expeditions in the 1870s and 1880s John Forrest – expeditions in the 1870s and 1880s David Carnegie – expedition in 1896–97 Larry Wells – expedition in 1896–97 North America Before the European exploration of North America, tribes of Native Americans, such as the Mohave (in the Mojave desert), the Chemehuevi (in the Great Basin desert), and the Quechan (in the Colorado desert), were hunter-gatherers living in the California deserts. European explorers started exploring the deserts in the 18th century. Francisco Garcés, a Franciscan friar, was the first European explorer of the Colorado and Mojave deserts, in 1776. Garcés recorded information about the original inhabitants of the deserts. Later, as American interests expanded into California, American explorers started probing the California deserts. Jedediah Smith travelled through the Great Basin and Mojave deserts in 1826, finally reaching the San Gabriel Mission. John C. Frémont explored the Great Basin, proving that water did not flow out of it to the ocean, and provided maps that the forty-niners used to get to California. See also Saharan explorers References https://deserts.fr Deserts Exploration
Desert exploration
Biology
2,033
11,588,774
https://en.wikipedia.org/wiki/Coal-fired%20power%20station
A coal-fired power station or coal power plant is a thermal power station which burns coal to generate electricity. Worldwide there are about 2,500 coal-fired power stations, on average capable of generating a gigawatt each. They generate about a third of the world's electricity, but cause many illnesses and more early deaths per unit of energy produced than any other energy source, mainly from air pollution. World installed capacity doubled from 2000 to 2023 and increased 2% in 2023. A coal-fired power station is a type of fossil fuel power station. The coal is usually pulverized and then burned in a pulverized coal-fired boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines that turn generators. Thus chemical energy stored in coal is converted successively into thermal energy, mechanical energy and, finally, electrical energy. Coal-fired power stations are a significant source of greenhouse gas emissions, releasing approximately 12 billion tonnes of carbon dioxide annually, representing about one-fifth of global emissions. This makes them the largest single contributor to climate change. China accounts for over half of global coal-fired electricity generation. While the total number of operational coal plants began declining in 2020, due to retirements in Europe and the Americas, construction continues in Asia, primarily in China. The profitability of some plants is maintained by externalities, as the health and environmental costs of coal production and use are not fully reflected in electricity prices. However, newer plants face the risk of becoming stranded assets. The UN Secretary General has called for OECD nations to phase out coal-fired generation by 2030, and the rest of the world by 2040. History The first coal-fired power stations were built in the late 19th century and used reciprocating engines to generate direct current. Steam turbines allowed much larger plants to be built in the early 20th century, and alternating current was used to serve wider areas. Transport and delivery of coal Coal is delivered by highway truck, rail, barge, collier ship or coal slurry pipeline. Generating stations are sometimes built next to a mine, especially one mining coal, such as lignite, that is not valuable enough to transport long-distance, and so may receive coal by conveyor belt or by massive diesel-electric-drive trucks. A large coal train called a "unit train" may be 2 km long, containing 130–140 cars with around 100 tonnes of coal in each one, for a total load of over 10,000 tonnes. A large plant under full load requires at least one coal delivery this size every day. Plants may get as many as three to five trains a day, especially in "peak season" during the hottest summer or coldest winter months (depending on local climate) when power consumption is high. Modern unloaders use rotary dump devices, which eliminate problems with coal freezing in bottom-dump cars. The unloader includes a train positioner arm that pulls the entire train to position each car over a coal hopper. The dumper clamps an individual car against a platform that swivels the car upside down to dump the coal. Swiveling couplers enable the entire operation to occur while the cars are still coupled together. Unloading a unit train takes about three hours. Shorter trains may use railcars with an "air-dump", which relies on air pressure from the engine plus a "hot shoe" on each car.
This "hot shoe" when it comes into contact with a "hot rail" at the unloading trestle, shoots an electric charge through the air dump apparatus and causes the doors on the bottom of the car to open, dumping the coal through the opening in the trestle. Unloading one of these trains takes anywhere from an hour to an hour and a half. Older unloaders may still use manually operated bottom-dump rail cars and a "shaker" attached to dump the coal. A collier (cargo ship carrying coal) may hold of coal and takes several days to unload. Some colliers carry their own conveying equipment to unload their own bunkers; others depend on equipment at the plant. For transporting coal in calmer waters, such as rivers and lakes, flat-bottomed barges are often used. Barges are usually unpowered and must be moved by tugboats or towboats. For start up or auxiliary purposes, the plant may use fuel oil as well. Fuel oil can be delivered to plants by pipeline, tanker, tank car or truck. Oil is stored in vertical cylindrical steel tanks with capacities as high as . The heavier no. 5 "bunker" and no. 6 fuels are typically steam-heated before pumping in cold climates. Operation As a type of thermal power station, a coal-fired power station converts chemical energy stored in coal successively into thermal energy, mechanical energy and, finally, electrical energy. The coal is usually pulverized and then burned in a pulverized coal-fired boiler. The heat from the burning pulverized coal converts boiler water to steam, which is then used to spin turbines that turn generators. Compared to a thermal power station burning other fuel types, coal specific fuel processing and ash disposal is required. For units over about 200 MW capacity, redundancy of key components is provided by installing duplicates of the forced and induced draft fans, air preheaters, and fly ash collectors. On some units of about 60 MW, two boilers per unit may instead be provided. The hundred largest coal power stations range in size from 3,000 MW to 6,700 MW. Coal processing Coal is prepared for use by crushing the rough coal to pieces less than in size. The coal is then transported from the storage yard to in-plant storage silos by conveyor belts at rates up to 4,000 tonnes per hour. In plants that burn pulverized coal, silos feed coal to pulverizers (coal mills) that take the larger 5 cm pieces, grind them to the consistency of talcum powder, sort them, and mix them with primary combustion air, which transports the coal to the boiler furnace and preheats the coal in order to drive off excess moisture content. A 500 MWe plant may have six such pulverizers, five of which can supply coal to the furnace at 250 tonnes per hour under full load. In plants that do not burn pulverized coal, the larger 5 cm pieces may be directly fed into the silos which then feed either mechanical distributors that drop the coal on a traveling grate or the cyclone burners, a specific kind of combustor that can efficiently burn larger pieces of fuel. Boiler operation Plants designed for lignite (brown coal) are used in locations as varied as Germany, Victoria, Australia, and North Dakota. Lignite is a much younger form of coal than black coal. It has a lower energy density than black coal and requires a much larger furnace for equivalent heat output. Such coals may contain up to 70% water and ash, yielding lower furnace temperatures and requiring larger induced-draft fans. 
The firing systems also differ from those for black coal, typically drawing hot gas from the furnace-exit level and mixing it with the incoming coal in fan-type mills that inject the pulverized-coal and hot-gas mixture into the boiler. Ash disposal The ash is often stored in ash ponds. Although the use of ash ponds in combination with air pollution controls (such as wet scrubbers) decreases the amount of airborne pollutants, the structures pose serious health risks for the surrounding environment. Power utility companies have often built the ponds without liners, especially in the United States, and therefore chemicals in the ash can leach into groundwater and surface waters. Since the 1990s, power utilities in the U.S. have designed many of their new plants with dry ash handling systems. The dry ash is disposed of in landfills, which typically include liners and groundwater monitoring systems. Dry ash may also be recycled into products such as concrete, structural fills for road construction, and grout. Fly ash collection Fly ash is captured and removed from the flue gas by electrostatic precipitators or fabric bag filters (or sometimes both) located at the outlet of the furnace and before the induced draft fan. The fly ash is periodically removed from the collection hoppers below the precipitators or bag filters. Generally, the fly ash is pneumatically transported to storage silos and stored on site in ash ponds, or transported by trucks or railroad cars to landfills. Bottom ash collection and disposal At the bottom of the furnace, there is a hopper for collection of bottom ash. This hopper is kept filled with water to quench the ash and clinkers falling down from the furnace. Arrangements are included to crush the clinkers and convey the crushed clinkers and bottom ash to on-site ash ponds, or off-site to landfills. Ash extractors are used to discharge ash from municipal solid waste–fired boilers. Flexibility Effective energy policy, law and electricity markets are essential for grid flexibility. While the flexibility of some coal-fired power stations can be enhanced, they generally offer less dispatchable generation than most gas-fired power plants. A key aspect of flexibility is low minimum load; however, certain flexibility upgrades for coal plants may be more costly than deploying renewable energy sources with battery storage. Coal power generation As of 2020, coal was the largest single source of electricity generation, representing 34% of the total supply. Over half of global coal-fired generation in 2020 occurred in China, and coal provided approximately 60% of electricity in China, India and Indonesia. Globally in 2020, 2,059 GW of coal-fired capacity was operational, with 50 GW newly commissioned and 25 GW under construction (primarily in China), while 38 GW was retired (mainly in the US and EU). By 2023, global coal power capacity had increased to 2,130 GW, largely due to 47.4 GW of additions in China. While some nations pledged to transition away from coal power at the 2021 United Nations Climate Change Conference (COP26) through the Global Coal to Clean Power Transition Statement, significant challenges persist, especially in developing countries such as Indonesia and Vietnam. Efficiency There are four main types of coal-fired power station; in increasing order of efficiency, they are: subcritical, supercritical, ultra-supercritical, and cogeneration (also called combined heat and power or CHP). The practical significance of these efficiency classes for fuel consumption is illustrated in the sketch below.
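The efficiency class determines how much coal must be burned for a given electrical output, which ties directly back to the delivery logistics described earlier. The rough Python estimate below uses typical textbook values for thermal efficiency and coal energy density; these numbers are illustrative assumptions, not figures from this article.

```python
# Rough daily coal consumption of a 1 GW(e) plant at two thermal
# efficiencies. The energy density and efficiencies are typical
# illustrative values, not figures taken from this article.

COAL_MJ_PER_KG = 24.0        # mid-grade bituminous coal (assumed)
PLANT_MWE = 1000             # electrical output, MW
SECONDS_PER_DAY = 86_400

for label, eta in [("subcritical", 0.35), ("ultra-supercritical", 0.45)]:
    thermal_mj_per_day = PLANT_MWE / eta * SECONDS_PER_DAY  # MW * s = MJ
    tonnes_per_day = thermal_mj_per_day / COAL_MJ_PER_KG / 1000
    print(f"{label:>20}: {tonnes_per_day:,.0f} tonnes of coal per day")
```

At 35% efficiency this comes to roughly 10,000 tonnes per day, about one "unit train" delivery, consistent with the transport section above; raising efficiency to 45% saves on the order of 2,000 tonnes of coal per day for the same output, which is why the efficiency class matters so much economically.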
Subcritical is the least efficient type; however, recent innovations have allowed retrofits of older subcritical plants to meet or even exceed the efficiency of supercritical plants. Integrated gasification combined cycle design Integrated gasification combined cycle (IGCC) is a coal-based power generation technology that uses a high-pressure gasifier to convert coal (or other carbon-based fuels) into pressurized synthesis gas (syngas). The gasification process allows the use of a combined cycle generator, typically achieving higher efficiency. IGCC also facilitates removal of certain pollutants from the syngas before power generation. However, this technology is more expensive than conventional coal-fired power stations. Carbon dioxide emissions As coal is mainly carbon, coal-fired power stations have a high carbon intensity. On average, coal power stations emit far more greenhouse gas per unit of electricity generated than other energy sources (see also life-cycle greenhouse-gas emissions of energy sources). In 2018 coal burnt to generate electricity emitted over 10 Gt of the 34 Gt total from fuel combustion (the overall total greenhouse gas emissions for 2018 were 55 Gt CO2e). Mitigation Phase out From 2015 to 2020, although coal generation hardly fell in absolute terms, some of its market share was taken by wind and solar. In 2020 only China increased coal power generation, and globally it fell by 4%. However, in 2021, China declared that it would limit the growth of coal generation until 2025 and subsequently phase it down over time. The UN Secretary General has said that OECD countries should stop generating electricity from coal by 2030 and the rest of the world by 2040; otherwise, limiting global warming to 1.5 °C, a target of the Paris Agreement, would be extremely difficult. A 2024 analysis by The Economist concluded that financing phase-out would be cheaper than carbon offsets. However, phasing out in Asia can be a financial challenge as plants there are relatively young; in China the co-benefits of closing a plant vary greatly depending on its location. Vietnam is among the few coal-dependent, fast-developing countries that have fully pledged to phase out unabated coal power by the 2040s or as soon as possible thereafter. Ammonia co-firing Ammonia has a high hydrogen density and is easy to handle. It can serve both as a storable carrier of carbon-free energy and as a fuel in gas turbine power generation, and its use as a fuel can help significantly reduce CO₂ emissions. In Japan, the first major four-year test project was started in June 2021 to develop technology to enable co-firing a significant amount of ammonia at a large-scale commercial coal-fired plant. However, low-carbon hydrogen and ammonia are in demand for sustainable shipping, which, unlike electricity generation, has few other clean options. Conversion Some power stations are being converted to burn gas, biomass or waste, and conversion to thermal storage will be trialed in 2023. Carbon capture Retrofitting some existing coal-fired power stations with carbon capture and storage was being considered in China in 2020, but this is very expensive, reduces the energy output, and for some plants is not technically feasible. Pollution Coal-burning power plants kill many thousands of people every year with their emissions of particulates, microscopic air pollutants that enter human lungs and other organs and induce a variety of adverse medical conditions, including asthma, heart disease, low birth weight and cancers. In the U.S.
alone, such particulates, known as PM2.5 (particulates with a diameter of 2.5 μm or less), caused at least 460,000 excess deaths over two decades. In some countries pollution is somewhat controlled by best available techniques, for example those in the EU through its Industrial Emissions Directive. In the United States, coal-fired plants are governed at the national level by several air pollution regulations, including the Mercury and Air Toxics Standards (MATS) regulation, by effluent guidelines for water pollution, and by solid waste regulations under the Resource Conservation and Recovery Act (RCRA). Coal-fired power stations continue to pollute in lightly regulated countries such as the Western Balkans, India, Russia and South Africa, causing over a hundred thousand early deaths each year. Local air pollution Damage to health from particulates, sulfur dioxide and nitrogen oxide occurs mainly in Asia and is often due to burning low-quality coal, such as lignite, in plants lacking modern flue gas treatment. Early deaths due to air pollution have been estimated at 200 per GW-year; however, they may be higher around power plants where scrubbers are not used, or lower for plants far from cities. Evidence indicates that exposure to sulfur, sulfates, or PM2.5 from coal emissions may be associated with higher relative morbidity or mortality risk, per unit concentration, than exposure to other PM2.5 constituents or to PM2.5 from other sources. Water pollution Pollutants such as heavy metals leaching into groundwater from unlined coal ash storage ponds or landfills pollute water, possibly for decades or centuries. Pollutant discharges from ash ponds to rivers (or other surface water bodies) typically include arsenic, lead, mercury, selenium, chromium, and cadmium. Mercury emissions from coal-fired power plants can fall back onto the land and water in rain, and then be converted into methylmercury by bacteria. Through biomagnification, this mercury can then reach dangerously high levels in fish. More than half of atmospheric mercury comes from coal-fired power plants. Coal-fired power plants also emit sulfur dioxide and nitrogen oxides. These emissions lead to acid rain, which can restructure food webs and lead to the collapse of fish and invertebrate populations. Mitigation of local pollution Local pollution in China, which has by far the most coal-fired power stations, is forecast to fall further in the 2020s and 2030s, especially if small and low-efficiency plants are retired early. Economics Subsidies Coal power plants tend to serve as baseload technology, as they have high availability factors and are relatively difficult and expensive to ramp up and down. As such, they perform poorly in real-time energy markets, where they are unable to respond to changes in the locational marginal price. In the United States, this has been especially true in light of the advent of cheap natural gas, which can serve as a fuel in dispatchable power plants that take over the baseload role on the grid. In 2020 the coal industry was subsidized US$18 billion. Finance Coal financing is the financial support provided for coal-related projects, encompassing coal mining and coal-fired power stations. Its role in shaping the global energy landscape and its environmental and climate impacts have made it a subject of concern. The misalignment of coal financing with international climate objectives, particularly the Paris Agreement, has garnered attention.
The Paris Agreement aims to restrict global warming to well below 2 degrees Celsius and ideally limit it to 1.5 degrees Celsius. Achieving these goals necessitates a substantial reduction in coal-related activities. Studies, including finance-based accounting of coal emissions, have revealed a misalignment of coal financing with climate objectives. Major nations, such as China, Japan, and the U.S., have extended financial support to overseas coal power infrastructure. The largest backers are Chinese banks under the Belt and Road Initiative (BRI). This support has led to significant long-term climate and financial risks and harms the objectives of reducing CO2 emissions set by the Paris Agreement, of which China, the United States and Japan are signatories. A substantial portion of the associated emissions is anticipated to occur after 2019. Coal financing poses challenges to the global decarbonization of the power generation sector. As renewable energy technologies become cost-competitive, the economic viability of coal projects diminishes, making past fossil fuel investments less attractive. To address these concerns and align with climate goals, there is a growing call for stricter policies regarding overseas coal financing. Countries, including Japan and the U.S., have faced criticism for permitting the financing of certain coal projects. Strengthening these policies, potentially by banning public financing of coal projects entirely, would enhance their climate efforts and credibility. In addition, enhanced transparency in disclosing financing details is crucial for evaluating the environmental impacts of such projects. Capacity factors In India capacity factors are below 60%. In 2020 coal-fired power stations in the United States had an overall capacity factor of 40%; that is, they operated at a little less than half of their cumulative nameplate capacity. Stranded assets If global warming is limited to well below 2 °C as specified in the Paris Agreement, coal plant stranded assets of over US$500 billion are forecast by 2050, mostly in China. In 2020 think tank Carbon Tracker estimated that 39% of coal-fired plants were already more expensive than new renewables and storage and that 73% would be by 2025. About half of China's coal power companies are losing money, and old and small power plants "have no hope of making profits". India is keeping potential stranded assets operating by subsidizing them. Politics In May 2021, the G7 committed to end support for coal-fired power stations within the year. The G7's commitment to end coal support is significant, as its share of the world's coal capacity decreased from 23% (443 GW) in 2015 to 15% (310 GW) in 2023, reflecting a shift towards greener policies. This contrasts with China and India, where coal remains central to energy policy. As of 2023, the Group of Twenty (G20) holds 92% of the world's operating coal capacity (1,968 GW) and 88% of pre-construction capacity (336 GW). The energy policy of China regarding coal and coal in China are the most important factors regarding the future of coal-fired power stations, because the country has so many. According to one analysis, local officials overinvested in coal-fired power in the mid-2010s because central government guaranteed operating hours and set a high wholesale electricity price. In democracies coal power investment follows an environmental Kuznets curve. The energy policy of India regarding coal is an issue in the politics of India. 
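As an informal illustration of the capacity-factor figures quoted above (the plant in this sketch is hypothetical; the function simply expresses the standard definition):

# Capacity factor = energy actually generated / maximum possible energy
# over the same period. All values below are illustrative.

def capacity_factor(energy_mwh: float, nameplate_mw: float, hours: float) -> float:
    """Fraction of a plant's maximum possible output that was actually produced."""
    return energy_mwh / (nameplate_mw * hours)

# A hypothetical 1,000 MW coal unit generating 3.5 TWh over one year:
cf = capacity_factor(energy_mwh=3_500_000, nameplate_mw=1_000, hours=8_760)
print(f"Capacity factor: {cf:.0%}")  # about 40%, comparable to the 2020 US fleet figure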
Protests In the 21st century people have often protested against opencast mining, for example at Hambach Forest, Akbelen Forest and Ffos-y-fran; and at sites of proposed new plants, such as in Kenya and China. See also Powering Past Coal Alliance Global Energy Monitor Notes References External links Coal fired power plant Energy Education by the University of Calgary How a coal plant works video by the Tennessee Valley Authority How a coal plant works video by Ontario Power Generation Electricity from coal by the World Coal Association World's coal power plants mapped by Carbon Brief Coal-fired power by the International Energy Agency Economics of coal by Carbon Tracker Centre for Research on Energy and Clean Air Greenhouse gas emissions Subsidies
Coal-fired power station
Chemistry
4,364
441,228
https://en.wikipedia.org/wiki/Electrical%20synapse
An electrical synapse, or gap junction, is a mechanical and electrically conductive synapse, a functional junction between two neighboring neurons. The synapse is formed at a narrow gap between the pre- and postsynaptic neurons known as a gap junction. At gap junctions, such cells approach within about 3.8 nm of each other, a much shorter distance than the 20- to 40-nanometer distance that separates cells at a chemical synapse. In many animals, electrical synapse-based systems co-exist with chemical synapses. Compared to chemical synapses, electrical synapses conduct nerve impulses faster and provide continuous-time bidirectional coupling via linked cytoplasm. As such, the notion of signal directionality across these synapses is not always defined. They are known to produce synchronization of network activity in the brain and can create chaotic network-level dynamics. In situations where a signal direction can be defined, they lack gain (unlike chemical synapses): the signal in the postsynaptic neuron is the same as or smaller than that of the originating neuron. The fundamental basis of the electrical synapse is the connexons located in the gap junction between the two neurons. Electrical synapses are often found in neural systems that require the fastest possible response, such as defensive reflexes. An important characteristic of electrical synapses is that they are mostly bidirectional, allowing impulse transmission in either direction. Structure Each gap junction (sometimes called a nexus) contains numerous gap junction channels that cross the plasma membranes of both cells. With a lumen diameter of about 1.2 to 2.0 nm, the pore of a gap junction channel is wide enough to allow ions and even medium-size molecules like signaling molecules to flow from one cell to the next, thereby connecting the two cells' cytoplasm. Thus when the membrane potential of one cell changes, ions may move from one cell to the next, carrying positive charge with them and depolarizing the postsynaptic cell. Gap junction channels are composed of two hemichannels called connexons in vertebrates, one contributed by each cell at the synapse. Connexons are formed by six 7.5 nm long, four-pass membrane-spanning protein subunits called connexins, which may be identical or slightly different from one another. An autapse is an electrical (or chemical) synapse formed when the axon of one neuron synapses with its own dendrites. Effects Electrical synapses are found in many regions of the animal and human body. The simplicity of electrical synapses results in synapses that are fast, but more importantly the bidirectional coupling can produce very complex behaviors at the network level. Without the need for receptors to recognize chemical messengers, signal transmission at electrical synapses is more rapid than that which occurs across chemical synapses, the predominant kind of junctions between neurons. Chemical transmission exhibits synaptic delay; recordings from squid synapses and neuromuscular junctions of the frog reveal a delay of 0.5 to 4.0 milliseconds, whereas electrical transmission takes place with almost no delay. However, the difference in speed between chemical and electrical synapses is not as marked in mammals as it is in cold-blooded animals. Because electrical synapses do not involve neurotransmitters, electrical neurotransmission is less modifiable than chemical neurotransmission. The response always has the same sign as the source. 
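A minimal sketch of this behaviour, treating the gap junction and the postsynaptic membrane as a simple resistive divider (a steady-state approximation; all parameter values are illustrative, not measured):

# Voltage-divider approximation of an electrical synapse: the gap
# junction resistance (r_gj) and the postsynaptic membrane resistance
# (r_m) divide the presynaptic voltage deflection. Illustrative values.

def postsynaptic_response(v_pre_mv: float, r_gj_mohm: float, r_m_mohm: float) -> float:
    """Postsynaptic voltage deflection for a presynaptic deflection v_pre_mv."""
    return v_pre_mv * r_m_mohm / (r_gj_mohm + r_m_mohm)

for v_pre in (+10.0, -10.0):  # a depolarization and a hyperpolarization
    v_post = postsynaptic_response(v_pre, r_gj_mohm=100.0, r_m_mohm=50.0)
    print(f"pre {v_pre:+.1f} mV -> post {v_post:+.1f} mV")
# Output has the same sign as the source but smaller amplitude: no gain.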
For example, depolarization of the presynaptic membrane will always induce a depolarization in the postsynaptic membrane, and vice versa for hyperpolarization. The response in the postsynaptic neuron is in general smaller in amplitude than the source. The amount of attenuation of the signal is determined by the membrane resistance of the presynaptic and postsynaptic neurons. Long-term changes can be seen in electrical synapses. For example, changes in electrical synapses in the retina are seen during light and dark adaptations of the retina. The relative speed of electrical synapses also allows for many neurons to fire synchronously. Because of the speed of transmission, electrical synapses are found in escape mechanisms and other processes that require quick responses, such as the response to danger of the sea hare Aplysia, which quickly releases large quantities of ink to obscure enemies' vision. Normally, current carried by ions can travel in either direction through this type of synapse. However, sometimes the junctions are rectifying synapses, containing voltage-gated ion channels that open in response to depolarization of an axon's plasma membrane, and prevent current from traveling in one of the two directions. Some channels may also close in response to increased calcium (Ca2+) or hydrogen (H+) ion concentration, so as not to spread damage from one cell to another. There is also evidence of synaptic plasticity: the electrical connection can be strengthened or weakened as a result of activity, or during changes in the intracellular concentration of magnesium. Electrical synapses are present throughout the central nervous system and have been studied specifically in the neocortex, hippocampus, thalamic reticular nucleus, locus coeruleus, inferior olivary nucleus, mesencephalic nucleus of the trigeminal nerve, olfactory bulb, retina, and spinal cord of vertebrates. Other examples of functional gap junctions detected in vivo are in the striatum, cerebellum, and suprachiasmatic nucleus. History The model of a reticular network of directly interconnected cells was one of the early hypotheses for the organization of the nervous system at the beginning of the 20th century. This reticular hypothesis was considered to conflict directly with the now predominant neuron doctrine, a model in which isolated, individual neurons signal to each other chemically across synaptic gaps. These two models came into sharp contrast at the award ceremony for the 1906 Nobel Prize in Physiology or Medicine, in which the award went jointly to Camillo Golgi, a reticularist and widely recognized cell biologist, and Santiago Ramón y Cajal, the champion of the neuron doctrine and the father of modern neuroscience. Golgi delivered his Nobel lecture first, in part detailing evidence for a reticular model of the nervous system. Ramón y Cajal then took the podium and refuted Golgi's conclusions in his lecture. Modern understanding of the coexistence of chemical and electrical synapses, however, suggests that both models are physiologically significant; it could be said that the Nobel committee acted with great foresight in awarding the Prize jointly. In the first decades of the twentieth century there was substantial debate on whether the transmission of information between neurons was chemical or electrical, but chemical synaptic transmission was seen as the only answer after Otto Loewi's demonstration of chemical communication between neurons and heart muscle. 
Thus, the discovery of electrical communication was surprising. Electrical synapses were first demonstrated between escape-related giant neurons in crayfish in the late 1950s, and were later found in vertebrates. See also Junctional complex Cardiac muscle References Further reading Cell communication Electrophysiology Neural synapse
Electrical synapse
Biology
1,534
24,777,686
https://en.wikipedia.org/wiki/IT%20assistant
An Information Technology Assistant (commonly abbreviated to IT Assistant) is a person who works as an assistant in the IT business. Because the term "Information Technology" is commonly abbreviated "IT", job seekers and recruiters often use the abbreviated version of the title. Information Technology Assistant Distinguishing Characteristics: Receives supervision and direction from the Information Technology Manager and Network Specialist. May require flexible work schedules including early morning, weekend and evening hours. Qualifications Applicants should have: Bachelor or associate degree in Information Technology or a related field such as Computer Science, Electronics, Software, Information Systems, Telecommunication or Electrical Engineering, or a diploma/certificate in Information Technology, Computing Studies or a related discipline; Two years' post-qualification experience in help desk services; and Good interpersonal and communication skills. Relevant IT certifications include those from CompTIA, Cisco, Microsoft and W3Schools, depending on the project or information systems maintenance needs. Duties An Information Technology Assistant resolves user problems by communicating with end users and by translating technical problems from end users to technical support staff. Duties also include installing hardware, software, and peripherals; running diagnostic software; using mainframe and/or client-server software to provide system security access; and accommodating user requests for computer hardware and software. References Computer occupations
IT assistant
Technology
255
68,398,412
https://en.wikipedia.org/wiki/Billet%20%28wood%29
A billet was a specific and standardised form of wood fuel of significant importance in the traditional pre–fossil fuel economy. The term could also be applied to a cudgel. Nature and use Billets were especially designed for burning on open hearth fires, often in conjunction with spits. Measurements and cost In the 16th century a billet was standardised as three feet four inches in length and ten inches around. A century later, Anthony à Wood recorded a load of billet wood as costing 12s 6d, while extravagance consisted of "burning in one yeare threescore pounds worth of the choicest billet". Literary references The William Shakespeare play Measure for Measure contains the phrase "beat out my brains with billets". See also Bavin (wood) Fascine billet (heraldry) References External links Definition of billet Firewood Fuels Wood fuel
Billet (wood)
Chemistry
182
21,294,294
https://en.wikipedia.org/wiki/1%2C3-Dimethylurea
1,3-Dimethylurea (DMU) is a urea derivative used as an intermediate in organic synthesis. It is a colorless crystalline powder of low toxicity. Uses 1,3-Dimethylurea is used for the synthesis of caffeine, theophylline, pharmaceuticals, textile aids, herbicides and other products. In the textile processing industry, 1,3-dimethylurea is used as an intermediate for the production of formaldehyde-free easy-care finishing agents for textiles. World production of DMU is estimated to be less than 25,000 tons. References Ureas Methyl compounds
1,3-Dimethylurea
Chemistry
131
8,969,189
https://en.wikipedia.org/wiki/Electronic%20switch
In electronics, an electronic switch is a switch controlled by an active electronic component or device. Because they switch without using moving parts, they are called solid-state switches, which distinguishes them from mechanical switches. Electronic switches are considered binary devices because they dramatically change the conductivity of a path in an electrical circuit between two extremes when switching between their two states of on and off. History By metonymy, a variety of devices that conceptually connect or disconnect signals and communication paths between electrical devices are called "switches", analogous to the way mechanical switches connect and disconnect paths for electrons to flow between two conductors. The traditional relay is an electromechanical switch that uses an electromagnet controlled by a current to operate a mechanical switching mechanism. Other operating principles are also used (for instance, solid-state relays invented in 1971 control power circuits with no moving parts, instead using a semiconductor device to perform switching, often a silicon-controlled rectifier or triac). Early telephone systems used an electromagnetically operated Strowger switch to connect telephone callers; later telephone exchanges contain one or more electromechanical crossbar switches. Thus the term 'switched' is applied to telecommunications networks, and signifies a network that is circuit switched, providing dedicated circuits for communication between end nodes, such as the public switched telephone network. The term switch has since spread to a variety of digital active devices such as transistors and logic gates whose function is to change their output state between logic states or connect different signal lines. The common feature of all these usages is that they refer to devices that control a binary state of either on or off, closed or open, connected or not connected, conducting or not conducting, low impedance or high impedance. Types The diode can be treated as a switch that conducts significantly only when forward biased and is otherwise effectively disconnected (high impedance). Specific diode types that can change switching state quickly, such as the Schottky diode and the 1N4148, are called "switching diodes". Vacuum tubes can be used in high voltage applications. The transistor can be operated as a switch. The bipolar junction transistor (BJT) cutoff and saturation regions of operation can respectively be treated as an open and a closed switch. The most widely used electronic switch in digital circuits is the metal–oxide–semiconductor field-effect transistor (MOSFET). The analogue switch uses two MOSFET transistors in a transmission gate arrangement as a switch that works much like a relay, with some advantages and several limitations compared to an electromechanical relay. The power transistor(s) in a switching voltage regulator, such as a power supply unit, are used like a switch to alternately let power flow and block power from flowing. Hall switches are a type of Hall sensor that combine the analog Hall effect with threshold detection to produce a magnetically-operated switch. The opto-isolator uses light from an LED controlled by a current which is received by a phototransistor to switch a galvanically-isolated circuit. The insulated-gate bipolar transistor (IGBT) combines advantages of BJTs and power MOSFETs. A silicon controlled rectifier (SCR) can be used for high speed switching in power control applications. 
A TRIAC (TRIode AC), equivalent to two back-to-back SCRs, is a bidirectional switching device. DIAC stands for DIode AC switch. A gate turn-off thyristor (GTO) is a bipolar switching device. Electronic switches may also consist of complex configurations that are assisted by physical contact, for instance resistive or capacitive sensing touchscreens. Network switches reconfigure connections between different ports of computers in a computer network. Applications Electronic switches are used in all kinds of common and industrial applications. See also Relay References Electronic circuits
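As a rough illustration of the transistor-as-switch idea in the Types section above, the sketch below checks whether an NPN transistor driven through a base resistor is pushed into saturation (a closed switch) or left in cutoff (an open switch); all component values are hypothetical:

# Simplified NPN switch check: the transistor saturates (closed switch)
# when the available base current, multiplied by the current gain, exceeds
# the collector current the load requires. Illustrative values only.

V_IN, V_BE, V_CE_SAT = 5.0, 0.7, 0.2          # volts
V_CC, R_LOAD, R_BASE = 12.0, 240.0, 4_700.0   # volts, ohms, ohms
BETA = 100.0                                  # forward current gain

i_b = max(0.0, (V_IN - V_BE) / R_BASE)        # base current, amps
i_c_sat = (V_CC - V_CE_SAT) / R_LOAD          # collector current if saturated
state = "closed (saturation)" if i_b * BETA > i_c_sat else "open (cutoff)"
print(f"I_B = {i_b * 1e3:.2f} mA, I_C(sat) = {i_c_sat * 1e3:.1f} mA -> switch {state}")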
Electronic switch
Engineering
810
3,305,865
https://en.wikipedia.org/wiki/Louvre%20Pyramid
The Louvre Pyramid () is a large glass-and-metal structure designed by the Chinese-American architect I. M. Pei. The pyramid is in the main courtyard (Cour Napoléon) of the Louvre Palace in Paris, surrounded by three smaller pyramids. The large pyramid serves as the main entrance to the Louvre Museum, admitting light to the underground visitors' hall, while also allowing sight lines of the palace to visitors in the hall, and through access galleries to the different wings of the palace. Completed in 1989 as part of the broader Grand Louvre project, it has become a landmark of Paris. Design and construction The Grand Louvre project was announced in 1981 by François Mitterrand, the President of France. In 1983 Pei was selected as its architect. The pyramid structure was initially designed by Pei in late 1983 and presented to the public in early 1984. Constructed entirely with glass segments and metal poles, it reaches a height of . Its square base has sides of and a base surface area of . It consists of 603 rhombus-shaped and 70 triangular glass segments. The sides' angle relative to the base is 51.52 degrees, an angle similar to that of Ancient Egyptian pyramids. The pyramid structure was engineered by Nicolet Chartrand Knoll Ltd. of Montreal (pyramid structure / design consultant) and Rice Francis Ritchie of Paris (pyramid structure / construction phase). The pyramid and the underground lobby beneath it were created because of deficiencies in the Louvre's earlier layout, which could no longer handle the increasing number of visitors on an everyday basis. Visitors entering through the pyramid descend into the spacious lobby then ascend into the main Louvre buildings. For design historian Mark Pimlott, "I.M. Pei's plan distributes people effectively from the central concourse to myriad destinations within its vast subterranean network... the architectonic framework evokes, at gigantic scale, an ancient atrium of a Pompeiian villa; the treatment of the opening above, with its tracery of engineered castings and cables, evokes the atria of corporate office buildings; the busy movement of people from all directions suggests the concourses of rail termini or international airports." Several other museums and commercial centers have emulated this concept, most notably the Museum of Science and Industry in Chicago and Pioneer Place in Portland, designed by Kathie Stone Milano with ELS/Elbasani and Logan, Architects from Berkeley, California. The construction work on the pyramid base and underground lobby was carried out by the Vinci construction company. Aesthetic and political debate over its design The construction of the pyramid triggered many years of lively aesthetic and political debate. Criticisms tended to fall into four areas: The modernist style of the edifice being inconsistent with the classic French Renaissance style and history of the Louvre The pyramid being an unsuitable symbol of death from ancient Egypt The project being megalomaniacal folly imposed by then-President François Mitterrand Chinese-American architect I.M. Pei being insufficiently familiar with the culture of France to be entrusted with the task of updating the treasured Parisian landmark. Those criticizing the aesthetics said it was "sacrilegious" to tamper with the Louvre's majestic old French Renaissance architecture, and called the pyramid an anachronistic intrusion of an Egyptian death symbol in the middle of Paris. 
Meanwhile, political critics referred to the structure as Pharaoh François' Pyramid. Writing in The Nation, Alexander Cockburn ridiculed Pei's rationale that the structure would help visitors locate the entrance: "What Pei really meant was that in our unfolding fin de siècle, public institutions need an area (...) where rich people can assemble for cocktail parties, banquets and kindred functions, to which the word 'charity' is attached to satisfy bodies such as the IRS." Some still feel the modernism of the edifice is out of place. Number of panes The pyramid has a total of 673 panes, as confirmed by the Louvre, 603 rhombi and 70 triangles. Three sides have 171 panes each: 18 triangular ones on the edges and 153 rhombic ones arranged in a triangle; the fourth side, with the entrance, has nine fewer rhombic and two fewer triangular ones, giving 160. Some commentators report that Pei's office counts 689. However, a longstanding rumor claims that the pyramid includes exactly 666 panes, "the number of the beast", often associated with Satan. The story of the 666 panes originated in the 1980s, when the official brochure published during construction cited this number twice. The number 666 was also mentioned in various newspapers. One writer on esoteric architecture asserted that "the pyramid is dedicated to a power described as the Beast in the Book of Revelation.... The entire structure is based on the number six." The myth resurfaced in 2003, with the protagonist of the best-selling novel The Da Vinci Code saying: "this pyramid, at President Mitterrand's explicit demand, had been constructed of exactly 666 panes of glass — a bizarre request that had always been a hot topic among conspiracy buffs who claimed 666 was the number of Satan." In fact, according to Pei's office, Mitterrand never specified the number of panes. Inverted Pyramid The Inverted Pyramid (Pyramide Inversée) is a skylight in the Carrousel du Louvre shopping mall in front of the Louvre Museum. It looks like an upside-down and smaller version of the Louvre Pyramid. Renovation Designed for a museum that then attracted 4.5 million visitors a year, the pyramid eventually proved inadequate, as the Louvre's attendance had doubled by 2014. Over the next three years, the layout of the foyer area in the Cour Napoleon beneath the glass pyramid underwent a thorough redesign, including better access to the pyramid and the Passage Richelieu. Pei's other glass pyramids Prior to designing the Louvre Pyramid, Pei had included smaller glass pyramids in his design for the National Gallery of Art's East Building in Washington, D.C., completed in 1978. Multiple small glass pyramids, along with a fountain, were built in the plaza between the East Building and the pre-existing West Building, acting as a unifying element between the two properties and serving as skylights for the underground atrium that connected the buildings. The same year the Louvre Pyramid opened, Pei included large glass pyramids on the roofs of the IBM Somers Office Complex he designed in Westchester County, New York. Pei returned again to the glass pyramid concept at the Rock and Roll Hall of Fame in Cleveland, Ohio, opened in 1995. Precursor at the Louvre In 1839, according to one newspaper account, in ceremonies commemorating the July Revolution of 1830, "The tombs of the Louvre were covered with black hangings and adorned with tricolored flags. In front and in the middle was erected an expiatory monument of a pyramidal shape, and surmounted by a funeral vase." 
According to the memoirs of Maximilien de Béthune, Duke of Sully, a 20-foot-high pyramid, which stood opposite the Louvre with only a street between them, was torn down in 1605 because the Jesuits objected to an inscription on a pillar. See also Yann Weymouth References External links Great buildings Louvre Buildings and structures completed in 1989 Buildings and structures in Paris Louvre Palace Art gallery districts I. M. Pei buildings Urban legends Lattice shell structures Pyramids in France 1989 establishments in France Architectural controversies 20th-century architecture in France
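As a quick cross-check of the pane arithmetic in the Number of panes section above, using only the per-side counts quoted there:

# Pane counts for the Louvre Pyramid, from the figures in this article:
# three full sides, plus an entrance side with 9 fewer rhombi and
# 2 fewer triangles.
rhombi_full_side, triangles_full_side = 153, 18
rhombi_entrance, triangles_entrance = 153 - 9, 18 - 2

rhombi = 3 * rhombi_full_side + rhombi_entrance            # 603
triangles = 3 * triangles_full_side + triangles_entrance   # 70
print(rhombi, triangles, rhombi + triangles)               # 603 70 673, not 666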
Louvre Pyramid
Engineering
1,542
1,040,920
https://en.wikipedia.org/wiki/Sigma%20bond
In chemistry, sigma bonds (σ bonds) or sigma overlap are the strongest type of covalent chemical bond. They are formed by head-on overlapping between atomic orbitals along the internuclear axis. Sigma bonding is most simply defined for diatomic molecules using the language and tools of symmetry groups. In this formal approach, a σ-bond is symmetrical with respect to rotation about the bond axis. By this definition, common forms of sigma bonds are s+s, pz+pz, s+pz and dz2+dz2 (where z is defined as the axis of the bond or the internuclear axis). Quantum theory also indicates that molecular orbitals (MO) of identical symmetry actually mix or hybridize. As a practical consequence of this mixing in diatomic molecules, the wavefunctions of the s+s and pz+pz molecular orbitals become blended. The extent of this mixing (or hybridization or blending) depends on the relative energies of the MOs of like symmetry. For homodiatomics (homonuclear diatomic molecules), bonding σ orbitals have no nodal planes at which the wavefunction is zero, either between the bonded atoms or passing through the bonded atoms. The corresponding antibonding, or σ* orbital, is defined by the presence of one nodal plane between the two bonded atoms. Sigma bonds are the strongest type of covalent bonds due to the direct overlap of orbitals, and the electrons in these bonds are sometimes referred to as sigma electrons. The symbol σ is the Greek letter sigma. When viewed down the bond axis, a σ MO has circular symmetry, hence resembling the similarly sounding "s" atomic orbital. Typically, a single bond is a sigma bond while a multiple bond is composed of one sigma bond together with pi or other bonds. A double bond has one sigma plus one pi bond, and a triple bond has one sigma plus two pi bonds. Polyatomic molecules Sigma bonds are obtained by head-on overlapping of atomic orbitals. The concept of sigma bonding is extended to describe bonding interactions involving overlap of a single lobe of one orbital with a single lobe of another. For example, propane is described as consisting of ten sigma bonds, one each for the two C−C bonds and one each for the eight C−H bonds. Multiple-bonded complexes Transition metal complexes that feature multiple bonds, such as the dihydrogen complex, have sigma bonds between the multiple bonded atoms. These sigma bonds can be supplemented with other bonding interactions, such as π-back donation, as in the case of W(CO)3(PCy3)2(H2), and even δ-bonds, as in the case of chromium(II) acetate. Organic molecules Organic molecules are often cyclic compounds containing one or more rings, such as benzene, and are often made up of many sigma bonds along with pi bonds. According to the sigma bond rule, the number of sigma bonds in a molecule is equivalent to the number of atoms plus the number of rings minus one. Nσ = Natoms + Nrings − 1 This rule is a special-case application of the Euler characteristic of the graph which represents the molecule. A molecule with no rings can be represented as a tree with a number of bonds equal to the number of atoms minus one (as in dihydrogen, H2, with only one sigma bond, or ammonia, NH3, with 3 sigma bonds). There is no more than 1 sigma bond between any two atoms. Molecules with rings have additional sigma bonds; for example, a benzene ring has 6 C−C sigma bonds within the ring for its 6 carbon atoms. The anthracene molecule, C14H10, has three rings, so the rule gives the number of sigma bonds as 24 + 3 − 1 = 26. 
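The rule is easy to check mechanically; a short sketch applying it to the molecules mentioned in this section:

# Sigma-bond rule: N_sigma = N_atoms + N_rings - 1
# (a special case of the Euler characteristic of the molecular graph).
def sigma_bonds(n_atoms: int, n_rings: int) -> int:
    return n_atoms + n_rings - 1

examples = {
    "dihydrogen H2": (2, 0),        # expect 1
    "ammonia NH3": (4, 0),          # expect 3
    "propane C3H8": (11, 0),        # expect 10
    "benzene C6H6": (12, 1),        # expect 12
    "anthracene C14H10": (24, 3),   # expect 26
}
for name, (atoms, rings) in examples.items():
    print(f"{name}: {sigma_bonds(atoms, rings)} sigma bonds")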
For anthracene, there are 16 C−C sigma bonds and 10 C−H bonds. This rule fails in the case of molecules which, when drawn flat on paper, have a different number of rings than the molecule actually has; for example, Buckminsterfullerene, C60, has 32 rings, 60 atoms, and 90 sigma bonds, one for each pair of bonded atoms; however, 60 + 32 − 1 = 91, not 90. This is because the sigma rule is a special case of the Euler characteristic, where each ring is considered a face, each sigma bond an edge, and each atom a vertex. Ordinarily, one extra face is assigned to the space not inside any ring, but when Buckminsterfullerene is drawn flat without any crossings, one of the rings makes up the outer pentagon; the inside of that ring is the outside of the graph. The rule fails further when considering other shapes: toroidal fullerenes obey the rule that the number of sigma bonds in a molecule is exactly the number of atoms plus the number of rings, as do nanotubes, which, when drawn flat as if looking through one from the end, have a face in the middle, corresponding to the far end of the nanotube, which is not a ring, and a face corresponding to the outside. See also Bond strength Molecular geometry References External links IUPAC-definition Chemical bonding
Sigma bond
Physics,Chemistry,Materials_science
1,085
11,712,041
https://en.wikipedia.org/wiki/Wildlife%20of%20Senegal
The wildlife of Senegal consists of the flora and fauna of this nation in West Africa. Senegal has a long Atlantic coastline and a range of habitat types, with a corresponding diversity of plants and animals. Senegal has 188 species of mammals and 674 species of bird. Geography Senegal is bounded by the Atlantic Ocean to the west, Mauritania to the north, Mali to the east, and Guinea and Guinea-Bissau to the south. It has a long internal border with The Gambia, which lies on either side of the Gambia River but is otherwise surrounded by Senegal. The four major rivers, the Senegal River, the Saloum River, the Gambia River and the Casamance River, drain westwards into the Atlantic Ocean. The Lac de Guiers is a large freshwater lake in the north of the country while Lake Retba, near Dakar, is saline. The northern half of the country has an arid or semi-arid climate and is largely desert while south of the Gambia River the rainfall is higher and the terrain consists of savannah grassland and forest. Much of the country is fairly flat and below the contour, but there are some low, rolling hills in the southeast, the foothills of the Fouta Djallon in Guinea. The northern half of the coast is sandy and flat, whereas south of Dakar it is muddy and swampy. The northern part of the country has a semi-arid climate, with precipitation increasing substantially further south to exceed in some areas. Winds blow from the southwest during the rainy season from May to November, and from the northeast during the rest of the year, resulting in well-defined humid and dry seasons. Dakar's maximum temperatures average in the wet season and in the dry season. Biodiversity With four main ecosystems (forest, savanna grassland, freshwater, marine and coastal), Senegal has a wide diversity of plants and animals. However, increases in human activities and changes in weather patterns, including increased deficits in rainfall, are degrading natural habitats. This is particularly noticeable with regard to forests, which in the five years to 2010 were being lost at the rate of per year. Flora About 5,213 species, subspecies and varieties of vascular plants had been recorded in Senegal by the end of 2018, of which 515 were trees or woody plants. The Niokolo-Koba National Park is a World Heritage Site and large natural protected area in southeastern Senegal near the Guinea-Bissau border. The park is typical of the woodland savannah of the country. About thirty species of tree are found here, mainly from the families Fabaceae, Combretaceae and Anacardiaceae, and about one thousand species of vascular plant. The drier parts are dominated by the African kino tree and Combretum glutinosum, while the gallery forests beside rivers and streams (many of which dry up seasonally) are largely formed from Erythrophleum guineense and Pseudospondias microcarpa, interspersed with palms and bamboo clumps. Depressions in the ground fill with water in the rainy season and support a wide range of aquatic vegetation. In the coastal zone of Niayes, a coastal strip of land between Dakar and Saint Louis where a line of lakes lie behind the coastal sand dunes, the predominant vegetation is the African oil-palm, along with the African mesquite and Cape fig. Mammals Many of the larger animals of Senegal that used to have a widespread distribution have suffered from loss of habitat, persecution by farmers, and hunting for bushmeat, and are now largely restricted to the national park. 
The Guinea baboon is one of these, as are the Senegal hartebeest, the western hartebeest, the scimitar oryx, the roan antelope and several species of gazelle. Habitat degradation has caused populations of western red colobus, elephants, lions, and many other species to decrease heavily. The western subspecies of the giant eland is critically endangered, the only remaining known population being in the Niokolo-Koba National Park; the rapid decline in numbers of this antelope has been attributed to poaching. Other mammals found in the country include the green monkey, the Guinean gerbil and the Senegal one-striped grass mouse. Birds Some 674 species of bird had been recorded in Senegal by April 2019. Some of the more spectacular include the red-billed tropicbird, the Arabian bustard, the Egyptian plover, the golden nightjar, the red-throated bee-eater, the chestnut-bellied starling, the cricket warbler, the Kordofan lark and the Sudan golden sparrow. The Djoudj National Bird Sanctuary on the south side of the Senegal River Delta is an important site for migrating and overwintering waterfowl. About three million migratory birds spend the winter here. Some birds that nest and breed in the delta include the great white pelican, lesser flamingo, the marbled duck, African spoonbill, purple heron, black crowned crane, and others. Further south is the Saloum Delta National Park which lies on the East Atlantic Flyway, along which about 90 million birds migrate annually. Some birds that breed or winter in the park include the royal tern, the greater flamingo, the Eurasian spoonbill, the curlew sandpiper, the ruddy turnstone and the little stint. Another important wetland area is the Niayes, which is an important centre for waterbirds and raptors; large numbers of black kites have been recorded here. Fish Some 244 species of marine fish had been recorded off the coast of Senegal by April 2019. Some freshwater species of fish have been impacted by the creation of dams in the Senegal River Delta and the proliferation of some plants such as the southern cattail. Molluscs Insects List of butterflies of Senegal List of moths of Senegal References External links Biota of Senegal Senegal
Wildlife of Senegal
Biology
1,208
26,902,158
https://en.wikipedia.org/wiki/Kerr/CFT%20correspondence
The Kerr/CFT correspondence is an extension of the AdS/CFT correspondence or gauge-gravity duality to rotating black holes (which are described by the Kerr metric). The duality works for black holes whose near-horizon geometry can be expressed as a product of AdS3 and a single compact coordinate. The AdS/CFT duality then maps this to a two-dimensional conformal field theory (the compact coordinate being analogous to the S5 factor in Maldacena's original work), from which the correct Bekenstein entropy can then be deduced. The original form of the duality applies to black holes with the maximum value of angular momentum, but it has now been speculatively extended to all lesser values. See also AdS black hole References External links Motl, Luboš (2010). Kerr black hole: the CFT entropy works for all M,J String theory Conformal field theory Black holes Thermodynamics
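A sketch of the entropy matching in the extremal case, as in the original Kerr/CFT proposal (units with G = ħ = c = 1; c_L is the left-moving central charge and T_L the Frolov–Thorne temperature):

% Entropy matching for extremal Kerr (conventions G = \hbar = c = 1)
c_L = 12J, \qquad T_L = \frac{1}{2\pi}, \qquad
S_{\mathrm{Cardy}} = \frac{\pi^{2}}{3}\, c_L T_L = 2\pi J
= \frac{A_{\mathrm{horizon}}}{4} = S_{\mathrm{Bekenstein\text{-}Hawking}}

so the Cardy formula of the two-dimensional conformal field theory reproduces the Bekenstein–Hawking entropy of the extremal Kerr black hole.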
Kerr/CFT correspondence
Physics,Chemistry,Astronomy,Mathematics
196
34,365,252
https://en.wikipedia.org/wiki/Estimated%20ultimate%20recovery
Estimated ultimate recovery or Expected ultimate recovery (EUR) of a resource is the sum of the proven reserves at a specific time and the cumulative production up to that point. References Petroleum production Petroleum economics
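Expressed as a formula (the notation here is illustrative rather than taken from a standard reference), with R_p the proven reserves at time t and q the production rate:

\mathrm{EUR}(t) = R_{p}(t) + Q_{\mathrm{cum}}(t), \qquad
Q_{\mathrm{cum}}(t) = \int_{0}^{t} q(\tau)\,\mathrm{d}\tau

For example, a field with 150 million barrels of proven reserves that has already produced 50 million barrels has an EUR of 200 million barrels at that date.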
Estimated ultimate recovery
Chemistry
41
1,454,791
https://en.wikipedia.org/wiki/Gene%20Ontology
The Gene Ontology (GO) is a major bioinformatics initiative to unify the representation of gene and gene product attributes across all species. More specifically, the project aims to: 1) maintain and develop its controlled vocabulary of gene and gene product attributes; 2) annotate genes and gene products, and assimilate and disseminate annotation data; and 3) provide tools for easy access to all aspects of the data provided by the project, and to enable functional interpretation of experimental data using the GO, for example via enrichment analysis. GO is part of a larger classification effort, the Open Biomedical Ontologies, being one of the Initial Candidate Members of the OBO Foundry. Whereas gene nomenclature focuses on gene and gene products, the Gene Ontology focuses on the function of the genes and gene products. The GO also extends the effort by using a markup language to make the data (not only of the genes and their products but also of curated attributes) machine readable, and to do so in a way that is unified across all species (whereas gene nomenclature conventions vary by biological taxon). History The Gene Ontology was originally constructed in 1998 by a consortium of researchers studying the genomes of three model organisms: Drosophila melanogaster (fruit fly), Mus musculus (mouse), and Saccharomyces cerevisiae (brewer's or baker's yeast). Many other Model Organism Databases have joined the Gene Ontology Consortium, contributing not only to annotation data, but also to the development of ontologies and tools to view and apply the data. Many major plant, animal, and microorganism databases make a contribution towards this project. As of July 2019, the GO contains 44,945 terms; there are 6,408,283 annotations to 4,467 different biological organisms. There is a significant body of literature on the development and use of the GO, and it has become a standard tool in the bioinformatics arsenal. The project's objectives have three aspects: building the gene ontology, assigning ontology terms to genes and gene products, and developing software and databases for the first two objectives. Several analyses of the Gene Ontology using formal, domain-independent properties of classes (the metaproperties) are also starting to appear. For instance, there is now an ontological analysis of biological ontologies. Terms and ontology From a practical view, an ontology is a representation of something we know about. "Ontologies" consist of representations of things that are detectable or directly observable and the relationships between those things. There is no universal standard terminology in biology and related domains, and term usage may be specific to a species, research area, or even a particular research group. This makes communication and sharing of data more difficult. The Gene Ontology project provides an ontology of defined terms representing gene product properties. The ontology covers three domains: cellular component, the parts of a cell or its extracellular environment; molecular function, the elemental activities of a gene product at the molecular level, such as binding or catalysis; biological process, operations or sets of molecular events with a defined beginning and end, pertinent to the functioning of integrated living units: cells, tissues, organs, and organisms. Each GO term within the ontology has a term name, which may be a word or string of words; a unique alphanumeric identifier; a definition with cited sources; and an ontology indicating the domain to which it belongs. 
Terms may also have synonyms, which are classed as being exactly equivalent to the term name, broader, narrower, or related; references to equivalent concepts in other databases; and comments on term meaning or usage. The GO ontology is structured as a directed acyclic graph, and each term has defined relationships to one or more other terms in the same domain, and sometimes to other domains. The GO vocabulary is designed to be species-neutral and includes terms applicable to prokaryotes and eukaryotes, single and multicellular organisms. GO is not static, and additions, corrections, and alterations are suggested by and solicited from members of the research and annotation communities, as well as by those directly involved in the GO project. For example, an annotator may request a specific term to represent a metabolic pathway, or a section of the ontology may be revised with the help of community experts. Suggested edits are reviewed by the ontology editors and implemented where appropriate. The GO ontology and annotation files are freely available from the GO website in a number of formats or can be accessed online using the GO browser AmiGO. The Gene Ontology project also provides downloadable mappings of its terms to other classification systems. Example term id: GO:0000016 name: lactase activity ontology: molecular_function def: "Catalysis of the reaction: lactose + H2O = D-glucose + D-galactose." [EC:3.2.1.108] synonym: "lactase-phlorizin hydrolase activity" BROAD [EC:3.2.1.108] synonym: "lactose galactohydrolase activity" EXACT [EC:3.2.1.108] xref: EC:3.2.1.108 xref: MetaCyc:LACTASE-RXN xref: Reactome:20536 is_a: GO:0004553 ! hydrolase activity, hydrolyzing O-glycosyl compounds Data source: Annotation Genome annotation encompasses the practice of capturing data about a gene product, and GO annotations use terms from the GO to do so. Annotations from GO curators are integrated and disseminated on the GO website, where they can be downloaded directly or viewed online using AmiGO. In addition to the gene product identifier and the relevant GO term, GO annotations have at least the following data: The reference used to make the annotation (e.g. a journal article); An evidence code denoting the type of evidence upon which the annotation is based; The date and the creator of the annotation Supporting information, depending on the GO term and evidence used, and supplementary information, such as the conditions the function is observed under, may also be included in a GO annotation. The evidence code comes from a controlled vocabulary of codes, the Evidence Code Ontology, covering both manual and automated annotation methods. For example, Traceable Author Statement (TAS) means a curator has read a published scientific paper and the metadata for that annotation bears a citation to that paper; Inferred from Sequence Similarity (ISS) means a human curator has reviewed the output from a sequence similarity search and verified that it is biologically meaningful. Annotations from automated processes (for example, remapping annotations created using another annotation vocabulary) are given the code Inferred from Electronic Annotation (IEA). In 2010, over 98% of all GO annotations were inferred computationally, not by curators, but as of July 2, 2019, only about 30% of all GO annotations were inferred computationally. As these annotations are not checked by a human, the GO Consortium considers them to be marginally less reliable, and they are commonly made to higher-level, less detailed terms. 
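A minimal sketch of how the is_a relationships in such stanzas can be traversed as a directed acyclic graph; only the GO:0000016 -> GO:0004553 edge comes from the example stanza above, and the remaining edges are illustrative placeholders:

# Walk is_a parents in a tiny hand-built fragment of the GO graph.
IS_A = {
    "GO:0000016": ["GO:0004553"],  # from the example stanza above
    "GO:0004553": ["GO:0016798"],  # illustrative placeholder edge
    "GO:0016798": ["GO:0003824"],  # illustrative placeholder edge
    "GO:0003824": [],              # treated as a root here
}

def ancestors(term: str) -> set[str]:
    """All terms reachable from `term` by following is_a edges."""
    seen, stack = set(), [term]
    while stack:
        for parent in IS_A.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(ancestors("GO:0000016")))
# ['GO:0003824', 'GO:0004553', 'GO:0016798']

Enrichment tools rely on exactly this kind of closure: a gene product annotated to a term is implicitly annotated to all of that term's ancestors.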
Full annotation data sets can be downloaded from the GO website. To support the development of annotation, the GO Consortium provides workshops and mentors new groups of curators and developers. Many machine learning algorithms have been designed and implemented to predict Gene Ontology annotations. Example annotation Gene product: Actin, alpha cardiac muscle 1, UniProtKB:P68032 GO term: heart contraction; GO:0060047 (biological process) Evidence code: Inferred from Mutant Phenotype (IMP) Reference: Assigned by: UniProtKB, June 6, 2008 Data source: Tools There are a large number of tools available, both online and for download, that use the data provided by the GO project. The vast majority of these come from third parties; the GO Consortium develops and supports two tools, AmiGO and OBO-Edit. AmiGO is a web-based application that allows users to query, browse, and visualize ontologies and gene product annotation data. It also has a BLAST tool, tools allowing analysis of larger data sets, and an interface to query the GO database directly. AmiGO can be used online at the GO website to access the data provided by the GO Consortium, or downloaded and installed for local use on any database employing the GO database schema. It is free open source software and is available as part of the go-dev software distribution. OBO-Edit is an open source, platform-independent ontology editor developed and maintained by the Gene Ontology Consortium. It is implemented in Java and uses a graph-oriented approach to display and edit ontologies. OBO-Edit includes a comprehensive search and filter interface, with the option to render subsets of terms to make them visually distinct; the user interface can also be customized according to user preferences. OBO-Edit also has a reasoner that can infer links that have not been explicitly stated, based on existing relationships and their properties. Although it was developed for biomedical ontologies, OBO-Edit can be used to view, search, and edit any ontology. It is freely available to download. Consortium The Gene Ontology Consortium is the set of biological databases and research groups actively involved in the gene ontology project. This includes a number of model organism databases and multi-species protein databases, software development groups, and a dedicated editorial office. See also Blast2GO Comparative Toxicogenomics Database DAVID bioinformatics Interferome National Center for Biomedical Ontology Critical Assessment of Function Annotation References External links AmiGO - the current official web-based set of tools for searching and browsing the Gene Ontology database Gene Ontology Consortium - official site PlantRegMap - GO annotation for 165 plant species and GO enrichment Analysis Biological databases Ontology (information science)
Gene Ontology
Biology
2,133
21,189,621
https://en.wikipedia.org/wiki/SMD%20LED
The light from white LED lamps and LED strip lights is usually provided by industry-standard surface-mounted device LEDs (SMD LEDs). Non-SMD types of LED lighting also exist, such as COB (chip on board) and MCOB (multi-COB). Surface-mounted device LED modules are described by the dimensions of the LED package. A single multicolor module may have three individual LEDs within that package, one each of red, green and blue, to allow many colors or shades of white to be selected by varying the brightness of the individual LEDs. LED brightness may be increased by using a higher driving current, at the cost of reducing the device's lifespan. References Electronic design Electronics manufacturing LED lamps
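A small sketch of the color-mixing idea described above, mapping an 8-bit RGB color to per-channel PWM duty cycles for a hypothetical three-in-one SMD module (real modules need per-channel calibration to hit a given white point):

# Map an 8-bit RGB color to duty cycles for the red, green and blue
# dies of a multicolor SMD LED module. Purely illustrative.
def rgb_to_duty(r: int, g: int, b: int) -> tuple[float, float, float]:
    """Return PWM duty cycles (0.0-1.0) for the three LED channels."""
    return tuple(channel / 255 for channel in (r, g, b))

warm_white = rgb_to_duty(255, 180, 110)   # an illustrative warm-white mix
print([f"{d:.2f}" for d in warm_white])   # ['1.00', '0.71', '0.43']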
SMD LED
Engineering
153
40,753,902
https://en.wikipedia.org/wiki/Business%20process%20validation
Business Process Validation (BPV) is the act of verifying that a set of end-to-end business processes function as intended. If there are problems in one or more business applications that support a business process, or in the integration or configuration of those systems, then the consequences of disruption to the business can be serious. A company might be unable to take orders or ship product, which can directly impact company revenue, reputation, and customer satisfaction. It can also drive additional expenses, as defects in production are much more expensive to fix than if identified earlier. For this reason, a key aim of Business Process Validation is to identify defects early, before new enterprise software is deployed in production, so that there is no business impact and the cost of repairing defects is kept to a minimum. During Business Process Validation (BPV), the business process is checked step-by-step using representative data to confirm that all business rules are working correctly and that all underlying transactions are performing properly across every enterprise application used in the business process. When defects are identified, these problems are logged for repair by IT personnel, business analysts or the software vendor, as appropriate. Business Process Validation can be performed on various timescales, including the following: Project basis, when new enterprise software systems (such as mobile, cloud, or web applications) are being deployed for the first time Periodic basis, when there are regular monthly, quarterly, or annual updates to enterprise software Continuous basis, when companies want to validate the readiness of their processes and enterprise systems 24/7/365 Business Process Validation Methods Manual Manual Business Process Validation is where one or more people (typically a cross-functional team) work at keyboards or mobile devices to execute the various business process steps directly in the enterprise software by hand. Defects are manually noted and typically logged in a defect tracking system. There are several shortcomings to the manual approach. First, since all data is entered by hand, it can be time-consuming for subject matter experts and business analysts. These are expensive staff resources that could be deployed on other, higher-value activities. Second, manual testing extends project timelines. This slows the deployment of innovation and makes business users wait longer for cost-saving and revenue-generating new technology. Third, the manual process is often incomplete, since its time-intensive nature means that IT teams cannot test all business processes, given their resource constraints. This lack of coverage introduces technology risk into a company's business processes. Finally, if business process validation is done manually by IT teams, then business requirements and processes have to be unambiguously documented in advance, which is a time-consuming task. Automated Automated Business Process Validation relies on software to execute the various business process steps directly in the enterprise software systems in an automated fashion. BPV software automatically uses standard business process data during the validation, and interprets the correctness of each transaction and result. Defects are automatically noted and logged. Automated business process validation is a way to ensure that a company's business processes continue to work, even when mission-critical enterprise systems change. 
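A minimal sketch of an automated validation harness in the spirit described above; the process, its steps, and the pass/fail checks are hypothetical and not taken from any named BPV product:

# Run each step of a business process against a system under test and
# log any defects for follow-up. The step implementations are stand-ins.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], bool]  # True if the step behaved as intended

def validate(process: list[Step]) -> list[str]:
    defects = []
    for step in process:
        try:
            if not step.run():
                defects.append(f"DEFECT: {step.name} returned an unexpected result")
        except Exception as exc:  # a crash is also a defect
            defects.append(f"DEFECT: {step.name} raised {exc!r}")
    return defects

order_to_cash = [
    Step("create order", lambda: True),
    Step("ship product", lambda: True),
    Step("invoice customer", lambda: False),  # simulated failure
]
for defect in validate(order_to_cash):
    print(defect)  # in practice this would feed a defect-tracking system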
See also Process validation Business process preservation References Software testing Formal methods Software quality Enterprise modelling Business process management
Business process validation
Engineering
638
7,147,618
https://en.wikipedia.org/wiki/CALS%20Table%20Model
The CALS Table Model is a standard for representing tables in SGML/XML. It was developed as part of the Continuous Acquisition and Life-cycle Support (CALS) initiative by the United States Department of Defense. History and rationale The CALS Table Model was developed by the Continuous Acquisition and Life-cycle Support (CALS) Industry Steering Group Electronic Publishing Committee (EPC). The EPC subcommittee, of which Harvey Bingham was co-chair and a major contributor, designed the CALS Table Model in 1989–1990. The EPC was made up of industry and military service representatives. Some represented traditional military document printing agencies. Others represented electronic publishing organizations. SGML itself was new. At that time, the CALS intent for all their technical manuals was to use that document type definition (DTD) to achieve system-neutral interchange of content and structure. Its basis was a minimal description and example of a table from the prior Mil-M-38784B specification for producing technical manuals. The incomplete specification of the semantics associated with the table model allowed too much freedom for vendor interpretation, and resulted in problems with interchange. SGML-Open, the former name of the Organization for the Advancement of Structured Information Standards (OASIS), surveyed the implementing vendors to identify differences as the initial step toward reaching a common interpretation. The next step was an updated CALS Table Model DTD and semantics. Both are now available from OASIS. As implementations of the CALS Table Model were developed, a number of ambiguities and omissions were detected and reported to the EPC. The differences in interpretation had led to serious interoperability problems. To resolve these differences, OASIS identified a subset of the full CALS table model that had a high probability of successful interoperability among the OASIS vendor products. This subset is the Exchange Table Model DTD. 
Example <table frame="none"> <tgroup cols="2" colsep="0"> <colspec colnum="1" colname="col1" colwidth="32mm"/> <colspec colnum="2" colname="col2" colwidth="132mm"/> <thead> <row> <entry valign="top"/> <entry valign="top">(IUPAC) name</entry> </row> </thead> <tbody> <row rowsep="0"> <entry>pyro-EGTA</entry> <entry>2,2',2'',2'''-(2,2'-(1,2-phenylene bis(oxy))bis(ethane-2,1-diyl)) bis(azanetriyl)tetraacetic acid</entry> </row> <row rowsep="0"> <entry>EGTA</entry> <entry>ethylene glycol-bis(2-aminoethylether)-N,N,N',N'-tetraacetic acid</entry> </row> <row rowsep="0"> <entry>EDTA</entry> <entry>2,2',2'',2'''-(ethane-1,2-diyldinitrilo)tetraacetic acid (ethylenediamine tetraacetic acid)</entry> </row> <row rowsep="0"> <entry>AATA</entry> <entry>2,2'-(2-(2-(2-(bis(carboxymethyl)amino)ethoxy)ethoxy) phenylazanediyl)diacetic acid</entry> </row> <row rowsep="0"> <entry>APTRA</entry> <entry>2-carboxymethoxy-aniline-N,N-diacetic acid</entry> </row> <row rowsep="0"> <entry>BAPTA</entry> <entry>1,2-bis(-2-aminophenoxy)ethane- N,N,N',N'-tetraacetic acid</entry> </row> <row rowsep="0"> <entry>HIDA</entry> <entry>N-(2-hydroxyethyl)iminodiacetic acid</entry> </row> <row rowsep="0"> <entry>Carboxyglutamate</entry> <entry>3-Aminopropane-1,1,3-tricarboxylic acid</entry> </row> </tbody> </tgroup> </table> See also OASIS, a global consortium that develops data representation standards for use in computer software Footnotes External links OASIS table models CALS Table Model History by Harvey Bingham OASIS official website Computer-related introductions in 1990 Technical communication United States Department of Defense standards XML-based standards XML markup languages
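As an informal illustration (not part of the OASIS specification), a CALS table instance like the example above can be read with Python's standard library; the snippet below uses a trimmed two-row version of that table:

# Extract rows and entries from a (trimmed) CALS table instance.
import xml.etree.ElementTree as ET

CALS = """<table frame="none">
  <tgroup cols="2">
    <tbody>
      <row><entry>EGTA</entry><entry>ethylene glycol-bis(2-aminoethylether)-N,N,N',N'-tetraacetic acid</entry></row>
      <row><entry>HIDA</entry><entry>N-(2-hydroxyethyl)iminodiacetic acid</entry></row>
    </tbody>
  </tgroup>
</table>"""

root = ET.fromstring(CALS)
for row in root.iter("row"):
    print(" | ".join(entry.text or "" for entry in row.iter("entry")))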
CALS Table Model
Technology
1,098
19,714,462
https://en.wikipedia.org/wiki/IET%20Mountbatten%20Medal
The IET Mountbatten Medal is awarded annually for an outstanding contribution, or contributions over a period, to the promotion of electronics or information technology and their application. The Medal was established by the National Electronics Council in 1992 and named after Louis Mountbatten, The Earl Mountbatten of Burma, Admiral of the Fleet and Governor-General of India. Since 2011, the medal has been awarded as one of the IET Achievement Medals. Eligibility One of the IET's Prestige Achievement Medals, the Medal is awarded to an individual for an outstanding contribution, or contributions over a period, to the promotion of electronics or information technology and to the dissemination of the understanding of electronics and information technology to young people or adults. Criteria In selecting a winner, the Panel give particular emphasis to: the stimulation of public awareness of the significance and value of electronics; spreading recognition of the economic significance of electronics and IT, and encouraging their effective use throughout industry in general; encouraging excellence in product innovation and the successful transition of scientific advances to wealth-creating products; recognising brilliance in academic and industrial research; encouraging young people of both sexes to make their careers in the electronics and IT industries; increasing the awareness of the importance of electronics and IT amongst teachers and others in the educational disciplines. Recipients See also List of computer science awards List of computer-related awards List of engineering awards List of awards named after people References External links 1992 establishments in the United Kingdom Awards established in 1992 British science and technology awards Computer science awards Engineering awards Institution of Engineering and Technology Lord Mountbatten
IET Mountbatten Medal
Technology,Engineering
315
50,204,706
https://en.wikipedia.org/wiki/Debate%20chamber
A debate chamber is a room for conducting the business of a deliberative assembly or otherwise for debating. When used as the meeting place of a legislature, a debate chamber may also be known as a council chamber, legislative chamber, assembly chamber, or similar term depending on the relevant body. Some countries, such as New Zealand, use the term debating chamber as a name for the room where the legislature meets. Debating Debating can happen more or less anywhere that is not immediately hazardous. Whether informal or structured, debates often have an audience. The debate does not involve the audience as such; they may even be watching remotely. Therefore, a debate can occur basically anywhere, even in the street, in a hallway, on board a moving vehicle, or in any number of other unusual locations. However, in common parlance, a debating chamber is a room set aside, usually permanently, for the purpose of holding debates. It usually contains furniture set up to organize the debate, so as to clearly separate the participants from the audience, and usually to clearly separate the sides of the debate. If the format of the debate includes a moderator (such as the speaker of a legislature), the moderator sits in a clear position of authority. In general, a debate chamber has seats and tables for the moderator and the debate participants, and a separate seating area for the audience. Other facilities may include one or more podiums for delivering speeches, possibly located on a stage to facilitate presentation of the debate to an audience. Recording and broadcasting equipment may be installed in a debating chamber so that proceedings there can be shown to the public at large. In the case of a legislative chamber or the like, there may be separate galleries for the public, while members of the legislature (and appropriate staff) are the only ones permitted in the chamber proper. Psychology and geometry The configuration of seating affects interpersonal communication on conscious and subconscious levels. For example, disagreements over the shape of a negotiation table delayed the Vietnam War peace talks for almost a year. The geometry of seating position can support or determine a sense of opposition/confrontation, hierarchy/dominance, or collaboration/equality. Factors such as angle/rotation, proximity/distance, median/termini, and height/incline are all relevant considerations. The more directly two parties are positioned across from one another, the more likely their relationship will be one of opposition to each other; the less direct, or more "side-by-side", these positions are, the less likely such an opposing relationship becomes, but also the less effective it will be at fostering collaboration. These effects can be observed in debate chambers, meeting rooms, and at dining or restaurant tables. For instance, with a long rectangular table, those seated at the "head" or "end" of the table are in a position of dominance; they can see everybody, and normally everybody can see them, but the others are restricted to seeing only those across from them. Circular, square, or elliptical tables facilitate more equal status between those seated, as well as less obstructed lines of sight. A circular gathering with three participants provides the only non-oppositional configuration of more than two persons that allows equal lines of sight (all 120 degrees apart). The smaller the group and setting, the greater the equity of participants and sight lines.
Conversely, the more participants that are present, the greater is the disparity of sight lines between those sitting immediately adjacent and those more directly across, whose position in turn becomes more oppositional. Winston Churchill recognized this when he insisted the British House of Commons be rebuilt (after wartime bombing) in a similar size and configuration to the prior chamber, to maintain the intimate and adversarial style of debate which he believed was responsible for creating the British form of government. History Whether outdoors or in an enclosed space or chamber, such as a cave, it is likely that the earliest designated places for group discourse or debate occurred around a fire, for light, heat, or protection from predators. Throughout recorded history there have been a variety of places and spaces designated for similar purposes. An early gathering for assembly purposes was the Ecclesia of ancient Athens, a popular assembly open to all male citizens with two years of military service. This was held in an ekklesiasterion, which varied from small amphitheaters to a variety of buildings, including ones that could accommodate over 5,000 people. These assemblies were also held in amphitheater-like, open-air theaters. A bouleuterion, also translated as council house, assembly house, or senate house, was a building in ancient Greece which housed the council of citizens of a democratic city state. In Ancient Rome, the earliest recorded debating chamber was for the deliberative body of the Roman Senate. The first official debating model that emerged (centuries later) after the fall of the Roman Empire was the Magnum Concilium, or Great Council, after the Norman Conquest of England in 1066. These were convened at certain times of the year when church leaders and wealthy landowners were invited to discuss the affairs of the country with the king (of England, Normandy, and France). In the 13th century this developed into the Parliament of England (concilium regis in parliamento). Similar models emerged at roughly the same time with the Parliament of Scotland and Parliament of Ireland. These were later consolidated into the Parliament of Britain and the current Parliament of the United Kingdom (or British Parliament). The system of government that emerged in this model is known as the Westminster system. In Europe, similar models to parliament emerged, termed Diet and Thing (or Ting), "thing" deriving from Old Norse for "appointed time" or "assembly". The parliament that claims to have the longest continuous existence is the Tynwald of the Isle of Man. In 19th century Russia, the Duma emerged to perform similar advisory functions to the monarch. In the 14th century, the king of France established the Estates General, a legislative and consultative assembly of the different classes (or estates) of French subjects. During the French Revolution in the late 18th century, this was transformed into the National Assembly (1789), the National Constituent Assembly (1789–1791), the Legislative Assembly (1791–1792), the National Convention (1792–1795), the Council of Five Hundred (1795–1799), and eventually the tricameral (three-house) French Consulate during the reign of Napoleon Bonaparte. These bodies met in a variety of palaces, a riding academy, a large theater, and a tennis court. In the late 18th century the United States of America established the U.S. Congress, a bicameral legislative model that would form the template of many newly emergent republics around the world.
The form adopted involved two legislative bodies, each with its own chamber. The lower house, the U.S. House of Representatives, was intended to provide representation based on population. The upper house, the U.S. Senate, was intended to provide more deliberative oversight on legislation and was to represent the States (equally). Each body was created, and its chamber designed, before political parties were well established. Names The name given to a debating place or space may refer to an activity, such as assembly or debating; it may refer to the persons performing that activity, such as noblemen (Oireachtas in Ireland), lords, or estates; or it may refer to both, such as Senate (derived from the Latin for elder, and assembly). Some examples of the more common names for debating spaces: Assembly: also Dáil in Irish, as in National Assembly. Chamber or House: as in House of Representatives or Chamber of Deputies. Council: as in Magnum Concilium, or Federal Council. Diet: derived from Medieval Latin dieta, meaning assembly. Used in reference to many historical European assemblies, such as the Imperial Diet of the Holy Roman Empire, the Diet of Worms, or the Hungarian Diet. The term is also used in reference to the modern-day Japanese parliament. Cognate terms include the German Tag (Bundestag, Landtag) and Dag in various Scandinavian languages (Riksdag, Rigsdagen). Duma: Russian, meaning "consider". Parliament: derived from Anglo-Norman parler, meaning speak. Rada: derived from Old East Slavic Рада, meaning council (e.g. Verkhovna Rada in Ukraine, meaning Supreme Council). Sejm: Proto-Lechitic, meaning "gathering" or "meeting". Senate: used in many countries since the time of Ancient Rome, where the Senate (Latin senatus) was an assembly of elders (the term derives from senex, meaning "old man"). Thing: derived from Proto-Germanic *þingą meaning "appointed time", later "meeting" or "assembly". A thing was historically the governing assembly of a Germanic society, made up of the free people of the community presided over by lawspeakers. Modern-day cognates include Icelandic þing, German or Dutch ding, and ting in modern Scandinavian languages. In English, the word "thing" has not kept its original meaning of "assembly", although it retains that sense in derived terms such as "husting", and the name of the Manx parliament, the Tynwald. Seating configuration There are several common configurations of seating used in debate chambers: auditorium, rectangular, fan-shaped, circular, and hybrids. The shapes of the rooms vary and do not necessarily reflect or match the seat configurations. The architectural design of the chamber can shape the style of debating: a semicircular design may promote discussion for the purpose of reaching a consensus, while an arrangement with two opposing sides may promote adversarial debating. Auditorium The auditorium form of seating (and chamber) places a large audience facing a stage, often with a proscenium. The model is similar to direct instruction, whereby the communication is unidirectional without active interaction or debate. Response is limited to applause or to speakers coming onto the stage, from the audience or backstage, to provide a subsequent presentation to the audience. Given the scale and format, there is little opportunity for any direct discourse.
Examples and images: USSR Supreme Soviet Council and court The council and courtroom configuration of seating is one that fosters interaction between the "panel" (court, council, board, or other officials) and the public. The panel members may debate or engage in discourse amongst themselves, particularly in a council of elected officials, but that is not normally the main portion of discourse. The more linear the seating arrangement is, the less supportive it is of discourse. City council chambers are less likely to use a linear configuration, whereas judges in a court of law (where there is more than one judge in a sitting) frequently sit in a straight or nearly straight line. Examples and images: Rectangular The rectangular (bifurcated) seating configuration comprises two opposing rows of seats or benches facing towards a central aisle which bisects the room. At one end is commonly found a chair, throne, or podium for a Speaker, a monarch or president, or a chairperson, respectively. This format is used in the Westminster style of parliamentary debating chambers, such as in the Parliaments of the UK, Canada, Australia, New Zealand, and other former British colonies. In this configuration, on one side of the aisle sits the government and on the other the opposition. This supports oppositional or divided groupings, from which emerged in the 19th century the two-party political system in the UK and its dominions and colonies. Each person speaking is nominally directing his or her comments towards the Speaker, but they do so facing the opposing members, with their own group facing the same way they are. Without having one's own side turn around, it is not possible to face all members of the chamber simultaneously. In the British Parliament, the traditional method of recorded voting, called a "division of the assembly", is by members placing themselves in separate rooms called division lobbies, one each for the "Ayes" and "Noes". (This is derived from the Roman Senate, which voted by division, a senator seating himself on one side of the chamber or the other to indicate a vote.) Common folklore speaks of the aisle between the government and the opposition sides as being "two sword lengths", or "two sword lengths plus an inch", apart, although there is no record of this being a criterion. Examples and images: House of Commons of Canada, House of Commons of the United Kingdom, Cortes of Castilla–La Mancha Hybrid A hybrid of the bifurcated and semi-circular seating configurations combines a central aisle with a curved end at one end facing the focal point (e.g. Speaker's chair) at the other. Another hybrid form is one that is rectangular, but not bifurcated; the overall arrangement is rectangular, as is each of the three seat groupings. For example, in both the Czech Republic's lower house, the Chamber of Deputies, and the Palace of Assembly at Chandigarh, India, the seating arrangement is a series of straight rows all facing inward in three groupings, two on either side of a central aisle and one at the end facing the podium.
Examples and images: India's Lok Sabha, Australia's House of Representatives, National Assembly of South Africa, Legislative Assembly of Manitoba, New Zealand's House of Representatives Fan-shaped The hemicycle or semi-circular seating configuration originated in late 18th century France, when the post-revolutionary leaders selected the amphitheater form as one that would symbolize and foster unity, in contrast to the "impression of parliamentary fragmentation" of the British configuration. This configuration was soon emulated in other parts of Europe and in the United States Congress, the Capitol Building being designed by the British-American architect Benjamin Latrobe. This adoption of the ancient Greek theater form coincided with the Greek revival movements in architecture, including literal use of the symbology of the ancient democracy. Its form allows a single person, or a small group, to speak or present to all members of the chamber on a face-to-face basis from a podium (or similar element) at the focal point of the room. The primary hierarchy of position is largely distance from the podium, rather than a position of support or opposition. This arrangement gives pride of place to the podium, is not inherently partisan, and, if each member of the group is given the chance to address the group, everyone has a (theoretically) equal position. Examples and images: France's National Assembly, U.S. House of Representatives, UN General Assembly, Parliament of Finland, Brazilian Chamber of Deputies, Scottish Parliament, German Bundestag, Riksdag of Sweden Circular Circular seating configurations for places of discourse have been envisioned at least as early as the 12th-century story of the Knights of the Round Table. As with many later versions, this was intended to be a collaborative forum. In the late 1940s, facilities for the United Nations Security Council, a body formed during and immediately after World War II, were designed to support collaboration and avoid confrontation. Since the early 1990s, several debating chambers have been constructed that support, or were designed to support, consensus-style or collaboration-style discourse and government. These include legislative assembly facilities for indigenous and non-indigenous peoples in Northern Canada, Great Britain, and Polynesia. Most are for bodies that do not involve formal political parties. Examples and images: United Nations Security Council, Senedd of Wales, Wilp Si A'yuukhl Nisga'a, Legislative Assembly of Nunavut, Legislative Assembly of the Northwest Territories, meeting halls of the Society of Friends, National Parliament of the Solomon Islands. Virtual The introduction of regular live television broadcasts of legislative chambers, which began with the Canadian House of Commons in 1977, has influenced debate and extended the audience well beyond the physical location of the debate chamber. More recently this has developed into direct two-way communication in small and large meeting rooms (virtual events), and even, through personal hand-held devices, into nearly every corner of the world. This has changed the physical debating environment into a digital and virtual one: in a non-literal sense, an ever-changing and highly varied collection of spaces determined by where each debate participant happens to be located.
This may also have the added effect of drawing others into the debate, whether as passive observers or active participants, unwittingly, uninvited, or by active invitation of a single participant. For meetings or debates that remain grounded in a structured location, such as a conference room or legislative chamber, and that connect to one or several remote participants via video-conferencing, the focus of the room may shift onto the video screen and away from those in the room. Notes and references Manow, Philip: In the King's Shadow. Polity, 2010. External links The Shape of Debate to Come Parliaments around the world: what can architecture teach us about democracy? Deliberative groups Debating Legislatures Legislative buildings Rooms
Debate chamber
Engineering
3,486
16,122,488
https://en.wikipedia.org/wiki/Wave%20Mate%20Bullet
The Wave Mate Bullet was a Z80 single-board computer from 1982 which used the CP/M operating system. It was sold in Australia, the United States and Europe and was apparently popular in academic settings. Notability The Wave Mate Bullet is notable because it represents CP/M machines at their apex: small yet affordable machines that were quite powerful for the time, with plentiful applications. Wave Mate, Inc. is a historically relevant company because it was one of the original microcomputer companies, having released its first computer kit, the Wave Mate Jupiter II, in 1975. The Wave Mate Bullet represents the end of the CP/M era, as the IBM PC and its clones ascended to marketplace domination. Configurations The Wave Mate Bullet runs CP/M 3.0; CP/M 2.2 is also available. It is available in many configurations but is typically found in a small chassis with two 96-tracks-per-inch 5.25" floppy disk drives. The 5.25" disks were formatted on both sides with five 1024-byte sectors per track and 80 tracks per side, for a total of 800K per disk (2 sides × 80 tracks × 5 sectors × 1,024 bytes = 819,200 bytes). The standard configuration includes two serial ports, a parallel port, a general purpose external DMA bus (GPED), separate connectors for 5.25" and 8" floppy disk drives, and a hard disk interface. The hard disk interface is either an IMI hard disk controller model #7710 or SCSI, depending on the motherboard version. References Notes Wave Mate Bullet manual External links Google Group for people interested in the Wave Mate Bullet Home computers Z80-based home computers Computer-related introductions in 1982
Wave Mate Bullet
Technology
325
11,291,779
https://en.wikipedia.org/wiki/Arnold%20Tustin
Arnold Tustin (16 July 1899 – 9 January 1994) was a British engineer and Professor of Engineering at the University of Birmingham and at Imperial College London who made important contributions to the development of control engineering and its application to electrical machines. Biography Tustin started working in 1914 at the age of 16 as an apprentice to C. A. Parsons and Company, of Newcastle upon Tyne. He entered Armstrong College, later part of Newcastle University, in 1916, served in the Royal Engineers in World War I, and eventually received his master's degree in science in 1922. In 1922 he joined Metropolitan-Vickers (Metro-Vick) as a graduate trainee. In the early 1930s he worked for Metro-Vick in Russia for two years, advising and selling equipment to the government companies. Here, he wrote his first book on the design of electric motors, which was also translated into Russian. In the late 1930s and during World War II Tustin was working on the Metadyne constant-current DC generator for gun control. He also developed new methods for gyroscopic stabilisation and further applied servo-mechanisms to tanks and naval guns. After the war, in 1947, he was appointed Professor of Engineering and head of the Department of Electrical Engineering at the University of Birmingham, a post in which he remained until 1955. In 1953–54 he was Visiting Professor at the Massachusetts Institute of Technology, and from 1955 to 1964 he was Professor of Engineering at Imperial College London. Tustin was married to Frances Tustin, a pioneering psychotherapist and authority on autism. Tustin's primary concern was the field of electrical machines, but his interests extended into the fields of systems thinking, control systems, and even economics and biology. Publications Tustin was the author of several books and many published papers on electrical machines; a selection: 1952. Automatic and manual control: Papers contributed to the Conferences at Cranfield, 1951, Volume 1951, Deel 1 Academic Press 1952. Direct current machines for control systems 1953. The Mechanism of Economic Systems, Cambridge, MA.: Harvard Univ. Press (2nd ed. 1957) 1956. The Next Ten Years of Electrical Engineering 1957. Automatic Control. With Ernest Nagel About Tustin 1992, "Pioneers of Control: an interview with Arnold Tustin", Chris Bissell in: IEE Review, June 1992, pp. 223–226 1994, "Arnold Tustin 1899-1994", Chris Bissell in: Int. J. Control, Vol 60, No 5, Nov 1994, pp. 649–652 References External links Institution of Engineering and Technology website 1899 births 1994 deaths Engineers from Tyne and Wear British electrical engineers Systems engineers Academics of the University of Birmingham Metropolitan-Vickers people
Arnold Tustin
Engineering
573
14,817,183
https://en.wikipedia.org/wiki/ERF%20%28gene%29
ETS domain-containing transcription factor ERF is a protein that in humans is encoded by the ERF gene. References Further reading External links Transcription factors
ERF (gene)
Chemistry,Biology
32
4,050,188
https://en.wikipedia.org/wiki/Red%20yeast%20rice
Red yeast rice or red rice koji is a bright reddish purple fermented rice, which acquires its color from being cultivated with the mold Monascus purpureus. Red yeast rice is what is referred to as a kōji in Japanese, meaning "grain or bean overgrown with a mold culture", a food preparation tradition going back to ca. 300 BC. In addition to its culinary use, red yeast rice is also used in Chinese herbology and Traditional Chinese medicine, possibly during the Tang dynasty around AD 800. Red yeast rice is described in the Chinese pharmacopoeia Ben Cao Gang Mu by Li Shizhen. A modern-era use as a dietary supplement developed in the late 1970s after researchers were isolating lovastatin from Aspergillus and monacolins from Monascus, the latter being the same fungus used to make red yeast rice. Chemical analysis soon showed that lovastatin and monacolin K were identical. Lovastatin became the patented prescription drug Mevacor. Red yeast rice went on to become a non-prescription dietary supplement in the United States and other countries. In 1998, the U.S. Food and Drug Administration (FDA) initiated action to ban a dietary supplement containing red yeast rice extract, stating that red yeast rice products containing monacolin K are identical to a prescription drug, and thus subject to regulation as a drug. Terminology Red yeast rice is also known as red fermented rice, red kojic rice or red koji rice from its Japanese name, and anka or angkak from Southern Min pronunciations of its Chinese name. In both the scientific and popular literature in English that draws principally on Japanese traditional use, red yeast rice is most often referred to as "red rice koji". English language articles favoring Chinese literature sources prefer the translation "red yeast rice". Production Red yeast rice is produced by cultivating the mold species Monascus purpureus on rice for 3–6 days at room temperature. The rice grains turn bright red at the core and reddish purple on the outside. The fully cultured rice is then either sold as the dried grain, or cooked and pasteurized to be sold as a wet paste, or dried and pulverized to be sold as a fine powder. China is the world's largest producer of red yeast rice, but European companies have entered the market. Uses Culinary Red yeast rice is used to color a wide variety of food products, including fermented tofu, red rice vinegar, char siu, Peking duck, and Chinese pastries that require red food coloring. In China, documentation dates back to at least the first century AD. It is also traditionally used in the production of several types of Chinese huangjiu (Shaoxing jiu), and Japanese sake (akaisake), imparting a reddish color to these wines. It was called a "koji" in Japanese, meaning "grain or bean overgrown with a mold culture". The lees left over from wine production, known as hóngzāo (), can be used as flavoring, imparting a subtle but pleasant taste to food. The lees are particularly commonly used in Fujian cuisine, where they are used for dishes like Fujian red wine chicken, a celebratory dish associated with birthdays and Chinese New Year. Red yeast rice (angkak in Filipino) is also used widely in the Philippines to traditionally color and preserve certain dishes like fermented shrimp (bagoong alamang), burong isda (fermented rice and fish), and balao-balao (fermented rice and shrimp). Traditional Chinese medicine In addition to its culinary use, red yeast rice is also used in Chinese herbology and traditional Chinese medicine. 
Medicinal use of red yeast rice is described in the Chinese pharmacopoeia Ben Cao Gang Mu compiled by Li Shizhen ca. 1590. Recommendations were to take it internally to invigorate the body, aid in digestion, and revitalize the blood. One reference provided the Li Shizhen health claims as a quotation "...the effect of promoting the circulation of blood and releasing stasis, invigorating the spleen, and eliminating [in]digestion." Red yeast rice and statin drugs In the late 1970s, researchers in the United States and Japan were isolating lovastatin from Aspergillus and monacolins from Monascus, the latter being the same fungus used to make red yeast rice (RYR) when cultured under carefully controlled conditions. Chemical analysis soon showed that lovastatin and monacolin K are identical chemical compounds. The two isolations, documentations, and patent applications occurred months apart. Lovastatin became the patented, prescription drug Mevacor. Red yeast rice went on to become a non-prescription dietary supplement in the United States and other countries. Lovastatin and other prescription statin drugs inhibit cholesterol synthesis by blocking action of the enzyme HMG-CoA reductase. As a consequence, circulating total cholesterol and LDL-cholesterol are lowered by 24–49% depending on the statin and dose. Different strains of Monascus fungus will produce different amounts of monacolins. The 'Went' strain of Monascus purpureus (purpureus=dark red in Latin), when properly fermented and processed, will yield a dried red yeast rice powder that is approximately 0.4% monacolins, of which roughly half will be monacolin K (chemically identical to lovastatin). U.S. regulatory restrictions The US Food and Drug Administration (FDA) position is that red yeast rice products that contain monacolin K are identical to a prescription drug and, thus, subject to regulation as a drug. In 1998, the FDA initiated action to ban a product (Cholestin) containing red yeast rice extract. The U.S. District Court in Utah ruled in favor of allowing the product to be sold without restriction. This decision was reversed on appeal to the U.S. Court of Appeals in 2001. In 2007, the FDA sent warning letters to two dietary supplement companies. One was making a monacolin content claim about its RYR product and the other was not, but the FDA noted that both products contained monacolins. Both products were withdrawn. In a press release the FDA "...is warning consumers to not buy or eat red yeast rice products... may contain an unauthorized drug that could be harmful to health." The rationale for "harmful to health" was that consumers might not understand that the dangers of monacolin-containing red yeast rice are the same as those of prescription statin drugs. A products analysis report from 2010 tested 12 products commercially available in the U.S. and reported that per 600 mg capsule, total monacolins content ranged from 0.31 to 11.15 mg. A 2017 study tested 28 brands of red yeast rice supplements purchased from U.S. retailers, stating "the quantity of monacolin K varied from none to prescription strength". Many of these avoid FDA regulation by not having any appreciable monacolin content. Their labels and websites say no more than "fermented according to traditional Asian methods" or "similar to that used in culinary applications". The labeling on these products often says nothing about cholesterol lowering. 
If products do not contain lovastatin, do not claim to contain lovastatin, and do not make a claim to lower cholesterol, they are not subject to FDA action. Two reviews confirm that the monacolin content of red yeast rice dietary supplements can vary over a wide range, with some containing negligible monacolins. Clinical evidence The amount typically used in clinical trials is 1200–2400 mg/day of red yeast rice containing approximately 10 mg total monacolins, of which half are monacolin K (consistent with the roughly 0.4% monacolin content noted above: 2,400 mg × 0.004 ≈ 10 mg). A meta-analysis reported LDL-cholesterol lowered by 1.02 mmol/L (39.4 mg/dL) compared to placebo. The incidence of reported adverse effects ranged from 0% to 5% and was not different from controls. A second meta-analysis incorporating more recent clinical trials also reported significant lowering of total cholesterol and LDL-cholesterol. Within the first review, the largest and longest-duration trial was conducted in China. Close to 5,000 post-heart attack patients were enrolled for an average of 4.5 years to receive either a placebo or a RYR product named Xuezhikang (血脂康). The test product was an ethanol extract of red yeast rice, with a monacolin K content of 11.6 mg/day. Key results: in the treated group, risk of subsequent heart attacks was reduced by 45%, cardiovascular deaths by 31%, and all-cause deaths by 33%. These heart attack and cardiovascular death outcomes appear to be better than what has been reported for prescription statin drugs. A 2008 review pointed out that the cardioprotective effects of statins in Japanese populations occur at lower doses than are needed in Western populations, and theorized that the low amount of monacolins found in the Xuezhikang product might have been more effectively athero-protective than expected in the Chinese population for the same reason. Safety The safety of red yeast rice (RYR) products has not been established. Some supplements have been found to contain high levels of citrinin, which can be toxic to the liver, kidneys, and cellular DNA. Commercial products also have highly variable amounts of monacolins and rarely declare this content on the label, making risk assessment difficult. Ingredient suppliers have been suspected of "spiking" red yeast rice preparations with purified lovastatin. One published analysis reported several commercial products as being almost entirely monacolin K—which would occur if the drug lovastatin was illegally added—rather than the expected composition of many monacolin compounds. There are reports in the literature of muscle myopathy and liver damage resulting from red yeast rice usage. From a review: "The potential safety signals of myopathies and liver injury raise the hypothesis that the safety profile of RYR is similar to that of statins. Continuous monitoring of dietary supplements should be promoted to finally characterize their risk profile, thus supporting regulatory bodies for appropriate actions." The European Food Safety Authority (EFSA) Panel on Food Additives and Nutrient Sources added to Food concluded that when red yeast rice preparations contained monacolins, the Panel was unable to identify an intake that it could consider as safe. The reason given was case study reports of severe adverse reactions to products containing monacolins at amounts as low as 3 mg/day. Red yeast rice is not recommended during pregnancy or breast-feeding.
In March 2024, the Japanese Ministry of Health ordered stores to remove three RYR dietary supplements (Benikoji ColesteHelp, NaishiHelp Plus Cholesterol and Natto-kinase Sarasara Tsubu) produced by Kobayashi Pharmaceutical after reports that thousands had been made ill. Over a hundred people between the ages of 40 and 80 were hospitalized, and five had died, with four of them from kidney problems. There have been more than twelve thousand cases of health problems reported by users. The company said it uses a strain that does not produce citrinin. It has found puberulic acid in the recalled products and is looking into whether the substance might be linked to the fatalities. The suspect batch was manufactured in 2023. Some analysts have placed the blame on industry deregulation, intended to boost economic growth by facilitating the approval of health products. Benikoji products such as miso paste, crackers, food coloring, and a vinegar dressing made by other companies were also recalled. Kobayashi Pharmaceutical officially discontinued production of beni koji products on 8 August 2024. See also List of microorganisms used in food and beverage preparation Medicinal fungi References External links Chinese rice dishes Dietary supplements Fermented foods Food colorings Medical controversies Medicinal fungi Traditional Chinese medicine
Red yeast rice
Biology
2,483
324,317
https://en.wikipedia.org/wiki/Theatrical%20scenery
Theatrical scenery is whatever is used as a setting for a theatrical production. Scenery may be just about anything, from a single chair to an elaborately re-created street, no matter how large or how small, whether the item was custom-made or is a genuine item appropriated for theatrical use. History The history of theatrical scenery is as old as the theatre itself, and just as abstruse and tradition-bound. What we tend to think of as 'traditional scenery', i.e. two-dimensional canvas-covered 'flats' painted to resemble a three-dimensional surface or vista, is a relatively recent innovation and a significant departure from the more ancient forms of theatrical expression, which tended to rely less on the actual representation of space and more on the conveyance of action and mood. By the Shakespearean era, the occasional painted backdrop or theatrical prop was in evidence, but the show itself was written so as not to rely on such items to convey itself to the audience. However, this means that today's set designers must be that much more careful, so as to convey the setting without taking away from the actors. Contemporary scenery Our more modern notion of scenery, which dates back to the 19th century, finds its origins in the dramatic spectacle of opera buffa, from which the modern opera is descended. Its elaborate settings were appropriated by the 'straight', or dramatic, theatre, through their use in comic operettas, burlesques, pantomimes and the like. As time progressed, stage settings grew more realistic, reaching their peak in the Belasco realism of the 1910s and 1920s, in which complete diners, with working soda fountains and freshly made food, were recreated onstage. Perhaps as a reaction to such excess, and in parallel with trends in the arts and architecture, scenery began a trend towards abstraction, although realistic settings remained in evidence, and are still used today. At the same time, the musical theatre was evolving its own set of scenic traditions, borrowing heavily from the burlesque and vaudeville style, with occasional nods to the trends of the 'straight' theatre. These strands converged in the 1980s and 1990s, and continuing to today, there is no single established style of scenic production; pretty much anything goes. Modern stagecraft has grown so complex as to require the highly specialized skills of hundreds of artists and craftspeople to mount a single production. Types of scenery The construction of theatrical scenery is frequently one of the most time-consuming tasks when preparing for a show. As a result, many theatres have a place for storing scenery (such as a loft) so that it can be used for multiple shows. Since future shows typically are not known far in advance, theatres will often construct stock scenery that can be easily adapted to fit a variety of shows. Common stock scenery types include: Curtains Flats Platforms Scenery wagons Gallery See also Set (film and TV scenery) Scenic design Set construction Scenography References Scenic design Stagecraft
Theatrical scenery
Engineering
602
27,988,307
https://en.wikipedia.org/wiki/Grain
A grain is a small, hard, dry fruit (caryopsis) – with or without an attached hull layer – harvested for human or animal consumption. A grain crop is a grain-producing plant. The two main types of commercial grain crops are cereals and legumes. After being harvested, dry grains are more durable than other staple foods, such as starchy fruits (plantains, breadfruit, etc.) and tubers (sweet potatoes, cassava, and more). This durability has made grains well suited to industrial agriculture, since they can be mechanically harvested, transported by rail or ship, stored for long periods in silos, and milled for flour or pressed for oil. Thus, the grain market is a major global commodity market that includes crops such as maize, rice, soybeans, wheat and other grains. Grains and cereal "Grain" and "cereal" are synonymous with caryopses, the fruits of the grass family. In agronomy and commerce, seeds or fruits from other plant families are called grains if they resemble caryopses. For example, amaranth is sold as "grain amaranth", and amaranth products may be described as "whole grains". The pre-Hispanic civilizations of the Andes had grain-based food systems, but at higher elevations none of the grains belonged to the cereal family. All three grains native to the Andes (kaniwa, kiwicha, and quinoa) are broad-leafed plants rather than grasses such as corn, rice, and wheat. Classification Cereal grains Warm-season cereals finger millet fonio foxtail millet Japanese millet Job's tears kodo millet maize (corn) millet pearl millet proso millet sorghum Cool-season cereals barley oats rice rye spelt teff triticale wheat wild rice Pseudocereal grains Starchy grains from broadleaf (dicot) plant families: amaranth (Amaranth family) also called kiwicha buckwheat (Smartweed family) chia (Mint family) quinoa (Amaranth family, formerly classified as Goosefoot family) kañiwa Pulses Pulses or grain legumes, members of the pea family, have a higher protein content than most other plant foods, at around 20%, while soybeans have as much as 35%. As is the case with all other whole plant foods, pulses also contain carbohydrates and fat. Common pulses include: chickpeas common beans common peas (garden peas) fava beans lentils lima beans lupins mung beans peanuts pigeon peas runner beans soybeans Oilseeds Oilseed grains are grown primarily for the extraction of their edible oil. Vegetable oils provide dietary energy and some essential fatty acids. They are also used as fuel and lubricants. Mustard family black mustard India mustard rapeseed (including canola) Aster family safflower sunflower seed Other families flax seed (Flax family) hemp seed (Hemp family) poppy seed (Poppy family) Ancient grains Historical importance Because grains are small, hard and dry, they can be stored, measured, and transported more readily than can other kinds of food crops such as fresh fruits, roots and tubers. The development of grain agriculture allowed excess food to be produced and stored easily, which could have led to the creation of the first temporary settlements and the division of society into classes. This assumption that grain agriculture led to early settlements and social stratification has been challenged by James Scott in his book Against the Grain.
He argues that the transition from hunter-gatherer societies to settled agrarian communities was not a voluntary choice driven by the benefits of increased food production due to the long storage potential of grains, but rather that the shift towards settlements was a coerced transformation imposed by dominant members of a society seeking to expand control over labor and resources. Trade Occupational safety and health Those who handle grain at grain facilities may encounter numerous occupational hazards and exposures. Risks include grain entrapment, where workers are submerged in the grain and unable to extricate themselves; explosions caused by fine particles of grain dust; and falls. See also Ancient grains Cereals Domestication Grain drying Legume List of dried foods Mycoestrogen Perennial grain Staple foods Vegetable fats and oils Gluten References External links Edible nuts and seeds Crops Staple foods Food ingredients Types of food
Grain
Technology
902
24,344,062
https://en.wikipedia.org/wiki/C25H35NO4
The molecular formula C25H35NO4 (molar mass: 413.54 g/mol, exact mass: 413.2566 u) may refer to: Dihydroetorphine, an analgesic drug Norbuprenorphine Molecular formulas
C25H35NO4
Physics,Chemistry
72
39,201,489
https://en.wikipedia.org/wiki/The%20Noun%20Project
The Noun Project is a website that aggregates and catalogs symbols that are created and uploaded by graphic designers around the world. Based in Los Angeles, the project functions both as a resource for people in search of typographic symbols and as a design history of the genre. History The Noun Project was co-founded by Sofya Polyakov, Edward Boatman, and Scott Thomas and is headed by Polyakov. Boatman recalled his frustration, while working at an architectural firm, at the lack of a central repository for common icons, "things such as airplanes, bicycles and people." That idea morphed into a broader platform for visual communication. The site was launched on Kickstarter in December 2010, raising more than $14,000 in donations, with symbols from the National Park Service and other sources whose content was in the public domain. Site design was by the firm Simple.Honest.Work, with mentoring from the Designer Fund. The Noun Project has generated interest and new symbols by hosting a series of "Iconathons", the first of which was held in the summer of 2011. The sessions typically run five hours and include graphic designers, content experts, and interested volunteers, all working in small groups that focus on a specific issue, such as democracy, transportation or nutrition. The idea for the event came from Chacha Sikes, who was at the time a fellow at Code for America. Operation Contributors come from around the world. A 2012 New York Times story profiled one of them: Luis Prado, a graphic designer at the Washington State Department of Natural Resources, who uploaded 83 icons he had created for his agency, including a pruning saw, a logging truck and a candidate symbol for global warming, which he created when he could not find one online. The site has four stylistic guidelines: include only the essential characteristics of the idea conveyed, maintain a consistent design style, favor an industrial look over a hand-drawn one, and avoid conveying personal opinions, feelings and beliefs. Contributors select a public domain mark or a Creative Commons attribution license, which enables others to use the symbol with attribution, free of charge. The attribution requirement can be waived upon payment of a nominal fee, which is split between the artist and The Noun Project. The founders envisioned the site as being primarily useful for designers and architects, but the range of users includes people with autism and amyotrophic lateral sclerosis, who sometimes favor a visual language, as well as business professionals incorporating the symbols into presentations. References External links Official site Internet properties established in 2010 American companies established in 2010 Graphic design Kickstarter-funded software Symbols Companies based in Los Angeles 2010 establishments in California Creative Commons-licensed websites
The Noun Project
Mathematics
554
31,573,452
https://en.wikipedia.org/wiki/Euthanasia%20Coaster
The Euthanasia Coaster is the name given to a hypothetical steel roller coaster and euthanasia device designed with the sole purpose of killing its passengers. The concept was conceived in 2010 and made into a scale model by Lithuanian artist Julijonas Urbonas, a PhD candidate at the Royal College of Art in London. Urbonas, who had formerly been an amusement park employee, stated that the goal of his concept roller coaster is to take lives "with elegance and euphoria", either for euthanasia or execution purposes. John Allen, who had been the president of the Philadelphia Toboggan Company, inspired Urbonas with his description of the "ultimate" roller coaster as one that "sends out 24 people and they all come back dead". Design The concept design of the layout begins with a steep-angled lift that takes riders up to the top (for comparison, the tallest roller coaster in the world, Kingda Ka, has a top hat that is 139 metres [456 ft] in height), a climb that would take a few minutes to complete, allowing the passengers to contemplate their lives. From there, all passengers are given the choice to exit the train, if they wish to do so. If they do not, they would have some time to say their last words. All passengers are required to press a button to continue the ride, which then takes the train down a drop, propelling the train at speeds up to , close to its terminal velocity, before flattening out and speeding into the first of its seven slightly clothoid inversions. Each inversion would decrease in diameter to maintain the lethal 10 g on passengers as the train loses speed (centripetal acceleration varies as v²/r, so as the speed v falls, the loop radius r must shrink for the g-load to stay constant). After a sharp right-hand turn, the train would enter a straight track that goes back to the station, where the dead are unloaded and new passengers can board. Mechanism of action The Euthanasia Coaster would kill its passengers through prolonged cerebral hypoxia, or insufficient supply of oxygen to the brain. The ride's seven inversions would inflict 10 g (g-force) on its passengers for 60 seconds, causing g-force related symptoms starting with greyout, through tunnel vision, to blackout, and eventually g-LOC (g-force induced loss of consciousness) and death. Subsequent inversions, or a second run of the roller coaster, would serve as insurance against unintentional survival of more robust passengers. Exhibition The Euthanasia Coaster was first shown as part of the HUMAN+ display at the Science Gallery in Dublin in 2011. The display was later named the year's flagship exhibition by the Science Gallery. Within this theme, the coaster highlights the issues that come with life extension. The item was also displayed at the HUMAN+ exhibit at the Centre de Cultura Contemporània de Barcelona in 2015. In pop culture In 2012, Norwegian rock group Major Parkinson released "Euthanasia Roller Coaster", a digital single with lyrics alluding to Urbonas's Euthanasia Coaster. Sequoia Nagamatsu's novel How High We Go in the Dark, published on January 18, 2022, prominently features a euthanasia roller coaster for children afflicted with an incurable plague. References External links Computer animated simulation of the ride Urbonas explaining his design 2010 works Conceptual art Euthanasia device Execution methods Lithuanian inventions
Euthanasia Coaster
Physics,Technology
684
54,740,174
https://en.wikipedia.org/wiki/Bioinformatics%20Institute%20%28Singapore%29
The Bioinformatics Institute (abbreviation: BII) is one of the Biomedical Sciences Institutes of the Agency for Science, Technology and Research (A*STAR). BII was originally founded in 2001 by Dr Rajagopal as a support unit for bioinformatics and IT service management. However, since August 2007, it has been redefined as a biological research organisation upon the arrival of the current executive director, Dr Frank Eisenhaber. BII focuses on "computational biology-driven life science research aimed at the discovery of biomolecular mechanisms." BII also develops computer-based research tools and performs experimental verifications in its own experimental facilities or by collaborating with appropriate groups. BII is home to the journal Scientific Phone Apps and Mobile Devices, published with SpringerNature. There are currently four research divisions in BII: Biomolecular Sequence to Function Biomolecular Modelling and Design Imaging Informatics Translational Research Under Dr Sebastian Maurer-Stroh, the team at BII quality-checked genomic sequences uploaded by various countries to the GISAID database, which stores and shares COVID-19 virus data. External links References Genetics or genomics research institutions Bioinformatics Research institutes in Singapore
Bioinformatics Institute (Singapore)
Engineering,Biology
253
33,218,470
https://en.wikipedia.org/wiki/List%20of%20pusher%20aircraft%20by%20configuration
A pusher aircraft is a type of aircraft using propellers placed behind the engines and may be classified according to engine/propeller location and drive as well as the lifting-surface layout (conventional or three surface, canard, joined wing, tailless and rotorcraft). Some aircraft have a push-pull configuration with both tractor and pusher engines. The list includes these even if the pusher engine is just added to a conventional layout (engines inside the wings or above the wing, for example). Conventional and three surface layouts The conventional layout of an aircraft has wings ahead of the empennage. Direct drive Prop ahead of tail Between booms or frames Abrams P-1 Explorer 1937, 1 built Acapella 200 1982 homebuilt, 1 built AD Scout 1915 interceptor, 4 built ADI Condor 1981 2 seat motorglider, unk no. built AEA June Bug 1908 experimental, 1 built AEA Silver Dart 1909, first flight in Canada, 1 built Aero Dynamics Sparrow Hawk Mk.II 1984 2 seat homebuilt AGO C.II 1915 reconnaissance biplane, 15 built AHRLAC Holdings Ahrlac 2014 reconnaissance attack, 1 built Airco DH.1 1915 biplane, 2 seat, 100 built Airco DH.2 1915 biplane fighter, 453 built Alliet-Larivière Allar 4, 1938 experimental 2 seat, 1 built Akaflieg Stuttgart FS-26 Moseppl 1970 1 seat powered sailplane, unk no. built Akaflieg Stuttgart FS-28 Avispa 1972 2 seat transport, 1 built Alaparma Baldo 1949 1 seat, ca.35 built Anderson Greenwood AG-14 1950 2 seats, 6 built Applebay Zia 1982 1 seat ultralight motorglider, 4 built Avro 508 1915, 1 built Baldwin Red Devil 1911 aerobatic biplane, 6 built Blackburn Triplane 1917 fighter, 1 built Breguet Bre.4 1914 2 seat military biplane, about 100 built Breguet Bre.5 1915 2 seat military biplane, unk no. built Breguet Bre.12 1916 2 seat military biplane, unk no. built Bristol Boxkite 1910 trainer, 78 built Cessna XMC 1971 research aircraft, 1 built Cody British Army Aeroplane No 1 1908, 1 built Cody Michelin Cup Biplane 1910, 1 built Cody Circuit of Britain biplane 1911, 1 built Cody V biplane 1912, 2 built Cody VI biplane/floatplane 1913, 1 built Curtiss No. 1 1909 Golden Flyer biplane, 1 built Curtiss No. 2 1909 Reims racer biplane, 1 built Curtiss Model D 1911 biplane, 1 seat Curtiss Model E 1911 biplane floatplane, 17+ built Curtiss Autoplane 1917 (hops only) roadable aircraft, 1 built de Schelde Scheldemusch 1935 1 seat biplane trainer, 6 built de Schelde S.21 fighter mockup, 1940 (unflown) Edgley Optica 1979 ducted fan observation aircraft 21 built Fane F.1/40 1941 observation monoplane, 1 built Farman HF.20 1913 military biplane, unk no. built Farman MF.7 1911 biplane, unk no. built Farman MF.11 1913 biplane, unk no. built Farman F.30 1915 military biplane, unk no. built Farman F.40 1915 military biplane, unk no. built Fokker F.25 Promotor 1946 transport, 20 built Friedrichshafen FF.34 1916 patrol seaplane, 1 built General Aircraft GAL.33 Cagnet 1939 trainer, 1 built General Aircraft GAL.47 1940 observation, 1 built Grahame-White Type X Charabanc 1913 transport, 1 built Grahame-White Type XI 1914 reconnaissance biplane, 1 built Grahame-White Type XV 1913 trainer, 135 built Häfeli DH-1 1916 reconnaissance biplane, 6 built Hanriot H.110 1933 fighter, 1 built Henderson H.S.F.1 1929 transport, 1 built Heston JC.6/AOP 1947 2 seat reconnaissance, 2 built HFL Stratos 300 1996 1 seat ultralight motorglider Howard Wright 1910 Biplane 1910, 7 built NPP Aerorik Dingo 1997 multi-role amphibian (air cushion), 6 built Otto C.I 1915 reconnaissance biplane, unk no.
built Pemberton-Billing P.B.25 1915 scout, 20 built Port Victoria P.V.4 1917 floatplane, 1 built. Potez 75 1953 reconnaissance, 1 built Royal Aircraft Factory F.E.1 1910 biplane, 1 built Royal Aircraft Factory F.E.2 1915 military biplane, 1939 built Royal Aircraft Factory F.E.8 1915 biplane fighter, 295 built Royal Aircraft Factory F.E.9 1917 2 seat fighter, 3 built Royal Aircraft Factory N.E.1 1917 night fighter, 6 built Saab 21 1943 fighter, 298 built SAIMAN LB.2 1937 2 seat monoplane, 1 built Savoia-Pomilio SP.3 1917 reconnaissance biplane ca.350 built SCAL FB.30 Avion Bassou 1936 2 seat light aircraft, 2 built SECAN Courlis 1946 transport, unk no. built Short S.38 1912, 48 built Short S.80 Nile Pusher Biplane Seaplane 1913, 1 built Short S.81 1914, 1 built SIAI-Marchetti FN.333 Riviera 1962 4 seat amphibian, 29 built SNCASO SO.8000 Narval 1949 naval fighter, 2 built Sopwith Bat Boat 1913, 6 built Sopwith Gunbus 1914, 35 built (including floatplanes) Stearman-Hammond Y-1 1934 safety airplane ca.20 built Vickers F.B.5 1914, 224 built Vickers F.B.12 1916 fighter, ca.22 built Vickers F.B.26 Vampire 1917, 4 built Vickers VIM 1920, 35 built Voisin-Farman I 1907, 60 built Voisin L 1912, about 470 built Voisin III 1914 bomber, ca.3200 built Voisin IV Voisin V 1915 bomber, about 350 built Voisin VII 1916 reconnaissance biplane, about 100 built Voisin VIII 1916 bomber, about 1,100 built Voisin IX 1917 reconnaissance biplane, 1 built Voisin X 1917 bomber, about 900 built Vultee XP-54 1943 fighter, 2 built Wight Pusher Seaplane 1914, 11 built WNF Wn 16 1939, Austrian experimental aircraft Coaxially on rear fuselage Brditschka HB-3 1971 2 seat motorglider, unk no. built Buselec 2, 2010 project, with electric motor Gallaudet D-4 1918 seaplane, 2 built Austria Krähe 1960 1 seat motorglider, unk no. built Royal Aircraft Factory F.E.3/A.E.1 1913 armoured biplane, 1 built Royal Aircraft Factory F.E.6, 1914, 1 built RFB/Grumman American Fanliner 1973, 2 seats, 2 built RFB Fantrainer 1977, 2 seats, 47 built Rhein Flugzeugbau RW 3 Multoplan 1955 27 built Rhein Flugzeugbau Sirius I 1969, 2 seats Vickers Type 161 1931 fighter, 1 built Nacelle above fuselage 3I Sky Arrow (now marketed by Magnaghi Aeronautica) 1982 maiden flight, ULM/LSA/GA tandem two-seater high wing, some 50 built. AD Flying Boat, Supermarine Channel & Sea Eagle 1916 patrol and airline flying boat, 27 built. Aeromarine 40 1919 flying boat trainer, 50 built Aeromarine 50 1919 transport flying boat, unk no. built Aerosport Rail 1970 single seat ultralight, twin engine, 1 built Aichi AB-4 1932 flying boat, 6 built Aichi E10A 1934 reconnaissance flying boat, 15 built Aichi E11A 1937 reconnaissance flying boat, 17 built Airmax Sea Max 2005 2 seat biplane amphibian, unk no. built Amiot 110-S 1931 patrol flying boat, 2 built Benoist XIV 1913 transport flying boat, 2 built Beriev MBR-2 1931 flying boat, 1365 built Beriev MBR-7 1937 flying boat, unk no. built Boeing B-1 1919 transport flying boat, 1 built Boeing Model 204 Thunderbird 1929 flying boat, 7 built Boeing-Canada A-213 Totem 1932 flying boat, 1 built CAMS 30 1922 flying boat trainer, 31 built CAMS 31 1922 flying boat fighter, 2 built CAMS 37 1926 reconnaissance flying boat, 332 built CAMS 38 1923 racing flying boat, 1 built CAMS 46 1926 flying boat trainer, unk. 
no built Canadian Vickers Vedette 1924 forestry patrol flying boat, 60 built Canadian Vickers Vista 1927 1 seat monoplane flying boat, 1 built CANT 7 1924 flying boat trainer, 34 built CANT 10 1925 flying boat airliner, 18 built CANT 18 1926 flying boat trainer, 29 built CANT 25 1927 flying boat fighter, unk no. built Curtiss Model F 1912 flying boat, 150+ built Curtiss HS 1917 patrol flying boat, ca.1,178 built Curtiss-Wright CA-1 1935 amphibious flying boat, 3 built CZAW Mermaid 2005 2 seat amphibious biplane, unk no. built Donnet-Denhaut flying boat 1915 patrol flying boat, ca. 1,085 built Dornier Do 12 1932 amphibian, 4 seats, 1 built Dornier Do 18 1935 monoplane flying boat, 170 built FBA Type A, B, C 1913 patrol flying boat, unk no. built FBA Type H 1915 patrol flying boat, ~2000 built FBA 17 1923 flying boat trainer, 300+ built FBA 290 1931, amphibious flying boat trainer, 10 built FBA 310 1930 amphibious flying boat transport, 9 built Fizir AF-2 1931 amphibious flying boat trainer, 1 built Fokker B.I & III 1922 biplane reconnaissance flying boat, 2 built Fokker F.11/B.IV 1928 monoplane transport flying boat, 7 built General Aviation PJ 1933 monoplane flying boat, 5 built Grigorovich M-5 1915 patrol flying boat, ca.300 built Grigorovich M-9 1916 patrol flying boat, ca.500 built Grigorovich M-11 1916 fighter flying boat, ca.60 built Grigorovich M-15 1917 patrol flying boat, unk no. built Hansa-Brandenburg CC 1916 flying boat fighter, 73 built Hansa-Brandenburg W.20 1918 U-boat flying boat, 3 built Ikarus ŠM 1924 flying boat trainer, 42 built Kawanishi E11K 1937 monoplane flying boat, 2 built Lake Buccaneer 1959 amphibian, 4 seats, 1000+ built Loening XSL 1931 submarine airplane, 1 built Lohner E 1913, ca.40 built Lohner L, R and S 1915, 100+ built Loire 50 1933 training amphibian, 7 built Loire 130 1934 reconnaissance flying boat, 125 built Macchi L.2 1916, reconnaissance flying boat, 17 built Macchi M.3 1916, reconnaissance flying boat, 200 built Macchi M.5 1917, flying boat fighter, 244 built Macchi M.7 1918 flying boat fighter, 100+ built Macchi M.9 1918 flying boat bomber, 30 built Macchi M.12 1918 flying boat bomber, ca.10 built Macchi M.18 1920 flying boat, 90+ built Macchi M.26 1924 flying boat fighter, 2 built Macchi M.41 1927 flying boat fighter, 42 built Microleve Corsario 1988 ultralight amphibious homebuilt, unk no. built Norman Thompson N.T.2B 1917 flying boat trainer, 100+ built Norman Thompson N.T.4 1916 patrol flying boat, 72 built Nikol A-2 1939 amphibious flying boat trainer, 1 built Oeffag-Mickl G 1916 trimotor patrol flying boat, 12 built Osprey Osprey 2 1973 2 seat homebuilt, unk no. built Rohrbach Ro VII Robbe 1925 flying boat, 3 built Rohrbach Ro X Romar 1928 flying boat, 3 built Royal Aircraft Factory C.E.1 1918 flying boat, 2 built Savoia-Marchetti S.57 1923 reconnaissance flying boat, 20 built Savoia-Marchetti S.59 1925 reconnaissance flying boat, 240+ built Savoia-Marchetti S.62 1926 reconnaissance flying boat, 175+ built Savoia-Marchetti S.64 1928 distance record monoplane, 2 built Savoia-Marchetti S.66 1931 airliner flying boat, 24 built Savoia-Marchetti SM.78 1932 patrol flying boat, 49 built Savoia-Marchetti SM.80bis 1933 transport amphibian, 1+ built SCAN 20 1945 flying boat trainer, 24 built SIAI S.9 1918 flying boat, unk no. built SIAI S.12 1918 flying boat, 1 built SIAI S.13 1919 reconnaissance flying boat, unk no. 
built SIAI S.16 1919 flying boat, 100+ built SIAI S.51 1922 racing flying boat, 1 built SIAI S.67 1930 flying boat fighter, 3 built SNCAO 30 1938 flying boat trainer, 2 built Sperry Land and Sea Triplane 1918 patrol flying boat, 2 built Supermarine Baby 1918 flying boat fighter, 1 built Supermarine Commercial Amphibian 1920, 1 built Supermarine Scarab 1923, 12 built Supermarine Seal 1921, 4+ built Supermarine Seagull 1921, 34 built Supermarine Sea Eagle 1923, 3 built Supermarine Sea Lion I & II 1919 racing flying boats, 2 built Supermarine Sheldrake 1927, 1 built Supermarine Seagull/Walrus 1933 military flying boat, 740 built Taylor Coot 1969 2 seat homebuilt amphibian, 70 built Tellier T.3 and Tc.6 1917 patrol flying boat, ca.155 built Tisserand Hydroplum and SMAN Pétrel 1983 homebuilt amphibian, ca.63 built Tupolev MDR-2 1931 flying boat, 1 built Vickers Viking, Vulture and Vanellus 1919 amphibious flying boats, 34 built. Volmer VJ-21 Jaybird 1947 2 seat light aircraft, unk no. built Volmer VJ-22 Sportsman 1958 2 seat homebuilt amphibian, (not all are pushers), 100+ built Vought VE-10 Batboat 1919 navy flying boat, 1 built Below tail boom Alpaero Sirius 1984 1 seat UL motorglider, 20 built AmEagle American Eaglet 1975 ultralight motorglider, 12 built Jean St-Germain Raz-Mut 1976 1 seat ultralight, 7 built Nelson Dragonfly 1947 motorglider, 7 built Taylor Tandem, unk no. built Above tailboom, behind fuselage AAC SeaStar 1998 2 seat amphibious biplane, 91 built Advanced Aeromarine Buccaneer 1988 2 seat amphibious biplane, unk no. built Aerauto PL.5C 1949 1949 roadable aircraft, 1 built Aérostructure Lutin 80 1983 1 seat ultralight motorglider, 2 built Alpha J-5 Marco 1983 1 seat ultralight motorglider, unk no. built British Aircraft Company Drone 1932 1 seat ultralight, 33 built Curtiss-Wright Junior 1930 2 seat ultralight, 270 built Curtiss-Wright CW-3 Duckling 1931 ultralight amphibious flying boat, 3 built Fokker F.25 Promotor 1946 transport, 20 built Funk Fk6 1985 1 seat ultralight motorglider, unk no. built ICON Aircraft A5 2015 2 seat amphibious light sport, in production Janowski Don Kichot/J-1 1970 1 seat homebuilt, unk no. built Koolhoven F.K.30 Toerist 1927 2 seat monoplane, 1 built Loening Model 23 Air Yacht 1921 transport flying boat, 16 built Quad City Challenger 1983 2 seat ultralight, 3,000+ built Republic RC-3 Seabee 1945 4 seat amphibian, 1,060 built Siebel Si 201 1938 reconnaissance 2 built Spencer Air Car 1970 4 seat homebuilt amphibian, 51 built SZD-45 Ogar 1973 2 seat motorglider, 65 built Taylor Bird 1980 2 seat homebuilt, unk no. built Technoflug Piccolo 1989 1 seat ultralight motorglider, unk no. built Vickers Aircraft Wave 2 seat carbon fiber amphibious light sport aircraft, in final development Propeller behind the tail Air Quest Nova 21 1992 2 seat homebuilt, unk no. 
built Convair 111 Air Car 1945 roadable airplane, 1 built Pénaud Planophore 1871 first aerodynamically stable fixed-wing aeroplane, rubber powered model, 1 built Prescott Pusher 1985 4 seat homebuilt, ca.30 built Lateral behind wing AAC Angel, 1984 transport, 4 built Airco DH.3 1916 bomber, 2 built Avro 523 Pike 1916 bomber, 2 built Baumann Brigadier 1947 transport, 2 built Bell YFM-1 Airacuda 1937 interceptor, 13 built Boeing GA-1 1920 bomber 10 built Convair B-36 Peacemaker 1946 bomber, 384 built Curtiss H-1 America 1914 transatlantic biplane, 2 built EM-11 Orka 2003 4 seat transport, 5 built Friedrichshafen G.I 1915 bomber, 1 built Friedrichshafen G.II 1916 bomber, 35 built Friedrichshafen G.III 1917 bomber, 338 built Gotha G.II 1916 bomber, 11 built Gotha G.III 1916 bomber, 25 built Gotha G.IV 1916 bomber, 230 built Gotha G.V 1917 bomber, 205 built LFG Roland G.I 1915 bomber, 1 built Monsted-Vincent MV-1 Starflight 1948 airliner, 1 built Nord 2100 Norazur 1947 transport, 1 built OMA SUD Skycar 2007 transport, 1 built Piaggio P.136 1948 amphibious transport, 63 built Piaggio P.166 1957 transport, 145 built Piaggio P.180 Avanti 1986 executive transport, 216+ built Praga E-210 and E-211 1936 transport, 2 built Royal Aircraft Factory F.E.4 1916 bomber, 2 built Rumpler G.I, II and III 1915 bomber c.220 built Schutte-Lanz G.I 1915 bomber 1 built (behind wing) Udet U 11 Kondor 1926 airliner, 1 built Lateral nacelles Custer Channel Wing 1942 experimental aircraft, 4 built Embraer/FMA CBA 123 Vector 1990 airliner, 2 built NAL Saras 2004 airliner, 2 built Engines and props behind the pilot Birdman Chinook 1982 ultralight homebuilt, 1100+ built Ultralight trike or Flexwing Spectrum Beaver 1983 ultralight homebuilt, 2080+ built Paramotor or Powered paraglider Powered parachute Remote drive Propeller ahead of tail Within airframe Fischer Fibo-2a 1954 1 seat motorglider, 1 built Rhein Flugzeugbau RW 3 Multoplan 1955 RFB Fantrainer prototype, 27 built Rhein-Flugzeugbau Sirius II 1972 2 seat motorglider, unk no. 
built Megone biplane 1913 2 seat, 1 built Neukom AN-20C 1983 1 seat ultralight homebuilt motorglider, 1 built Behind wing Burgess model I 1913 patrol floatplane, 1 built Carden-Baynes Bee 1937 2 seat tourer, 1 built Eipper Quicksilver 1974 1 seat ultralight Mann & Grimmer M.1 1915, 1 built Raab Krähe 1958 1 seat motorglider, 30 built Inside tail Bede XBD-2/BD-3 1961 ducted fan boundary layer control aircraft, 1 built Mississippi State University XAZ-1 Marvelette 1962 experimental aircraft to test ideas XV-11 Marvel, 1 built Mississippi State University XV-11 Marvel 1965 boundary layer control test aircraft, 1 built Behind tail Aceair AERIKS 200 2002 2 seat kitplane, 1 built Acme Sierra 1948 1 seat experimental, 1 built Aerocar Mini-IMP 1974 1 seat homebuilt, 250+ built Aerocar IMP 4 seat, 1 built AmEagle American Eaglet 1975 1 seat self-launching ultralight sailplane, 12 built Antoinette I, 1906, 2 seats experimental, project Bede BD-5 1973 1 seat homebuilt, ca.150 built Bede BD-12 1998 2 seat homebuilt, 1 built Cirrus VK-30 1988 5 seat homebuilt, ca.13 built Dornier Do 212 1942 experimental amphibian, 1 built Douglas XB-42 Mixmaster 1944, bomber, 2 built Douglas DC-8 (piston airliner) 1945, transport project, not built Douglas Cloudster II 1947 transport, 1 built Göppingen Gö 9 1941 experimental propulsion aircraft, 1 built Grinvalds Orion 1981 4 seat homebuilt, ca.17 built Grob GF 200 1991 transport, 1 built HMPAC Puffin 1961 human powered aircraft, 2 built HPA Toucan 1972 human powered aircraft, 1 built Kasyanenko No. 5 1917 experimental biplane, 1 built LearAvia Lear Fan 1981 transport, 3 built LH Aviation LH-10 Ellipse 2007 2 seat homebuilt, 3 built Lockheed Big Dipper 1945 transport, 1 built Myasishchev Mayal 1992 multi-purpose amphibian, 1 built Miller JM-2 and Pushy Galore 1989 racer, 3 built Planet Satellite 1949 4 seat transport, 1 built Paulhan-Tatin Aéro-Torpille No.1 1911 monoplane, 1 built Pützer Bussard SR-57 1958 experimental 2 seater, 90 hp, 1 built Ryson STP-1 Swallow 1972 2 seat homebuilt motorglider, 1 built Taylor Aerocar 1949 2 seat roadable aircraft, 6 built Vmax Probe 1997 homebuilt racer, 1 built Waco Aristocraft 1947 transport, 1 built Propeller above fuselage Schleicher ASH 26 1995 1 seat glider with retractable propeller, 234 built Canard and tandem layouts A canard is an aircraft with a smaller wing ahead of the main wing. A tandem layout has both front and rear wings of similar dimensions. 
Direct drive AASI Jetcruzer 1989 transport, 3 built Ambrosini SS.2 & 3 1935 experimental aircraft, 2 built Ambrosini SS.4 1939 prototype fighter, 1 built Avtek 400 1984 transport, 1 built Beechcraft Starship 1989 airliner, 53 built Curtiss-Wright XP-55 Ascender 1943 prototype fighter, 3 built E-Go Aeroplanes e-Go 2013 ultralight and light-sport aircraft, 1 built Fabre Hydravion 1910, first successful floatplane, 1 built Gee Bee Model Q 1931 experimental, 1 built Lockspeiser LDA-01 1971 experimental scale development aircraft, 1 built Mikoyan-Gurevich MiG-8 Utka 1945 swept wing demonstrator prototype, 1 built Miles M.35 Libellula 1942, experimental tandem wing carrier-based fighter, 1 built Miles M.39B Libellula 1943, experimental (5/8 scale) tandem wing carrier-based bomber, 1 built OMAC Laser 300 1981, transport, 3 built Paulhan biplane 1910, 3 built Rutan Defiant 1978 4 seat homebuilt, 19 built Rutan Long-EZ 1979 2 seat homebuilt, ca.800 built Rutan VariEze 1975 2 seat homebuilt, ca.400 built Rutan VariViggen 1972 homebuilt, ca.20 built Santos-Dumont 14-bis 1906 first public controlled sustained flight, 1 built Steve Wright Stagger-Ez 2003 modified Cozy homebuilt, 1 built Voisin Canard 1911 biplane, 10+ built Remote engine mounting AeroVironment Gossamer Albatross 1979 human powered aircraft, 2 built AeroVironment Gossamer Condor 1977 human powered aircraft won Kremer prize, 1 built British Aerospace P.1233-1 Saba 1988 anti-helicopter and close air support attack aircraft, project Deperdussin-de Feure model 2, 1910, experimental, 1 built Dickey E-Racer 1986 homebuilt, unk no. built Kyūshū J7W, prototype fighter, 1 seat, 2130 hp, 1945, 2 built Langley Aerodrome Number 5 1896 experimental model Wright Flyer 1903 experimental airplane, first recognized powered, sustained flight, 1 built Wright Model A 1906 biplane, ca.60 built Wright Model B 1910 biplane, ca.100 built Tailless aircraft, flying wings and closed wing Tailless aircraft lack a horizontal stabilizer, flying wings lack a distinct fuselage, with crew, engines, and payload contained within the wing structure. Ben Brown SC ca.1932, experimental joined wing, 1 built DINFIA IA 38 1960 transport, 1 built Dunne D.4 1908, 1 built Dunne D.5 1910, 1 built Dunne D.6 & D.7 1911 monoplane, 2 built Dunne D.8 1912, 5 built Facet Opal, 1988, 1 seat, experimental flying wing, 1 built Fauvel AV.45 1960 1 seat motor glider, unk no. built Handley Page Manx 1943 experimental tailless aircraft, 1 built Horten V 1938 powered testbed, 3 built Kayaba Ku-4 1941 (not flown) research aircraft, 1 built Ligeti Stratos 1985 1 seat homebuilt, 2 built Lippisch Delta 1 1931, experimental tailless monoplane, 1 built M.L. 
Aviation Utility 1953 inflatable wing, 4 built Northrop N-1M 1940 experimental flying wing, 1 built Northrop N-9M 1942 experimental flying wing, 4 built Northrop XP-56 Black Bullet 1943 tailless fighter, 2 built Northrop B-35 1946 bomber, 4 built Pterodactyl Ascender 1979 1 seat ultralight, 1396 built Rohr 2-175 1974 2 seat roadable aircraft, 1 built Sud-Est SE-2100, prototype tourer, 2 seats, 140 hp, 1945 Waterman Arrowbile 1937 roadable aircraft, 5 built Waterman Arrowplane 1935 roadable aircraft, 1 built Waterman Whatsit 1932 roadable aircraft, 1 built Westland-Hill Pterodactyl series 1928, several built Rotorcraft Avian Gyroplane 1960, 2 seats, ca.6 built Bensen autogyros CarterCopter / Carter PAV Fairey Jet Gyrodyne experimental gyrodyne Wallis autogyros McDonnell XV-1 experimental compound helicopter, 550 hp Sikorsky X2 experimental compound helicopter Sikorsky S-97 Raider experimental compound helicopter Push-pull aircraft Sides of fuselage Bristol Braemar 1918 bomber, 2 built Dornier Do K 1929 airliner, 3 built Fokker F.32 1929 airliner, 7 built Farman F.121 Jabiru 1923 airliner, 9 built Farman F.220 1932 airliner and bomber, ca.80 built Handley Page V/1500 1918 bomber, 63 built Zeppelin-Staaken R.V 1917 bomber, 3 built Above fuselage Bartini DAR 1936 patrol flying boat, 1 built Blériot 125 1931 airliner, 1 built Boeing XPB 1925 patrol flying boat, 1 built Bratu 220 1932 airliner, 1 built Bristol Pullman 1920 airliner, 1 built CAMS 33 1923 patrol flying boat, 21 built CAMS 51 1926 flying boat, 3 built CAMS 53 1928 transport flying boat, 30 built CAMS 55 1928 patrol flying boat, 112 built CAMS 58 1933 airliner flying boat, 4 built Caproni Ca.73 1925 bomber unk. no. built Caproni Ca.90 1929 bomber, 1 built Chyetverikov ARK-3 1936 flying boat, 7 built Comte AC-3 1930 bomber, 1 built Curtiss NC 1918 patrol flying boat, 10 built Dornier Do 18 1935 patrol flying boat, 170 built Dornier Do 26 1939 push-pull flying boat, 6 built Dornier Wal 1922 flying boat, ca.300 built Dornier Do P 1930 bomber, 1 built Dornier Do R Superwal 1926 airliner flying boat, 19 built Dornier Do S 1930 flying boat, 1 built Dornier X 1929 airliner flying boat, 3 built Farman F.180 1927 airliner, 3 built Felixstowe Porte Baby 1915 patrol flying boat, 11 built Hinkler Ibis 1930 2 seat monoplane, 1 built Johns Multiplane 1919 bomber, 1 built Kawasaki Ka 87 1926 bomber, 28 built Latécoère 21 1926 airliner flying boat, 7 built Latécoère 23 1927 transport flying boat, 1 built Latécoère 24 1927 mailplane flying boat, 1 built Latécoère 32 1928 mailplane flying boat, 8 built Latécoère 340 1930 airliner flying boat, 1 built Latécoère 380 1930 flying boat, 5 built Latécoère 500 1932 transport flying boat, 2 built Latham 47 1928 patrol flying boat, 16 built Lioré et Olivier LeO H-27 1933 mailplane flying boat 1 built Loire 70 1933 patrol flying boat, 8 built Macchi M.24 1924 flying boat, unk. 
no built Naval Aircraft Factory TF 1920 fighter flying boat, 4 built NVI F.K.33 1925 airliner, 1 built Savoia-Marchetti S.55 1924 flying boat, 243+ built Savoia-Marchetti S.63 1927 flying boat, 1 built SIAI S.22 1921 racing flying boat, 1 built Sikorsky XP2S 1932 patrol flying boat, 1 built Tupolev ANT-16 1933 bomber 1 built Tupolev ANT-20 1934 transport, 2 built Tupolev MTB-1 1934 patrol flying boat, 25 built Extremities Aero Design DG-1 1977 push-pull racer, 1 built Caproni Ca.60 1921 airliner flying boat, 1 built Dornier Do 335 1943 push-pull fighter, 38 built Moynet Jupiter 1963 push-pull transport, 2 built Rutan Defiant 1978 transport, 19+ built Rutan Voyager 1984 endurance record aircraft, 1 built Star Kraft SK-700 1994 push-pull transport, On nose and between booms Adam A500 2002 push-pull transport, 7 built Bellanca TES distance record aircraft, 1 built Canaero Toucan 1986 ultralight, 16+ built Cessna Skymaster 1963 push-pull transport, 2993 built Fokker D.XXIII 1939 fighter, 1 built Marton X/V (RMI-8) 1944 (unflown) fighter, 1 destroyed before completion Moskalyev SAM-13 1940 (unflown) push-pull fighter, 0 built Schweizer RU-38 Twin Condor 1995 push-pull reconnaissance aircraft, 5 built Savoia-Marchetti S.65 1929 racing floatplane 1 built Siemens-Schuckert DDr.I 1917 fighter, 1 built Thomas-Morse MB-4 1920 mailplane, 2+ built On wings and between booms AD Seaplane Type 1000 1916 bomber, 1 built Anatra DE 1916 bomber, 1 built Caproni Ca.1 1914 bomber, 162 built Caproni Ca.2 1915 bomber, 9 built Caproni Ca.3 1916 bomber, ca.300 built Caproni Ca.4 1917 triplane bomber, 44-53 built Caproni Ca.5 1917 bomber, 662 built Gotha G.VI 1918 bomber, 2 built Grahame-White Ganymede 1919 bomber/airliner, 1 built See also Push-pull configuration Tractor configuration Pusher configuration List of pusher aircraft by configuration and date References Notes Citations Bibliography Aircraft configurations
List of pusher aircraft by configuration
Engineering
6,514
69,155,769
https://en.wikipedia.org/wiki/%28%2B%29-Morphine
(+)-Morphine, also known as dextro-morphine, is the "unnatural" enantiomer of the opioid drug (−)-morphine. Unlike "natural" levo-morphine, unnatural dextro-morphine is not present in Papaver somniferum and is the product of laboratory synthesis. In contrast to natural morphine, the unnatural enantiomer has no affinity or efficacy at the mu opioid receptor and therefore has no analgesic effects. On the contrary, in rats, (+)-morphine acts as an antianalgesic and is approximately 71,000 times more potent as an antianalgesic than (−)-morphine is as an analgesic. (+)-Morphine derives its antianalgesic effects from being a selective agonist of Toll-like receptor 4 (TLR4); because this action does not involve binding to opioid receptors, (+)-morphine can effectively reverse the analgesic properties of (−)-morphine. TLR4 is involved in immune system responses, and activation of TLR4 induces glial activation and the release of inflammatory mediators such as TNF-α and interleukin-1. See also (+)-Naloxone Dextromethorphan References Morphine Cyclohexenols Ethers Hydroxyarenes
(+)-Morphine
Chemistry
290
40,558,045
https://en.wikipedia.org/wiki/Myeloid-derived%20suppressor%20cell
Myeloid-derived suppressor cells (MDSC) are a heterogeneous group of immune cells from the myeloid lineage (a family of cells that originate from bone marrow stem cells). MDSCs expand under pathologic conditions such as chronic infection and cancer, as a result of altered haematopoiesis. MDSCs differ from other myeloid cell types in that they have immunosuppressive activities, as opposed to immune-stimulatory properties. Similar to other myeloid cells, MDSCs interact with immune cell types such as T cells, dendritic cells, macrophages and natural killer cells to regulate their functions. Tumors with high levels of infiltration by MDSCs have been associated with poor patient outcome and resistance to therapies. MDSCs can also be detected in the blood. In patients with breast cancer, levels of MDSC in blood are about 10-fold higher than normal. The size of the myeloid suppressor compartment is considered to be an important factor in the success or failure of cancer immunotherapy, highlighting the importance of this cell type for human pathophysiology. A high level of MDSC infiltrate in the tumor microenvironment correlates with shorter survival times of patients with solid tumors and could mediate resistance to checkpoint inhibitor therapy. Studies are needed to determine whether MDSCs are a population of immature myeloid cells that have stopped differentiation or a distinct myeloid lineage. Formation MDSCs are formed from bone marrow precursors when myelopoietic processes are disrupted by various illnesses. The growing tumors of cancer patients produce cytokines and other substances that affect MDSC development. Tumor cell lines overexpress colony-stimulating factors (G-CSF and GM-CSF) and IL-6, which promote the development of MDSCs that have immune-suppressive function in vivo. Other cytokines, including IL-10, IL-1, VEGF, and PGE2, have been associated with the formation and regulation of MDSCs. GM-CSF promotes synthesis of MDSCs from bone marrow, and the transcription factor c/EBPβ regulates development of MDSCs in bone marrow and in tumors. STAT3 also promotes development of MDSCs, whereas IRF8 could counteract MDSC-inducing signals. MDSCs migrate as immature cells from the bone marrow to peripheral tissues (or tumors), where they differentiate into mature macrophages, dendritic cells, and neutrophils without suppressive phenotypes under homeostatic conditions, but become polarized when exposed to pro-inflammatory compounds, chemokines, and cytokines. In the tumor microenvironment, they suppress the anti-tumor immune response. The presence of MDSCs has been associated with progression of colon cancer, tumor angiogenesis, and metastases. In addition to producing NO and ROS, MDSCs secrete immune-regulatory cytokines such as TNF, TGF-β, and IL-10. There are subpopulations of MDSC that have some common suppressive characteristics but also have their own unique features; different subpopulations can be found in different areas of the same tissue or tumor. Tumor-infiltrating MDSCs develop in response to environmental factors, upregulating CD38 (which removes NAD from the environment and is necessary for mitochondrial biosynthesis), PD-L1 (an immune checkpoint protein) and LOX-1 (which promotes fatty acid consumption and fatty acid oxidation). Tumor-infiltrating MDSCs also secrete exosomes that can inhibit the anti-tumor immune response. Immature Myeloid Cells in Formation of MDSCs Myeloid-derived suppressor cells (MDSCs) are a recently discovered bone-marrow-derived cell type.
They have the characteristics of immature cells with immunomodulatory properties. In fact, they are used in research to develop therapeutic strategies against both autoimmune diseases and exacerbated inflammation, which is of special interest in the central nervous system. The main drawback of MDSCs is that they are only formed under inflammatory conditions and are thus commonly gathered from diseased subjects. However, recent research at the University of Salamanca has demonstrated that immature myeloid cells (IMCs), the precursors of MDSCs, also have potential immunosuppressive activity under pathological conditions. IMCs can be gathered directly from healthy bone marrow, which is a more clinically feasible source. Under pathological conditions, IMCs behave as MDSCs and exert immunomodulation; they can therefore be used directly, avoiding the need to gather cells from diseased subjects. In addition, IMCs are promising adjuvants in neurosurgery. Their application during intracranial surgery almost completely prevented the impairments caused by this procedure in mice, probably by modulating inflammatory patterns. In this sense, IMCs have a direct pre-clinical application in minimizing the secondary effects inherent to intracranial surgery, especially in a diseased environment. MDSC differentiation In humans MDSCs derive from bone marrow precursors, usually as the result of perturbed myelopoiesis caused by different pathologies. In cancer patients, growing tumors secrete a variety of cytokines and other molecules which are key signals involved in the generation of MDSC. Tumor cell lines overexpressing colony-stimulating factors (e.g. G-CSF and GM-CSF) have long been used as in vivo models of MDSC generation. GM-CSF, G-CSF and IL-6 allow the in vitro generation of MDSC that retain their suppressive function in vivo. In addition to CSF, other cytokines such as IL-6, IL-10, VEGF, PGE2 and IL-1 have been implicated in the development and regulation of MDSC. The myeloid-differentiation cytokine GM-CSF is a key factor in MDSC production from bone marrow, and it has been shown that the c/EBPβ transcription factor plays a key role in the generation of in vitro bone-marrow-derived and in vivo tumor-induced MDSC. Moreover, STAT3 promotes MDSC differentiation and expansion, and IRF8 has been suggested to counterbalance MDSC-inducing signals. In mice Murine MDSCs show two distinct phenotypes that divide them into either monocytic MDSCs or granulocytic MDSCs. The relationship between these two subtypes remains controversial, as they closely resemble monocytes and neutrophils respectively. While monocyte and neutrophil differentiation pathways within the bone marrow are antagonistic and dependent on the relative expression of the IRF8 and c/EBP transcription factors (and hence there is not a direct precursor-progeny link between these two myeloid cell types), this seems not to be the case for MDSCs. Monocytic MDSCs seem to be precursors of the granulocytic subset, as demonstrated both in vitro and in vivo. This differentiation process is accelerated upon tumor infiltration and is possibly driven by the hypoxic tumor microenvironment. Phenotype Natural killer cells The depletion of MDSCs from mice with liver cancer significantly increases natural killer (NK) cell cytotoxicity, NKG2D expression, and interferon gamma (IFN-γ) production; MDSCs otherwise induce NK cell anergy. MDSC depletion restored the function of impaired hepatic NK cells.
MDSCs derived from chronic inflammation cause T- and NK-cell dysfunction along with downregulation of the TCR ζ chain (CD247). The immunosuppressive milieu directly affects CD247, which is crucial in initiating immune responses. MDSCs, acting through membrane-bound TGF-β1, constitutively suppress hepatic NK cells in tumor-bearing hosts. B cells A number of studies have reported MDSC regulation of B-cell responses to activators and mitogens that are not MHC-regulated, as well as antigen-specific T cell responses. Infection with the LP-BM5 retrovirus can cause acquired immune deficiency in mice, which induces highly immunosuppressive CD11b+Gr-1+Ly6C+ MDSCs. These cells suppress T and B cells by signaling via nitric oxide (NO). Dendritic cells Immune responses against tumors and infections are regulated by myeloid-derived suppressor cells and dendritic cells (DCs). Treating bone-marrow-derived MDSCs with the combination of LPS and IFN-γ limits DC formation and improves MDSC suppressive action. MDSCs have been shown to reduce the effectiveness of DC vaccinations. MDSC frequency has no effect on DC production or survivability, but it does cause a dose-dependent reduction in DC maturation. High frequencies of CD14+HLA-DR−/low cells can stifle DC maturation and decrease DC function, both of which are critical for vaccination effectiveness. As a result, the balance between MDSCs and DCs might be crucial in tumor and infection treatment. Activity/function MDSCs are immune-suppressive and play a role in tumor maintenance and progression. MDSCs also obstruct therapies that seek to treat cancer through both immunotherapy and other, non-immune means. MDSCs were originally described as suppressors of T cells, in particular of CD8+ T-cell responses. The spectrum of MDSC activity also encompasses NK cells, dendritic cells and macrophages. The suppressor activity of MDSC is determined by their ability to inhibit the effector function of lymphocytes. Inhibition can be caused by different mechanisms; it is primarily attributed to the effects of the metabolism of L-arginine. Another important factor influencing the activity of MDSC is the production of suppressive reactive oxygen species (ROS). Effect of MMR vaccination MDSCs can also play a positive regulatory role. It has been reported that the MMR vaccine stimulates MDSC populations in people taking the vaccine, inhibiting septic inflammation and mortality; this effect is broadly applicable not only to measles, mumps, and rubella, but extends to COVID-19-induced cytokine inflammation. This vaccination inducement appears to be neither permanent nor chronic. Despite MDSCs being immunosuppressive in certain instances, the MMR vaccine itself is immunostimulatory. MDSC inhibitors In addition to host-derived factors, pharmacologic agents also have a profound impact on MDSC. Chemotherapeutic agents belonging to different classes have been reported to inhibit MDSC. Although this effect may well be secondary to inhibition of hematopoietic progenitors, there may be grounds to search for selectivity based on the long-known differential effects of these agents on immunocompetent cells and macrophages. In 2015, MDSCs were compared to immunogenic myeloid cells, highlighting a group of core signaling pathways that control pro-carcinogenic MDSC functions.
Many of these pathways are known targets of chemotherapy drugs with strong anti-cancer properties. There are currently no FDA-approved drugs developed to target MDSCs, but the experimental INB03 has entered early clinical trials. There is promising evidence for inhibiting Galectin-3 as a therapeutic target to reduce MDSCs. In a Phase 1b clinical trial of GR-MD-02, developed by Galectin Therapeutics, investigators observed a significant decrease in the frequency of suppressive myeloid-derived suppressor cells following treatment in responding melanoma patients. History The term myeloid-derived suppressor cell originated in a 2007 journal article published in Cancer Research by Gabrilovich et al. Publications in 2008 established that there are two subpopulations of MDSC: mononuclear MDSC (M-MDSC) and polymorphonuclear or granulocytic MDSC (PMN-MDSC). M-MDSC are similar to monocytes found in blood, while PMN-MDSC are physically akin to neutrophils. References Cell biology
Myeloid-derived suppressor cell
Biology
2,619
22,419,757
https://en.wikipedia.org/wiki/Name%20collision
In computer programming, a name collision is the nomenclature problem that occurs when the same variable name is used for different things in two separate areas that are joined, merged, or otherwise go from occupying separate namespaces to sharing one. As with the collision of other identifiers, it must be resolved in some way for the new software (such as a mashup) to work correctly. Problems of name collision, and methods to avoid them, are a common issue in introductory-level analysis of computer languages, such as C++. History The term "name collision" has been used in computer science for more than three decades when referring to names in various classification systems. Avoiding name collisions There are several techniques for avoiding name collisions (see the sketch following this entry), including the use of: namespaces - to qualify each name within a separate name group, so that the fully qualified names differ from each other. renaming - to change the name of one item (typically the one used less often) into some other name. prefixing - putting unique characters before the names so that the names differ and further name collisions are unlikely to happen by accident. See also Local variables, variable data items that are local to a module Name mangling Naming collision Notes References Programming language design Information theory
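As an illustration of the techniques listed above, the following minimal C# sketch (all names are hypothetical, invented for this example) shows two libraries that both define a type named Logger, with the collision resolved by namespace qualification and by renaming through a using alias; the VendorA/VendorB namespace names themselves illustrate prefixing at the library level.

// Renaming: a using alias gives one of the colliding types a new local name.
using BLog = VendorB.Logger;

// Two independently developed libraries happen to define the same type name.
namespace VendorA { public class Logger { public void Write(string msg) => System.Console.WriteLine("A: " + msg); } }
namespace VendorB { public class Logger { public void Write(string msg) => System.Console.WriteLine("B: " + msg); } }

public static class Program
{
    public static void Main()
    {
        // An unqualified "Logger" would be ambiguous here, so each use is
        // disambiguated: a fully qualified name for one, the alias for the other.
        var a = new VendorA.Logger(); // namespace qualification
        BLog b = new BLog();          // renaming via a using alias
        a.Write("hello");
        b.Write("world");
    }
}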
Name collision
Mathematics,Technology,Engineering
255
24,257,028
https://en.wikipedia.org/wiki/NDepend
NDepend is a static analysis tool for C# and .NET code to manage code quality and security. The tool offers a large number of features, from CI/CD web reporting to quality gates and dependency visualization. For that reason, the community refers to it as the "Swiss Army Knife" for .NET developers.
Features
The main features of NDepend are:
Interactive web reports about all aspects of .NET code quality and security (sample reports here); reports can be built on any platform: Windows, Linux and macOS
Roslyn Analyzers issues import (https://www.ndepend.com/docs/reporting-roslyn-analyzers-issues)
Quality gates
CI/CD integration with Azure DevOps, GitHub Actions, Bamboo, Jenkins, TeamCity, AppVeyor
Dependency visualization (using dependency graphs and a dependency matrix)
Smart technical debt estimation
Declarative code rules over C# LINQ queries (CQLinq)
Software metrics (NDepend currently supports more than 100 code metrics: cyclomatic complexity; afferent and efferent coupling; relational cohesion; Google page rank of .NET types; percentage of code covered by tests; etc.)
Code coverage data import from Visual Studio coverage, dotCover, OpenCover, NCover, NCrunch; all results are compared against a baseline, allowing the user to focus on newly identified issues
Integration with Visual Studio 2022, 2019, 2017, 2015, 2013, 2012, 2010, or standalone operation through VisualNDepend.exe, side by side with JetBrains Rider or Visual Studio Code
Code rules through LINQ queries (CQLinq)
Live code queries and code rules through LINQ queries are the backbone of NDepend; all features use them extensively. Here are some sample code queries:
Base class should not use derivatives:
// <Name>Base class should not use derivatives</Name>
warnif count > 0
from baseClass in JustMyCodeTypes
where baseClass.IsClass &&
      baseClass.NbChildren > 0 // <-- for optimization!
let derivedClassesUsed = baseClass.DerivedTypes.UsedBy(baseClass)
where derivedClassesUsed.Count() > 0
select new { baseClass, derivedClassesUsed }
Avoid making complex methods even more complex (source code cyclomatic complexity):
// <Name>Avoid making complex methods even more complex (source code cyclomatic complexity)</Name>
warnif count > 0
from m in JustMyCodeMethods
where !m.IsAbstract &&
      m.IsPresentInBothBuilds() &&
      m.CodeWasChanged()
let oldCC = m.OlderVersion().CyclomaticComplexity
where oldCC > 6 && m.CyclomaticComplexity > oldCC
select new {
    m,
    oldCC,
    newCC = m.CyclomaticComplexity,
    oldLoc = m.OlderVersion().NbLinesOfCode,
    newLoc = m.NbLinesOfCode,
}
Additionally, the tool provides a live CQLinq query editor with code completion and embedded documentation.
See also
Design Structure Matrix
List of tools for static code analysis
Software visualization
Sourcetrail
External links
NDepend reviewed by the .NET community
Exiting The Zone Of Pain: Static Analysis with NDepend (Program Manager, Microsoft) discusses NDepend
Stack Overflow discussion: use of NDepend
Abhishek Sur, on NDepend
NDepend code metrics by Andre Loker
Static analysis with NDepend by Henry Cordes
Hendry Luk discusses Continuous software quality with NDepend
Jim Holmes (author of the book "Windows Developer Power Tools"), on NDepend
Mário Romano discusses Metrics and Dependency Matrix with NDepend
Nates Stuff review
Scott Mitchell (MSDN Magazine), Code Exploration using NDepend
Travis Illig on NDepend
Books that mention NDepend
Girish Suryanarayana, Ganesh Samarthyam, and Tushar Sharma.
Refactoring for Software Design Smells: Managing Technical Debt (2014) Marcin Kawalerowicz and Craig Berntson. Continuous Integration in .NET (2010) James Avery and Jim Holmes. Windows developer power tools (2006) Patrick Cauldwell and Scott Hanselman. Code Leader: Using People, Tools, and Processes to Build Successful Software (2008) Yogesh Shetty and Samir Jayaswal. Practical .NET for financial markets (2006) Paul Duvall. Continuous Integration (2007) Rick Leinecker and Vanessa L. Williams. Visual Studio 2008 All-In-One Desk Reference For Dummies (2008) Patrick Smacchia. Practical .Net 2 and C# 2: Harness the Platform, the Language, the Framework (2006) Static program analysis tools .NET programming tools Software metrics
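To complement the NDepend entry above with one more CQLinq illustration: the following query is hypothetical, written for this document rather than taken from NDepend's built-in rule set, and it uses only constructs that already appear in the samples above (warnif, JustMyCodeMethods, NbLinesOfCode, CyclomaticComplexity).

// <Name>Avoid methods that are both long and complex (hypothetical sample rule)</Name>
warnif count > 0
from m in JustMyCodeMethods
where m.NbLinesOfCode > 30 && m.CyclomaticComplexity > 10
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }

Because CQLinq is ordinary C# LINQ syntax over NDepend's code model, a rule like this can be pasted into the live query editor mentioned above and evaluated immediately against a code base.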
NDepend
Mathematics,Engineering
1,064
45,407,528
https://en.wikipedia.org/wiki/Something%20New%20%28political%20party%29
Something New was a political party in the United Kingdom, founded in October 2014. The party was primarily based on the concept of an open-source manifesto, which means that it could be described as a party of the radical centre, as it combines ideas from the left and right of politics. It could also be described as syncretic. As such, Something New has no fixed ideology and instead believes in evidence-based policy creation. The party stood two candidates at the 2017 general election in Horsham and Ross, Skye and Lochaber, winning 0.6% and 0.5% of the vote respectively. The party voluntarily deregistered with the Electoral Commission on 5 November 2020. History Something New was founded in 2013, and was revived in October 2014. The party was registered with the Electoral Commission on 12 March 2015, naming Dr Raymond James Smith as its Leader, Alexander Hilton as its Treasurer and Paul Robinson as its Nominating Officer. Hilton had been the Treasurer and Nominating Officer for the first incarnation of Something New in 2013. At the 2015 general election, James Smith, who works as a software developer at the Open Data Institute, stood for election in Horsham, which was the constituency of Francis Maude, although he stepped down ahead of the election. Smith, in his election campaign, held a series of meetings with constituents in order to "give people a choice and increase the level of debate." Smith raised the money for his campaign through the use of the crowdfunding website, Crowdfunder, and in 27 hours had already raised the £500 for his election deposit. Following the ITV Leaders' Debate on 2 April 2015, Smith filmed his own responses to the questions that were put to the leaders and posted it on YouTube. Something New also stood a candidate in South West Surrey, Paul Robinson, a former Royal Navy officer and now a Director of Seedpod, his own business. From May 2011 to May 2015, he was a councillor on Godalming Town Council, serving from 2011 to 1 October 2014 as a Conservative, and from 1 October 2014 to 7 May 2015 as a member of Something New. At the 2015 local elections, he stood for re-election to Godalming Town Council and also for election to Waverley Borough Council. Robinson's wife, Rebecca Robinson, a fellow Director of Seedpod, also stood for Something New on Godalming Town Council. In the run-up to the 2015 general election, Something New formed party alliances with the Whig Party, My MP 2015 and Rebooting Democracy, and it cross-endorsed candidates from both the Whig Party and Rebooting Democracy. Both Smith and Robinson signed the My MP 2015 pledge to respect the will of their constituents. Smith signed the West Sussex County Times "Free Speech Charter." Something New also recommended several Independent candidates and all the candidates being stood by the Pirate Party UK. In the course of the election campaign, both Smith and Robinson attended several pre-election hustings. On 21 March 2015, Smith attended one husting organised by the Sussex branch of the Campaign to Protect Rural England. Robinson attended one husting in South West Surrey that included the incumbent Member of Parliament and Health Secretary Jeremy Hunt and the chief challenger, National Health Action Party candidate Louise Irvine. James Smith stated that he wished to beat the Green Party candidate in Horsham, as the Green Party received only 570 votes at the 2010 general election with very little campaigning. However, Smith only received 375 votes whereas the Green Party candidate received 2,151. 
Paul Robinson came 7th in South West Surrey, winning 320 votes or 0.6% of the vote. He also came 4th in the Waverley Borough Council election in the Godalming Central and Ockford Ward, winning 485 votes. Following the 2015 general election, Something New stood a candidate, Jessie Macneil-Brown, in a by-election in the Stepney Green Ward on Tower Hamlets London Borough Council that was held on 11 June 2015. Macneil-Brown won just over 1% of the vote and came last out of all the candidates. Something New also intended to contest the 2016 London Assembly election. On 14 May 2015, Lindsey Garrett, Chair of the New Era Tenants Association, was announced as Something New's candidate for Mayor of London in the 2016 London mayoral election. Garrett was instrumental in removing Westbrook Partners from the estate and worked with Russell Brand and other New Era residents throughout the campaign. However, on 30 November 2015, it was announced via the Something New website that Garrett had withdrawn her potential candidacy. OpenPolitics Project The OpenPolitics Project was launched in August 2013. It was an open-source manifesto, in that anyone was free to contribute a policy that was then discussed and subject to consensus or scrapped. It combined elements of open-source governance with direct democracy and consensus democracy. The project was organised on GitHub and the contribution process was operated in a way similar to Wikipedia's. The manifesto was supported by a base of active contributors, numbering roughly 25. Any candidate was free to stand on the OpenPolitics Manifesto; however, only Something New and its two candidates at the 2015 general election pledged to stand on its policies. James Smith has said, on behalf of Something New, that the three policies that he would prioritise would be to "change the voting system to three-member single transferable vote, tackle off-shoring of profits, and target of no new fossil-fuel vehicles by 2030." Smith has also described the Manifesto as "never 'finished,' never 'published.' It's a living document, always being updated and improved." On behalf of Something New and the OpenPolitics Project, Smith wrote an essay that was included in the Design Commission's report on "Designing Democracy." The essay appeared on pages 67 and 68 of the report in Section 4: The Stuff of Democracy. The inquiry was headed by Dr Richard Simmons and John Howell and the report was launched on 23 March 2015. Smith concluded his essay with the line, "The Open Revolution is here to change everything." Party leaders Electoral performance Parliamentary elections General Election 2015 Something New stood two candidates in the 2015 general election, but also endorsed candidates from other parties and recommended several candidates in constituencies where it did not nominate a candidate. It recommended that people vote for all the Pirate Party UK candidates and several independent candidates in several constituencies where it was not standing. Something New also endorsed "Allied Candidates" from the Whig Party and Rebooting Democracy. General Election 2017 Local elections Local elections 2015 Something New stood one candidate in the 2015 local elections: Paul Robinson in Godalming Central and Ockford Ward on Waverley Borough Council. Stepney Green by-election (2015) In a by-election in Stepney Green Ward on Tower Hamlets London Borough Council that was held on 11 June 2015, Something New stood Jessie Macneil-Brown as a candidate.
The election was called because the incumbent, Alibor Choudhury, was found guilty of corrupt and illegal practices and forced to leave his post by an election court. Local elections 2017 See also Open-source governance Direct democracy Consensus democracy Participatory democracy References External links Official website OpenPolitics Manifesto 2014 establishments in the United Kingdom Political parties established in 2014 Defunct political parties in the United Kingdom Direct democracy parties E-democracy Anti-austerity political parties in the United Kingdom
Something New (political party)
Technology
1,510
21,133,787
https://en.wikipedia.org/wiki/Protochlorophyllide
Protochlorophyllide, or monovinyl protochlorophyllide, is an intermediate in the biosynthesis of chlorophyll a. It lacks the phytol side-chain of chlorophyll and the reduced pyrrole in ring D. Protochlorophyllide is highly fluorescent; mutants that accumulate it glow red if irradiated with blue light. In angiosperms, the later steps which convert protochlorophyllide to chlorophyll are light-dependent, and such plants are pale (chlorotic) if grown in darkness. Gymnosperms, algae, and photosynthetic bacteria have another, light-independent enzyme and grow green in darkness as well. Conversion to chlorophyll The enzyme that converts protochlorophyllide to chlorophyllide a, the next intermediate on the biosynthetic pathway, is protochlorophyllide reductase, EC 1.3.1.33. There are two structurally unrelated proteins with this activity: the light-dependent and the dark-operative. The light-dependent reductase needs light to operate. The dark-operative version is a completely different protein, consisting of three subunits that exhibit significant sequence similarity to the three subunits of nitrogenase, which catalyzes the formation of ammonia from dinitrogen. This enzyme might be evolutionarily older, but (being similar to nitrogenase) it is highly sensitive to free oxygen and does not work if the oxygen concentration exceeds about 3%. Hence, the alternative, light-dependent version needed to evolve. Most photosynthetic bacteria have both light-dependent and light-independent reductases. Angiosperms have lost the dark-operative form and rely on three slightly different copies of the light-dependent version, frequently abbreviated as POR A, B, and C. Gymnosperms have many more copies of the similar gene (loblolly pine, Pinus taeda L., contains about 11 expressed genes encoding light-dependent NADPH:protochlorophyllide oxidoreductase (POR)). In plants, POR is encoded in the cell nucleus and only later transported to its place of work, the chloroplast. Unlike POR, the dark-operative enzyme, in the plants and algae that have it, is at least partially encoded in the chloroplast genome. Potential danger for the plant Chlorophyll itself is bound to proteins and can transfer the absorbed energy in the required direction. Protochlorophyllide, however, occurs mostly in the free form and, under light conditions, acts as a photosensitizer, forming highly toxic free radicals. Hence, plants need an efficient mechanism for regulating the amount of the chlorophyll precursor. In angiosperms, this is done at the step of δ-aminolevulinic acid (ALA), one of the intermediate compounds in the biosynthetic pathway. Plants that are fed ALA accumulate high and toxic levels of protochlorophyllide, as do mutants with a disrupted regulatory system. The Arabidopsis FLU mutant with damaged regulation can survive only either in continuous darkness (protochlorophyllide is not dangerous in the dark) or under continuous light, when the plant can convert all the protochlorophyllide it produces into chlorophyll and does not overaccumulate it despite the lack of regulation. In the barley Tigrina mutant (mutated in the same gene), light kills the majority of the leaf tissue that developed in darkness, but the part of the leaf that originated during the day survives. As a result, the leaves are covered with white stripes of necrotic tissue, and the number of white stripes is close to the age of the leaf in days.
Green regions survive the subsequent nights, likely because the synthesis of chlorophyll in the mature leaf tissue is greatly reduced anyway. Biosynthesis regulatory protein FLU In spite of numerous past attempts to find mutants that overaccumulate protochlorophyllide under usual conditions, only one such gene (flu) is currently (as of 2009) known. Flu (first described in ) is a nuclear-encoded, chloroplast-located protein that appears to contain only protein-protein interaction sites. It is currently not known which other proteins interact through this linker. The regulatory protein is a transmembrane protein that is located in the thylakoid membrane. Later, it was discovered that the long-known Tigrina mutants in barley are also mutated in the same gene. It is not obvious why no mutants of any other gene were observed; maybe mutations in other proteins involved in the regulatory chain are fatal. Flu is a single gene, not a member of a gene family. Later, by sequence similarity, a similar protein was found in Chlamydomonas algae, showing that this regulatory subsystem existed long before the angiosperms lost the light-independent conversion enzyme. However, the Chlamydomonas regulatory protein is more complex: it is larger, crosses the thylakoid membrane twice rather than once, contains more protein-protein interaction sites, and even undergoes alternative splicing. It appears that the regulatory system underwent simplification during evolution. References Porphyrins Plant physiology
Protochlorophyllide
Chemistry,Biology
1,118
18,360,901
https://en.wikipedia.org/wiki/GLRA4
The glycine receptor, alpha 4, also known as GLRA4, is a human gene generally regarded as a pseudogene. The protein that this gene would encode is a subunit of the glycine receptor. References External links Ion channels
GLRA4
Chemistry
47
175,622
https://en.wikipedia.org/wiki/Passivation%20%28chemistry%29
In physical chemistry and engineering, passivation is coating a material so that it becomes "passive", that is, less readily affected or corroded by the environment. Passivation involves creation of an outer layer of shield material that is applied as a microcoating, created by chemical reaction with the base material, or allowed to build by spontaneous oxidation in the air. As a technique, passivation is the use of a light coat of a protective material, such as metal oxide, to create a shield against corrosion. Passivation of silicon is used during fabrication of microelectronic devices. Undesired passivation of electrodes, called "fouling", increases the circuit resistance so it interferes with some electrochemical applications such as electrocoagulation for wastewater treatment, amperometric chemical sensing, and electrochemical synthesis. When exposed to air, many metals naturally form a hard, relatively inert surface layer, usually an oxide (termed the "native oxide layer") or a nitride, that serves as a passivation layer - i.e. these metals are "self-protecting". In the case of silver, the dark tarnish is a passivation layer of silver sulfide formed from reaction with environmental hydrogen sulfide. Aluminium similarly forms a stable protective oxide layer which is why it does not "rust". (In contrast, some base metals, notably iron, oxidize readily to form a rough, porous coating of rust that adheres loosely, is of higher volume than the original displaced metal, and sloughs off readily; all of which permit & promote further oxidation.) The passivation layer of oxide markedly slows further oxidation and corrosion in room-temperature air for aluminium, beryllium, chromium, zinc, titanium, and silicon (a metalloid). The inert surface layer formed by reaction with air has a thickness of about 1.5 nm for silicon, 1–10 nm for beryllium, and 1 nm initially for titanium, growing to 25 nm after several years. Similarly, for aluminium, it grows to about 5 nm after several years. In the context of the semiconductor device fabrication, such as silicon MOSFET transistors and solar cells, surface passivation refers not only to reducing the chemical reactivity of the surface but also to eliminating the dangling bonds and other defects that form electronic surface states, which impair performance of the devices. Surface passivation of silicon usually consists of high-temperature thermal oxidation. Mechanisms There has been much interest in determining the mechanisms that govern the increase of thickness of the oxide layer over time. Some of the important factors are the volume of oxide relative to the volume of the parent metal, the mechanism of oxygen diffusion through the metal oxide to the parent metal, and the relative chemical potential of the oxide. Boundaries between micro grains, if the oxide layer is crystalline, form an important pathway for oxygen to reach the unoxidized metal below. For this reason, vitreous oxide coatings – which lack grain boundaries – can retard oxidation. The conditions necessary, but not sufficient, for passivation are recorded in Pourbaix diagrams. Some corrosion inhibitors help the formation of a passivation layer on the surface of the metals to which they are applied. Some compounds, dissolved in solutions (chromates, molybdates) form non-reactive and low solubility films on metal surfaces. 
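For the well-characterized case of thermal oxidation of silicon, the growth of oxide thickness over time discussed above is commonly described by the Deal–Grove model (linked in the See also section at the end of this article). Under that model, the oxide thickness x reached after oxidation time t satisfies

x^2 + A x = B (t + \tau)

where A and B are constants depending on temperature and oxidizing ambient, and \tau accounts for any initial oxide layer. For thin oxides the growth is approximately linear, x \approx (B/A)(t + \tau), while for thick oxides it becomes diffusion-limited and parabolic, x \approx \sqrt{B t}. This slowdown with increasing thickness is one reason native oxide layers self-limit at the nanometer scales quoted above.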
It has been shown using electrochemical scanning tunneling microscopy that during iron passivation, an n-type semiconductor Fe(III) oxide grows at the interface with the metal, leading to the buildup of an electronic barrier opposing electron flow and of an electronic depletion region that prevents further oxidation reactions. These results indicate a mechanism of "electronic passivation". The electronic properties of this semiconducting oxide film also provide a mechanistic explanation of corrosion mediated by chloride, which creates surface states at the oxide surface that lead to electronic breakthrough, restoration of anodic currents, and disruption of the electronic passivation mechanism ("transpassivation"). History Discovery and etymology The fact that iron does not react with concentrated nitric acid was discovered by Mikhail Lomonosov in 1738 and rediscovered by James Keir in 1790, who also noted that iron pre-immersed in this way no longer reduces silver from nitrate. In the 1830s, Michael Faraday and Christian Friedrich Schönbein studied the issue systematically and demonstrated that when a piece of iron is placed in dilute nitric acid, it will dissolve and produce hydrogen, but if the iron is placed in concentrated nitric acid and then returned to the dilute nitric acid, little or no reaction will take place. In 1836, Schönbein named the first state the active condition and the second the passive condition, while Faraday proposed the modern explanation of the oxide film described above (Schönbein disagreed with it), which was experimentally proven by Ulick Richardson Evans only in 1927. Between 1955 and 1957, Carl Frosch and Lincoln Derick discovered surface passivation of silicon wafers by silicon dioxide, using passivation to build the first silicon dioxide field effect transistors. Specific materials Aluminium Aluminium naturally forms a thin surface layer of aluminium oxide on contact with oxygen in the atmosphere through a process called oxidation, which creates a physical barrier to corrosion or further oxidation in many environments. Some aluminium alloys, however, do not form the oxide layer well, and thus are not protected against corrosion. There are methods to enhance the formation of the oxide layer for certain alloys. For example, prior to storing hydrogen peroxide in an aluminium container, the container can be passivated by rinsing it with a dilute solution of nitric acid and peroxide alternating with deionized water. The nitric acid and peroxide mixture oxidizes and dissolves any impurities on the inner surface of the container, and the deionized water rinses away the acid and oxidized impurities. Generally, there are two main ways to passivate aluminium alloys (not counting plating, painting, and other barrier coatings): chromate conversion coating and anodizing. Alclading, which metallurgically bonds thin layers of pure aluminium or alloy to a different base aluminium alloy, is not strictly passivation of the base alloy. However, the clad-on aluminium layer is designed to spontaneously develop the oxide layer and thus protect the base alloy. Chromate conversion coating converts the surface aluminium to an aluminium chromate coating in the range of in thickness. Aluminium chromate conversion coatings are amorphous in structure with a gel-like composition hydrated with water. Chromate conversion is a common way of passivating not only aluminium, but also zinc, cadmium, copper, silver, magnesium, and tin alloys.
Anodizing is an electrolytic process that forms a thicker oxide layer. The anodic coating consists of hydrated aluminium oxide and is considered resistant to corrosion and abrasion. This finish is more robust than the other processes and also provides electrical insulation, which the other two processes may not. Carbon In carbon quantum dot (CQD) technology, CQDs are small carbon nanoparticles (less than 10 nm in size) with some form of surface passivation. Ferrous materials Ferrous materials, including steel, may be somewhat protected by promoting oxidation ("rust") and then converting the oxide to a metallophosphate by using phosphoric acid, and adding further protection by surface coating. As the uncoated surface is water-soluble, a preferred method is to form manganese or zinc compounds by a process commonly known as parkerizing or phosphate conversion. Older, less effective but chemically similar electrochemical conversion coatings included black oxidizing, historically known as bluing or browning. Ordinary steel forms a passivating layer in alkali environments, as reinforcing bar does in concrete. Stainless steel Stainless steels are corrosion-resistant, but they are not completely impervious to rusting. One common mode of corrosion in corrosion-resistant steels is when small spots on the surface begin to rust because grain boundaries or embedded bits of foreign matter (such as grinding swarf) allow water molecules to oxidize some of the iron in those spots despite the alloying chromium. This is called rouging. Some grades of stainless steel are especially resistant to rouging; parts made from them may therefore forgo any passivation step, depending on engineering decisions. Common among all of the different specifications and types are the following steps: Prior to passivation, the object must be cleaned of any contaminants and generally must undergo a validating test to prove that the surface is 'clean.' The object is then placed in an acidic passivating bath that meets the temperature and chemical requirements of the method and type specified between customer and vendor. While nitric acid is commonly used as a passivating acid for stainless steel, citric acid is gaining in popularity as it is far less dangerous to handle, less toxic, and biodegradable, making disposal less of a challenge. Passivating temperatures can range from ambient to , while minimum passivation times are usually 20 to 30 minutes. After passivation, the parts are neutralized using a bath of aqueous sodium hydroxide, then rinsed with clean water and dried. The passive surface is validated using humidity, elevated temperature, a rusting agent (salt spray), or some combination of the three. The passivation process removes exogenous iron, creates/restores a passive oxide layer that prevents further oxidation (rust), and cleans the parts of dirt, scale, or other welding-generated compounds (e.g. oxides). Passivation processes are generally controlled by industry standards, the most prevalent among them today being ASTM A 967 and AMS 2700. These industry standards generally list several passivation processes that can be used, with the choice of specific method left to the customer and vendor. The "method" is either a nitric acid-based passivating bath or a citric acid-based bath; these acids remove surface iron and rust while sparing the chromium. The various 'types' listed under each method refer to differences in acid bath temperature and concentration.
Sodium dichromate is often required as an additive to oxidise the chromium in certain 'types' of nitric-based acid baths; however, this chemical is highly toxic. With citric acid, simply rinsing and drying the part and allowing the air to oxidise it, or in some cases the application of other chemicals, is used to perform the passivation of the surface. It is not uncommon for some aerospace manufacturers to have additional guidelines and regulations when passivating their products that exceed the national standard. Often, these requirements will be cascaded down using Nadcap or some other accreditation system. Various testing methods are available to determine the passivation (or passive state) of stainless steel. The most common method for validating the passivity of a part is some combination of high humidity and heat for a period of time, intended to induce rusting. Electrochemical testers can also be utilized to commercially verify passivation. Titanium The surface of titanium and of titanium-rich alloys oxidizes immediately upon exposure to air to form a thin passivation layer of titanium oxide, mostly titanium dioxide. This layer makes it resistant to further corrosion, aside from gradual growth of the oxide layer, which thickens to about 25 nm after several years in air. This protective layer makes it suitable for use even in corrosive environments such as sea water. Titanium can be anodized to produce a thicker passivation layer. As with many other metals, this layer causes thin-film interference, which makes the metal surface appear colored, with the thickness of the passivation layer directly affecting the color produced. Nickel Nickel can be used for handling elemental fluorine, owing to the formation of a passivation layer of nickel fluoride. This fact is useful in water treatment and sewage treatment applications. Silicon In the area of microelectronics and photovoltaic solar cells, surface passivation is usually implemented by thermal oxidation at about 1000 °C to form a coating of silicon dioxide. Surface passivation is critical to solar cell efficiency; its effect on the efficiency of solar cells ranges from 3% to 7%. The surface resistivity is high, > 100 Ωcm. Perovskite The easiest and most widely studied method of improving perovskite solar cells is passivation. Dangling bonds on the surface of perovskite films usually give rise to deep energy-level defects in solar cells. Usually, small molecules or polymers are doped in to interact with the dangling bonds and thus reduce the defect states. The process is loosely analogous to Tetris: the aim is a complete layer, and a passivating small molecule acts as a piece that can be inserted wherever there is an empty space, completing the layer. These molecules generally have lone electron pairs or pi-electrons, so they can bind to the defective states on the surface of the cell film and thus achieve passivation of the material. Molecules containing carbonyl groups, nitrogen, or sulfur are therefore considered, and recently it has been shown that π electrons can also play a role. In addition, passivation not only improves the photoelectric conversion efficiency of perovskite cells but also contributes to improved device stability. For example, adding a passivation layer a few nanometers thick can effectively block the intrusion of water vapor.
See also Cold welding Deal–Grove model Pilling–Bedworth ratio References Further reading Chromate conversion coating (chemical film) per MIL-DTL-5541F for aluminium and aluminium alloy parts A standard overview on black oxide coatings is provided in MIL-HDBK-205, Phosphate & Black Oxide Coating of Ferrous Metals. Many of the specifics of Black Oxide coatings may be found in MIL-DTL-13924 (formerly MIL-C-13924). This Mil-Spec document additionally identifies various classes of Black Oxide coatings, for use in a variety of purposes for protecting ferrous metals against rust. Passivisation : Debate over Paintability http://www.coilworld.com/5-6_12/rlw3.htm Corrosion prevention Surface finishing German inventions Integrated circuits MOSFETs Semiconductor device fabrication Swiss inventions
Passivation (chemistry)
Chemistry,Materials_science,Technology,Engineering
2,992
19,690,559
https://en.wikipedia.org/wiki/Angus%20Dalgleish
Angus George Dalgleish (born May 1950) is a professor of oncology at St George's, University of London, best known for his contributions to HIV/AIDS research. Dalgleish stood for Parliament in 2015 as a UKIP candidate. Education Angus George Dalgleish was born in May 1950 in Harrow, London. Initially educated at the Harrow County School for Boys, Dalgleish received a Bachelor of Medicine, Bachelor of Surgery degree from University College London with an intercalated bachelor's degree in Anatomy. Career as medical researcher After various positions in the United Kingdom, Dalgleish joined the Royal Flying Doctor Service in Mount Isa, Queensland, then progressed through positions at various hospitals in Brisbane, Australia, before moving to the Ludwig Institute for Cancer Research in Sydney. After completing his training, Dalgleish returned to work in the UK in 1984 at the Institute of Cancer Research. He is a co-discoverer of the CD4 receptor as the major cellular receptor for HIV. In 1986, he was appointed to a consulting position at Northwick Park Hospital; in 1991 he was made Foundation Professor of Oncology at St George's, University of London; and in 1994 he was appointed Visiting Professor at the Institute of Cancer Research in London. In 1997, he founded Onyvax Ltd., a privately funded biotechnology company developing cancer vaccines, where he held the position of Research Director; it was dissolved in 2013. Dalgleish is a member of the medical board of Bionor Pharma. Dalgleish is on the scientific advisory board of Immodulon, and has stock options in Immunor AS, a disclosure he made in order to have his research work published. During the COVID-19 pandemic, Dalgleish was a proponent of the lab leak theory. While still not generally accepted, this remains a live debate, and it has been claimed that the support of Jay Bhattacharya and John Ratcliffe for the lab leak theory will bring "explosive documents" to light in 2025. 2015 candidacy for Parliament Dalgleish was a member of the UK Independence Party and stood as a candidate in Sutton & Cheam during the 2015 United Kingdom general election, finishing fourth with 10.7% of the vote. Dalgleish campaigned for Leave.EU and appeared on the BBC Radio 4 Today programme presenting the case for Brexit. He was an advocate of Leave Means Leave, a Eurosceptic group. Awards and honours Dalgleish was elected a Fellow of the Academy of Medical Sciences in 2001 and is also a Fellow of the Royal College of Physicians, the Royal College of Pathologists, and the Royal Australasian College of Physicians. His citation on election to FMedSci reads: Covid controversies In October 2023, following a joint investigation analysing emails leaked in 2022 by a Russian hacking group working for the Russian FSB, an article was published by Computer Weekly and Byline Times containing several controversial claims about Angus Dalgleish: That Dalgleish was a member of a secret group led by Richard Dearlove (former head of MI6), Gwythian Prins (an academic historian), and John Constable (of the Global Warming Policy Foundation), who called themselves the "Covid Hunters". That in March 2020 the group prepared an 'Urgent Briefing for the Prime Minister and his Advisers' which advised that COVID-19 originated in the Wuhan Institute of Virology (see COVID-19 lab leak). That the group had briefed Boris Johnson that the man-made nature of the virus meant that the best candidate for vaccine development was the Norwegian Biovacc-19.
Also that Dalgleish had been given stock options in the company Immunor, which held the patents for this vaccine, due to his significant involvement in the research behind its development. That when the scientific journal Nature Medicine published an article contradicting them on the origin of COVID-19, the group considered this to be COVID-19 misinformation by China. That following these suspicions the group had advised Michael Gove to secretly start electronic surveillance on the journal using MI5 resources, to uncover them as part of a "China Persons of Influence Network" of senior officials, politicians and academics allegedly under the influence of the communist state. (For examples, see Chinese information operations and information warfare and Chinese espionage in the United States.) That the group had then contacted a range of other Western intelligence agencies to brief them on the supposed Chinese activity in a briefing titled "The Three Interlocking Arms of the Intelligence Case against PRC", which claimed China was "attempting to control the terms of the origin of COVID-19 debate with active help from non-Chinese agents of influence, notably at the scientific journal Nature." That the group had worked together previously to replace Theresa May with Boris Johnson and had previously attempted to replace the National Security Council. In November 2024, Dalgleish was interviewed in Australia on 2GB and repeated his views on the COVID-19 pandemic. He believed the lockdowns and mask mandates in many countries had been "total madness" and that the "vaccines" were wrongly named and had been "largely ineffective at saving lives" while causing many adverse reactions. Australia's pandemic response had been "absolutely appalling". Only Sweden had got it right, with no lockdown mandates, and with vaccines only for people over 70. He said the result had been "the lowest excess death rate in the entire western world." Publications and contributions According to Semantic Scholar, Dalgleish has 495 publications, 21,234 citations, and 541 "highly influential citations". Bibliography References External links Explosive study claims to prove Chinese scientists created COVID British pathologists Vaccinologists British cancer researchers 1950 births Living people Alumni of University College London People from Harrow, London People educated at Harrow High School UK Independence Party parliamentary candidates COVID-19 pandemic in the United Kingdom English male non-fiction writers 21st-century English male writers English non-fiction outdoors writers
Angus Dalgleish
Biology
1,241
11,147,109
https://en.wikipedia.org/wiki/Kempner%20function
In number theory, the Kempner function S(n) is defined for a given positive integer n to be the smallest number s such that n divides the factorial s!. For example, the number 8 does not divide 1!, 2!, or 3!, but does divide 4!, so S(8) = 4. This function has the property that it has a highly inconsistent growth rate: it grows linearly on the prime numbers but only grows sublogarithmically at the factorial numbers. History This function was first considered by François Édouard Anatole Lucas in 1883, followed by Joseph Jean Baptiste Neuberg in 1887. In 1918, A. J. Kempner gave the first correct algorithm for computing S(n). The Kempner function is also sometimes called the Smarandache function following Florentin Smarandache's rediscovery of the function. Properties Since n divides n!, S(n) is always at most n. A number n greater than 4 is a prime number if and only if S(n) = n. That is, the numbers n for which S(n) is as large as possible relative to n are the primes. In the other direction, the numbers for which S(n) is as small as possible are the factorials: S(k!) = k. S(n) is the smallest possible degree of a monic polynomial with integer coefficients whose values over the integers are all divisible by n. For instance, the fact that S(6) = 3 means that there is a cubic polynomial whose values are all zero modulo 6, for instance the polynomial x(x - 1)(x - 2) = x^3 - 3x^2 + 2x, but that all quadratic or linear polynomials (with leading coefficient one) are nonzero modulo 6 at some integers. In one of the advanced problems in The American Mathematical Monthly, set in 1991 and solved in 1994, Paul Erdős pointed out that the function S(n) coincides with the largest prime factor of n for "almost all" n (in the sense that the asymptotic density of the set of exceptions is zero). Computational complexity The Kempner function S(n) of an arbitrary number n is the maximum, over the prime powers p^e dividing n, of S(p^e). When n is itself a prime power p^e, its Kempner function may be found in polynomial time by sequentially scanning the multiples of p until finding the first one whose factorial contains enough multiples of p. The same algorithm can be extended to any n whose prime factorization is already known, by applying it separately to each prime power in the factorization and choosing the one that leads to the largest value. For a number of the form n = px, where p is prime and x is less than p, the Kempner function of n is p. It follows from this that computing the Kempner function of a semiprime (a product of two primes) is computationally equivalent to finding its prime factorization, believed to be a difficult problem. More generally, whenever n is a composite number, the greatest common divisor of S(n) and n will necessarily be a nontrivial divisor of n, allowing n to be factored by repeated evaluations of the Kempner function. Therefore, computing the Kempner function can in general be no easier than factoring composite numbers. References and notes Factorial and binomial topics
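The prime-power scan and the maximum over the factorization described above translate directly into code. Below is a minimal Python sketch, not taken from the article; it assumes the factorization of n is already known (as the text notes, finding it is the hard part), and the function names are illustrative:

```python
def multiplicity(s: int, p: int) -> int:
    """Exponent of the prime p in the factorial s! (Legendre's formula)."""
    count = 0
    while s:
        s //= p
        count += s
    return count

def kempner_prime_power(p: int, e: int) -> int:
    """Smallest s such that p**e divides s!, found by scanning multiples of p."""
    s = 0
    while multiplicity(s, p) < e:
        s += p
    return s

def kempner(factorization: dict[int, int]) -> int:
    """S(n) for n given as {prime: exponent}: the maximum over its prime powers."""
    return max(kempner_prime_power(p, e) for p, e in factorization.items())

assert kempner({2: 3}) == 4        # S(8) = 4: 8 divides 4! but not 3!
assert kempner({2: 1, 3: 1}) == 3  # S(6) = 3, matching the polynomial example
assert kempner({7: 1}) == 7        # S(p) = p for any prime p
```

Each step of the scan adds at least one factor of p to s!, so the loop in kempner_prime_power terminates after at most e multiples of p, in line with the polynomial-time claim above.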
Kempner function
Mathematics
572
46,332,476
https://en.wikipedia.org/wiki/Penicillium%20kananaskense
Penicillium kananaskense is an anamorph species of the genus Penicillium, isolated from forest soil in Alberta, Canada. References kananaskense Fungi described in 1994 Fungus species
Penicillium kananaskense
Biology
49
3,509,794
https://en.wikipedia.org/wiki/Tunnel%20and%20Reservoir%20Plan
The Tunnel and Reservoir Plan (abbreviated TARP and more commonly known as the Deep Tunnel Project or the Chicago Deep Tunnel) is a large civil engineering project that aims to reduce flooding in the metropolitan Chicago area, and to reduce the harmful effects of flushing raw sewage into Lake Michigan by diverting storm water and sewage into temporary holding reservoirs. The megaproject is one of the largest civil engineering projects ever undertaken in terms of scope, cost and timeframe. Commissioned in the mid-1970s, the project is managed by the Metropolitan Water Reclamation District of Greater Chicago. Completion of the system is not anticipated until 2029, but substantial portions of the system have already opened and are currently operational. Across 30 years of construction, over $3 billion has been spent on the project. History 19th century The Deep Tunnel Project is the latest in a series of civil engineering projects dating back to 1834. Many of the problems experienced by the city of Chicago are directly related to its low level topography and the fact that the city is largely built upon marsh or wet prairie. This combined with a temperate wet climate and the human development of open land, leads to substantial water runoff. Lake Michigan was ineffective in carrying sewage away from the city, and in the event of a rainstorm, the water pumps that provided drinking water to Chicagoans became contaminated with sewage. Though no epidemics were caused by this system (see Chicago 1885 cholera epidemic myth), it soon became clear that the sewage system needed to be diverted to flow away from Lake Michigan in order to handle an increasing population's sanitation needs. Between 1864 and 1867, under the leadership of Ellis S. Chesbrough, the city built the two-mile Chicago lake tunnel to a new water intake location farther from the shore. Crews began from the intake location and the shore, tunneling in two shifts a day. Clay and earth were drawn away by mule-drawn railcars. Masons lined the five-foot-diameter tunnel with two layers of brick. The lake and shore crews met in November 1866, less than seven inches out of alignment. A second tunnel was added in 1874. In 1871, the deepening of the Illinois and Michigan Canal was completed to reverse the flow of the Chicago River to drain diluted sewage southwest away from Lake Michigan. However, the canal only had the capacity to drain to the Des Plaines River during dry weather; during heavy rains, the Des Plaines would flood and overflow into the canal, reversing its flow back into the lake. In 1900, to improve general health standards, the flow of the main branch of the Chicago River was permanently reversed with the construction of the Chicago Sanitary and Ship Canal. This further improved the sanitation of Lake Michigan and helped to prevent further waterborne epidemic scares. 20th century The construction of the Sanitary and Ship Canal (1892–1900), enlargements to the North Shore Channel (1907–1910), the construction of the Cal-Sag Channel (1911–1922), and the construction of locks at the mouth of the Chicago River (1933–1938) brought further improvements to the sanitary issues of the time. These projects blocked further amounts of sewage from draining into Lake Michigan. The projects also brought fresh lake water to inland waterways to further dilute sewage that was already in the waterways. Surrounding farmland also engaged in flood control projects. The Illinois Farm Drainage Act of 1879 established drainage districts. 
These districts were generally named for the basin they drained—for example, the Fox River Drainage District. After World War II, suburban communities began to realize the benefits of separating stormwater from sewage water and began to construct separate sewer and storm drainage lines. The primary benefit of wastestream separation is that storm water requires less treatment than sewage before being returned to the environment. Flood damage grew markedly after 1938, when surrounding natural drainage areas were lost to development and human activity. Serious flooding has occurred in the Chicago metropolitan area in 1849, 1855, 1885, 1938, 1952, 1954, 1957, 1961, 1973, 1979, 1986, 1987, 1996, 2007, 2008, 2010, 2011, 2022, 2023 — but most record-setting crests occurred after 1948. In the 1960s, the concept of Deep Tunnel was studied and recommended as a solution to continuing flooding issues. Status Phase 1, the creation of of drainage tunnels ranging from in diameter, up to underground, was adopted in 1972, commenced in 1975, and completed and operational by 2006. Phase 2, creation of reservoirs primarily intended for flood control, remains underway with an expected completion date of 2029. Currently, up to of sewage can be stored and held in the tunnels themselves while awaiting processing at sewage treatment plants, which release treated water into the Calumet and Des Plaines rivers. Additional sewage is stored at the Thornton Composite Reservoir, and the Gloria Alitto Majewski Reservoir near O'Hare International Airport. The McCook Reservoir was completed in 2017 and will be expanded to by 2029. Because the reservoirs are decommissioned quarries, construction has been delayed by decreased demand for the quarried gravel. Upon completion, the TARP system will have a storage capacity of . Reservoirs Effects Severe weather events have forced water management agencies to pump excess wastewater into the lake and river in order to prevent flooding. These incidents have decreased in frequency as more of the Deep Tunnel system has become operational. Long considered an open sewer, the Chicago River now hosts more than 60 fish species and increased wildlife along its shores. Substantial development is occurring along many portions of the riverfront. Canoeing is once again allowed on the waterway, but swimming is still prohibited due to high pollution levels. On October 3, 1986, a heavy thunderstorm drenched the southern portion of the Deep Tunnel area with several inches of rain in a short period of time. While the Deep Tunnel system performed satisfactorily by absorbing excess water, water within the system itself rushed past the north side of Chicago and near the Bahá'í Temple in Wilmette. Geysers of over were reported in both locations for up to an hour as the water was redistributed more evenly through the system. A geyser erupted downtown at the corner of Jefferson and Monroe, trapping a woman inside her car as it filled with water. A system of watertight bulkheads has since been installed to prevent the event from occurring again. During the Chicago Flood of 1992, the water from the Chicago River that leaked into the long-disused underground freight tunnel system was eventually drained into the Deep Tunnel network, which itself was still under construction. Since the tunnels became operational, combined sewer overflows have been reduced from an average of 100 days per year to 50. Since Thornton Reservoir came online in 2015 combined sewer overflows have been nearly eliminated. 
Sources References External links Set of photos of the Morrison Knudsen segment of the tunnel Metropolitan Water Reclamation District of Greater Chicago Video of TARP on YouTube Video of TARP on YouTube Article and map in the Encyclopedia of Chicago Government of Chicago Buildings and structures in Chicago Flood control projects Flood control in the United States Waste processing sites Planned developments Engineering projects Proposed infrastructure in the United States Tunnels in Illinois Proposed tunnels in the United States Metropolitan Water Reclamation District of Greater Chicago
Tunnel and Reservoir Plan
Engineering
1,451
21,469,584
https://en.wikipedia.org/wiki/Internet%20pornography
Internet pornography or online pornography is any pornography that is accessible over the Internet; primarily via websites, FTP connections, peer-to-peer file sharing, or Usenet newsgroups. The greater accessibility of the World Wide Web from the late 1990s led to an incremental growth of Internet pornography, the use of which among adolescents and adults has since become increasingly popular. Danni's Hard Drive, started in 1995 by Danni Ashe, is considered one of the earliest online pornographic websites. In 2012, estimates of the total number of pornographic websites stood at nearly 25 million, comprising about 12% of all websites. In 2022, the total amount of pornographic content accessible online was estimated to be over 10,000 terabytes. The four most accessed pornographic websites are Pornhub, XVideos, xHamster, and XNXX. , a single company, Aylo, owns and operates most of the popular online streaming pornographic websites, including Pornhub, RedTube, and YouPorn, as well as pornographic film studios such as Brazzers, Digital Playground, Men.com, Reality Kings, and Sean Cody, among others, but it does not own websites like XVideos, xHamster, and XNXX. The company has been alleged to be a monopoly. Introduction Starting in the 1990s, the Internet played a major part in enhancing people's access to pornography. Usenet newsgroups provided the base for what has been called the "amateur revolution", in which amateur pornographers of the late 1980s and early 1990s, with the help of digital cameras and the Internet, created and distributed their own pornographic content independent of the mainstream networks. The use of the World Wide Web became popular with the introduction of Netscape Navigator in 1994. This development paved the way for newer methods of distribution and consumption of pornography. The Internet as a medium for accessing pornography became so popular that in 1995 Time published a cover story titled "Cyberporn". Danni's Hard Drive, started in 1995 by Danni Ashe, is considered one of the earliest online pornographic websites; coded by Ashe, a former stripper and nude model, the website was reported by CNN in 2000 to have made revenues of $6.5 million. In 2012, the total number of pornographic websites was estimated to be around 25 million, comprising 12% of all websites. In 2022, the amount of pornographic content accessible online is estimated at over 10,000 terabytes. XVideos and Pornhub are the two most accessed pornographic websites in the US. In 2024, according to the DSA regulation, 59 out of every 100 Spaniards visited one of the three biggest websites each month. Before its shutdown in 2025, ThisAV was a popular pornographic website in Hong Kong. History and methods of distribution Before the World Wide Web Pornography is regarded by some as one of the driving forces behind the expansion of the World Wide Web, like camcorders, VCRs and cable television before it. Pornographic images had been transmitted over the Internet as ASCII porn, but sending images over the network required computers with graphics capability and also higher network bandwidth. This became possible in the late 1980s and early 1990s through the use of anonymous FTP servers and through the Gopher protocol. By this time, the Internet had already been in use since the late 1970s. One of the early Gopher/FTP sites was at TU Delft and was called the Digital Archive on the 17th Floor.
This small image archive contained some low-quality scanned pornographic images that were initially available to anyone anonymously, but the site soon became restricted to Netherlands-only access. Pornographic videos started appearing on FTP and Gopher servers as well. Usenet groups Usenet newsgroups provided an early way of sharing images over the narrow bandwidth available in the early 1990s. Because of the network restrictions of the time, images had to be encoded as ASCII text and then broken into sections before being posted to the alt.binaries hierarchy of Usenet. These files could then be downloaded, reassembled, and decoded back into an image (a code sketch of this workflow appears below). Automated software such as Aub (Assemble Usenet Binaries) allowed the automatic download and assembly of the images from a newsgroup. There was rapid growth in the number of posts in the early 1990s, but image quality was restricted by the size of files that could be posted. The method was also used to disseminate pornographic images, which were usually scanned from adult magazines. This type of distribution was generally free (apart from fees for Internet access), and provided a great deal of anonymity. The anonymity made it safe and easy to ignore copyright restrictions, as well as protecting the identity of uploaders and downloaders. Around this time frame, pornography was also distributed via pornographic bulletin board systems such as Rusty n Edie's. These BBSes could charge users for access, leading to the first commercial online pornography. A 1995 article in The Georgetown Law Journal titled "Marketing Pornography on the Information Superhighway: A Survey of 917,410 Images, Description, Short Stories and Animations Downloaded 8.5 Million Times by Consumers in Over 2000 Cities in Forty Countries, Provinces and Territories" by Martin Rimm, a Carnegie Mellon University graduate student, claimed that (as of 1994) 83.5% of the images in the Usenet newsgroups where images were stored were pornographic in nature. Before publication, Philip Elmer-DeWitt used the research in a Time magazine article, "On a Screen Near You: Cyberporn." The findings were attacked by journalists and civil liberties advocates who insisted the findings were seriously flawed. "Rimm's implication that he might be able to determine 'the percentage of all images available on the Usenet that are pornographic on any given day' was sheer fantasy," wrote Mike Godwin in HotWired. The research was cited during a session of the U.S. Congress. The student changed his name and disappeared from public view. Godwin recounts the episode in "Fighting a Cyberporn Panic" in his book Cyber Rights: Defending Free Speech in the Digital Age. The invention of the World Wide Web spurred both commercial and non-commercial distribution of pornography. The rise of pornography websites offering photos, video clips and streaming media, including live webcam access, allowed greater access to pornography. Free vs. commercial On the Web, there are both commercial and free pornography sites. The bandwidth usage of a pornography site is relatively high, and the income a free site can earn through advertising may not be sufficient to cover the costs of that bandwidth. One recent entry into the free pornography website market is the thumbnail gallery post site.
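The encode-split-reassemble workflow described above can be sketched in a few lines of modern code. The following Python example is illustrative rather than historical, using the standard binascii module's uuencoding routines; the payload, the filename in the header, and the part size are all assumptions made for the demonstration:

```python
import binascii

def uuencode(data: bytes) -> str:
    # Classic uuencode: 45-byte chunks wrapped in a begin/end envelope.
    # "image.jpg" is a placeholder filename for the demonstration.
    lines = ["begin 644 image.jpg"]
    for i in range(0, len(data), 45):
        lines.append(binascii.b2a_uu(data[i:i + 45]).decode("ascii").rstrip("\n"))
    lines += ["`", "end"]
    return "\n".join(lines)

def split_for_posting(text: str, lines_per_part: int = 3) -> list[str]:
    # Break the ASCII text into multi-part posts (the part size is arbitrary here).
    rows = text.split("\n")
    return ["\n".join(rows[i:i + lines_per_part])
            for i in range(0, len(rows), lines_per_part)]

payload = bytes(range(200))                   # stand-in for a binary image
parts = split_for_posting(uuencode(payload))  # what would be posted as separate parts
reassembled = "\n".join(parts)                # the downloader rejoins the parts in order

body = reassembled.split("\n")[1:-2]          # strip the "begin" header and "`"/"end" footer
decoded = b"".join(binascii.a2b_uu(line) for line in body)
assert decoded == payload                     # round trip back to the original bytes
```

Tools like Aub automated exactly this download-reassemble-decode step for whole newsgroups.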
Thumbnail gallery post sites are free websites that post links to commercial sites, providing a sampling of the commercial site in the form of thumbnail images, or in the form of Free Hosted Galleries—samplings of full-sized content provided and hosted by the commercial sites to promote their site. Some free websites primarily serve as portals by keeping up-to-date indexes of these smaller sampler sites. These attempts to create directories of adult content and websites were followed by the creation of adult wikis, where users can contribute their knowledge and recommend quality resources and links. When a user purchases a subscription to a commercial site after clicking through from a free thumbnail gallery site, the commercial site makes a payment to the owner of the free site. There are several forms of sites delivering adult content. TGP The most common form of adult content is a categorized list (more often a table) of small pictures (called "thumbnails") linked to galleries. These sites are called thumbnail gallery posts (TGPs). As a rule, these sites sort thumbs by category and by the type of content available on a linked gallery. Sites containing thumbs that lead to galleries with video content are called MGPs (movie gallery posts). The main benefit of a TGP/MGP is that the surfer can get a first impression of the content provided by a gallery without actually visiting it. However, TGP sites are open to abuse; the most abusive form is the so-called CJ (abbreviation of circlejerk), which contains links that mislead the surfer to sites he or she did not actually wish to see. This is also called a redirect. Linklists Linklists, unlike TGP/MGP sites, do not display a huge number of pictures. A linklist is a (frequently categorised) web list of links to so-called "freesites", but unlike TGPs, the links are provided in the form of text, not thumbs. It is still an open question which form is more descriptive to a surfer, but many webmasters cite a trend that thumbs are much more productive and simplify searching. On the other hand, linklists have a larger amount of unique text, which helps them improve their positions in search engine listings. TopLists are linklists whose internal ranking of freesites is based on incoming traffic from those freesites, except that freesites designed for TopLists have many more galleries. Peer-to-peer Peer-to-peer file sharing networks provide another form of free access to pornography. While such networks have been associated largely with the illegal sharing of copyrighted music and movies, the sharing of pornography has also been a popular use for file sharing. Many commercial sites have recognized this trend and have begun distributing free samples of their content on peer-to-peer networks. Viewership As of 2011, the majority of viewers of online pornography were men; women tended to prefer romance novels and erotic fan fiction. Women comprised about one quarter to one third of visitors to popular pornography websites, but were only 2% of subscribers to pay sites. Subscribers with female names were flagged as signs of potential credit card fraud, because "so many of these charges result in an angry wife or mother demanding a refund for the misuse of her card." Nonetheless, women spend more time on average on pornography websites, particularly Pornhub, than men and were more interested in pornography upon marriage. The anti-porn research groups Barna Group and Covenant Eyes reported in 2020 that "33% of women aged 25 and under search for porn at least once per month".
A 2015 study found "a big jump" in pornography viewing over the past few decades, with the largest increase driven by people born in the 1970s and 1980s. While the study's authors noted this increase is "smaller than conventional wisdom might predict," it is still quite significant. Those born from the 1980s onward were the first to grow up in a world where they had access to the Internet from their teenage years; this early exposure and the accessibility of Internet pornography may have been the primary driver of the increase. States that are highly religious and conservative were found to search for more Internet pornography. Internet pornography formats Image files Pornographic images may be scanned into the computer from photographs or magazines, produced with a digital camera, or taken from a frame of a video, before being uploaded onto a pornographic website. The JPEG format is one of the most common formats for these images. Another format is GIF, which may provide an animated image in which the people in the picture move. It can last from only a second or two up to a few minutes and then reruns (repeats) indefinitely. If the position of the objects in the last frame is about the same as in the first frame, there is the illusion of continuous action. Video files and streaming video Pornographic video clips may be distributed in a number of formats, including MPEG, WMV, and QuickTime. More recently, VCD and DVD image files have allowed the distribution of whole VCDs and DVDs. Many commercial porn sites exist that allow one to view pornographic streaming video. As of 2020, some Internet pornography sites have begun offering 5K resolution content, while 1080p and 4K resolution are still more common. Since mid-2006, advertising-supported free pornographic video sharing websites based on the YouTube format have appeared. Referred to as Porn 2.0, these sites generally use Flash technology to distribute videos that were uploaded by users; these include user-generated content as well as scenes from commercial porn movies and advertising clips from pornographic websites. Webcams Another format of adult content that emerged with the advent of the Internet is the live webcam. Webcam content can generally be divided into two categories: group shows offered to members of an adult paysite, and one-on-one private sessions usually sold on a pay-per-view basis. Server-based webcam sex shows spur unique international economics: adult models in various countries perform live webcam shows and chat for clients in affluent countries. This kind of activity is sometimes mediated by companies that will set up websites and manage finances. They may maintain "office" space for the models to perform from, or they may provide the interface for models to work at home, with their own computer and webcam. As of 2020, most so-called cam hosts stream directly from their homes, thanks to fast Internet lines and cheap HD webcams available at low cost. The models are paid via tips or by selling exclusive content to their viewers through live cam sites, which can reach more than 20,000 viewers at once. Live cam sites are very popular; big sites like Chaturbate or LiveJasmin are among the 100 most popular websites according to Alexa Internet. Other formats Other formats include text and audio files. While pornographic and erotic stories, distributed as text files, web pages, and via message boards and newsgroups, have been semi-popular, audio porn, via formats like MP3 and FLV, has increased in popularity.
Audio porn can include recordings of people having sex or merely reading erotic stories. (Pornographic magazines are available in Zinio format, which provides a reader program to enable access.) Combination formats, such as webteases that consist of images and text, have also emerged. Legal status The Internet is an international network and there are currently no international laws regulating pornography; each country deals with Internet pornography differently. Generally, in the United States, if the act depicted in the pornographic content is legal in the jurisdiction that it is being distributed from, then the distributor of such content would not be in violation of the law regardless of whether it is accessible in countries where it is illegal. This does not apply to those who access the pornography, however, as they could still be prosecuted under local laws in their country. Due to enforcement problems in anti-pornography laws over the Internet, countries that prohibit or heavily restrict access to pornography have taken other approaches to limit access by their citizens, such as employing content filters. Many activists and politicians have expressed concern over the easy availability of Internet pornography, especially to minors. This has led to a variety of attempts to restrict children's access to Internet pornography, such as the 1996 Communications Decency Act in the United States. Some companies use an Adult Verification System (AVS) to deny access to pornography by minors. However, most Adult Verification Systems charge fees that are substantially higher than the actual costs of any verification they do (for example, in excess of $10/month) and are really part of a revenue collection scheme in which sites encourage users to sign up for an AVS system and get a percentage of the proceeds in return. In response to concerns with regard to children accessing age-inappropriate content, the adult industry, through the Association of Sites Advocating Child Protection (ASACP), began a self-labeling initiative called the Restricted to Adults label (RTA). This label is recognized by many web filtering products and is entirely free to use. Most employers have distinct policies against the accessing of any kind of online pornographic material from company computers, in addition to which some have also installed comprehensive filters and logging software in their local computer networks. One area of Internet pornography that has been the target of the strongest efforts at curtailment is child pornography. Because of this, most Internet pornography websites based in the U.S. have a notice on their front page that they comply with 18 USC Section 2257, which requires the keeping of records regarding the age of the people depicted in photographs, along with displaying the name of the company's record keeper. Some site operators outside the U.S. have begun to include this compliance statement on their websites as well. On April 8, 2008, Evil Angel and its owner John Stagliano were charged in federal court with multiple counts of obscenity. One count was for "using an interactive computer service to display an obscene movie trailer in a manner available to a person under 18 years of age." Web filters and blocking software A variety of content-control, parental control and filtering software is available to block pornography and other classifications of material from particular computers or (usually company-owned) networks.
Commercially available Web filters include Bess, Net Nanny, SeeNoEvil, SurfWatch, and others. Various work-arounds and bypasses are available for some of these products; Peacefire is one of the most notable clearinghouses for such countermeasures. Child pornography The Internet has radically changed how child pornography is reproduced and disseminated, and, according to the United States Department of Justice, resulted in a massive increase in the "availability, accessibility, and volume of child pornography." The production of child pornography has become very profitable, bringing in several billion dollars a year, and is no longer limited to pedophiles. Philip Jenkins notes that there is "overwhelming evidence that [child pornography] is all but impossible to obtain through nonelectronic means." In 2006, the International Centre for Missing & Exploited Children (ICMEC) published a report of findings on the presence of child pornography legislation in the then-184 INTERPOL member countries. It later updated this information, in subsequent editions, to include 196 UN member countries. The report, entitled “Child Pornography: Model Legislation & Global Review,” assesses whether national legislation: (1) exists with specific regard to child pornography; (2) provides a definition of child pornography; (3) expressly criminalizes computer-facilitated offenses; (4) criminalizes the knowing possession of child pornography, regardless of intent to distribute; and (5) requires ISPs to report suspected child pornography to law enforcement or to some other mandated agency. ICMEC stated that it found in its initial report that only 27 countries had legislation needed to deal with child pornography offenses, while 95 countries did not have any legislation that specifically addressed child pornography, making child pornography a global issue worsened by the inadequacies of domestic legislation. The 7th Edition Report found that still only 69 countries had legislation needed to deal with child pornography offenses, while 53 did not have any legislation specifically addressing the problem. Over seven years of research from 2006–12, ICMEC and its Koons Family Institute on International Law and Policy report that they have worked with 100 countries that have revised or put in place new child pornography laws. The NCMEC estimated in 2003 that 20 percent of all pornography traded over the Internet was child pornography, and that since 1997, the number of child pornography images available on the Internet had increased by 1,500 percent. Regarding Internet proliferation, the US DOJ states that "At any one time there are estimated to be more than one million pornographic images of children on the Internet, with 200 new images posted daily." They also note that a single offender arrested in the United Kingdom possessed 450,000 child pornography images, and that a single child pornography site received a million hits in a month. Further, much of the trade in child pornography takes place at hidden levels of the Internet. It has been estimated that between 50,000 and 100,000 pedophiles are involved in organized pornography rings around the world, and that one third of them operate from the United States. Digital cameras and Internet distribution facilitated by the use of credit cards and the ease of transferring images across national borders has made it easier than ever before for users of child pornography to obtain the photographs and videos. 
In 2007, the British-based Internet Watch Foundation reported that child pornography on the Internet was becoming more brutal and graphic, and the number of images depicting violent abuse had risen fourfold since 2003. The CEO stated "The worrying issue is the severity and the gravity of the images is increasing. We're talking about prepubescent children being raped." About 80 percent of the children in the abusive images were female, and 91 percent appeared to be children under the age of 12. Prosecution is difficult because multiple international servers are used, sometimes to transmit the images in fragments to evade the law. See also Cybersex Internet sex addiction Adult movie theater Amateur pornography Revenge porn Sexting Susanna Paasonen Aylo Notes References Bibliography External links XBIZ, Adult Internet News, Market Analysis Articles, and Webmaster Resources Straight Dope: How Much of All Internet Traffic is Pornography?. . "Cyberporn: The Crack Cocaine of Sexual Addiction", by Antonella Gambotto-Burke, Men's Style, December 2006. Internet New media Multimedia Internet censorship
Internet pornography
Technology
4,294
31,597,864
https://en.wikipedia.org/wiki/Bu%C5%A1t%C4%9Bhrad%20slag%20heap
Buštěhrad slag heap (Czech: Buštěhradská halda; asl) is a huge artificial hill between the municipalities of Kladno-Vrapice, Buštěhrad and Stehelčeves near Kladno in the Central Bohemian Region of the Czech Republic, which arose in the late 20th century as a dumping site for slag from the Kladno ironworks and for other industrial waste. It is one of the most massive heaps in the Kladno area and, particularly when viewed from the northeast, is by far the most visible and noticeable feature of the landscape, with the shape of a mesa. The heap is situated a short distance NE of the centre of Kladno, on the right slope of the shallow valley of the Dřetovický potok stream, at a place where the borders of the three municipalities meet. Most of the heap (the NE and central parts) lies in the cadastre of Stehelčeves. The southern part is on the territory of the town of Buštěhrad. Both of these settlements lie within a few hundred paces of the heap. The westernmost part of the heap reaches the territory of the city of Kladno, specifically its suburb of Vrapice, just above the medieval church of St. Nicholas and the municipal wastewater treatment plant. The heap covers an area of approximately 55 hectares and has the shape of a trapezoid, slightly stretched to the south. Its dimensions are 800 × 600 m, about 1,100 m on the diagonal. Its height varies depending on the surrounding terrain: on the southern side it is a little over 20 meters, while on the north the heap rises nearly 70 m above the bottom of the valley. The main, approximately trapezoidal body of the heap is extended by a minor salient in the northwest. This is the only section of the heap still active at the present time; all the rest is no longer in use, so its overall size is not increasing. The heap was founded after World War II, and its core activity took place from the 1950s through the 1980s. A rail siding, connected to the Kladno industrial complexes, bridges the local Vrapice-Buštěhrad road at the southwestern edge of the heap. From this point the body of the heap was gradually enlarged. According to current estimates, the heap contains about 23 to 27 million tons of material: blast furnace and steelmaking slag, treatment sludges, cogeneration ash, cinders, and sludge from wastewater treatment plants. Smaller amounts of hazardous waste, both loose and in barrels, were also placed there, including lead- and cyanide-containing material. Public outrage arose over a 1988 case in which a group of children from the surrounding villages climbed the Buštěhrad heap to play and were poisoned; the case was later amplified by media attention, notably in the popular magazine Mladý svět (Young World). The incident only sped up the ongoing winding-down of operations and the reclamation of the heap. In the late 1980s and early 1990s, most of the heap was covered with clay. Now most of the heap is covered in wild vegetation, and trees have gradually begun to grow again. The heap has become a habitat for wild animals, especially hares and pheasants, and its role in the ecosystem of the surrounding countryside can even be compared to that of the nearby natural monument Vinařice mountain. At present the heap does not represent a significant nuisance to people in the area; there is no longer any significant spreading of dust, toxins, and the like. The strongly alkaline environment of the heap, which limits the leaching of pollutants, probably contributes to this positive state. There are no studies that would provide accurate information about the composition of the heap's interior and the processes ongoing there, or that would help predict the heap's future behaviour.
The Buštěhrad heap attracted attention again in 2005 and 2006, when its owner, the company REAL Kladno Leasing Ltd., came up with a plan to progressively extract part of the heap and utilize the material (especially steel-plant slag) for construction and other purposes; the process was to last 18 years and would have cost the public purse 10 to 15 billion Czech crowns. Realization of this plan never began, because of resistance from residents of the neighbouring villages and beyond. References Václav Cílek, "Industrial nature – case study: Buštěhrad slag heap" (in Czech). In: Ochrana přírody, vol. 2002, no. 10, pp. 313–316. Jiří X. Doležal, "Think ecological, act economical!" (in Czech). In: Reflex, vol. 2006, no. 12. Kladno District Soil improvers Steelmaking Geography of the Central Bohemian Region
Buštěhrad slag heap
Chemistry
934
1,267,364
https://en.wikipedia.org/wiki/Teredo%20tunneling
In computer networking, Teredo is a Microsoft transition technology that gives full IPv6 connectivity to IPv6-capable hosts that are on the IPv4 Internet but have no native connection to an IPv6 network. Unlike similar protocols such as 6to4, it can perform its function even from behind network address translation (NAT) devices such as home routers. Teredo operates using a platform-independent tunneling protocol that provides IPv6 (Internet Protocol version 6) connectivity by encapsulating IPv6 datagram packets within IPv4 User Datagram Protocol (UDP) packets. Teredo routes these datagrams on the IPv4 Internet and through NAT devices. Teredo nodes elsewhere on the IPv6 network (called Teredo relays) receive the packets, un-encapsulate them, and pass them on. Teredo is a temporary measure. In the long term, all IPv6 hosts should use native IPv6 connectivity. Teredo should be disabled when native IPv6 connectivity becomes available. Christian Huitema developed Teredo at Microsoft, and the IETF standardized it as RFC 4380. The Teredo server listens on UDP port 3544. Purpose 6to4, the most common IPv6-over-IPv4 tunneling protocol, requires that the tunnel endpoint have a public IPv4 address. However, many hosts currently attach to the IPv4 Internet through one or several NAT devices, usually because of the IPv4 address shortage. In such a situation, the only available public IPv4 address is assigned to the NAT device, and the 6to4 tunnel endpoint must be implemented on the NAT device itself. The problem is that many NAT devices currently deployed cannot be upgraded to implement 6to4, for technical or economic reasons. Teredo alleviates this problem by encapsulating IPv6 packets within UDP/IPv4 datagrams, which most NATs can forward properly. Thus, IPv6-aware hosts behind NATs can serve as Teredo tunnel endpoints even when they don't have a dedicated public IPv4 address. In effect, a host that implements Teredo can gain IPv6 connectivity with no cooperation from the local network environment. In the long term, all IPv6 hosts should use native IPv6 connectivity. The temporary Teredo protocol includes provisions for a sunset procedure: a Teredo implementation should provide a way to stop using Teredo connectivity when IPv6 matures and connectivity becomes available using a less brittle mechanism. As of IETF 89, Microsoft planned to deactivate its Teredo servers for Windows clients in the first half of 2014 (exact date TBD), and to encourage the deactivation of publicly operated Teredo relays. Overview The Teredo protocol performs several functions: Diagnoses UDP over IPv4 (UDPv4) connectivity and discovers the kind of NAT present (using a simplified replacement for the STUN protocol) Assigns a globally routable unique IPv6 address to each host using it Encapsulates IPv6 packets inside UDPv4 datagrams for transmission over an IPv4 network (this includes NAT traversal) Routes traffic between Teredo hosts and native (or otherwise non-Teredo) IPv6 hosts Node types Teredo defines several different kinds of nodes: Teredo client A host that has IPv4 connectivity to the Internet from behind a NAT and uses the Teredo tunneling protocol to access the IPv6 Internet. Teredo clients are assigned an IPv6 address that starts with the Teredo prefix (2001::/32). Teredo server A well-known host used for the initial configuration of a Teredo tunnel.
A Teredo server never forwards any traffic for the client (apart from IPv6 pings), and therefore has modest bandwidth requirements (a few hundred bits per second per client at most), which means a single server can support many clients. Additionally, a Teredo server can be implemented in a fully stateless manner, thus using the same amount of memory regardless of how many clients it supports. Teredo relay The remote end of a Teredo tunnel. A Teredo relay must forward all of the data on behalf of the Teredo clients it serves, with the exception of direct Teredo client to Teredo client exchanges. Therefore, a relay requires a lot of bandwidth and can only support a limited number of simultaneous clients. Each Teredo relay serves a range of IPv6 hosts (e.g. a single campus or company, an ISP or a whole operator network, or even the whole IPv6 Internet); it forwards traffic between any Teredo clients and any host within said range. Teredo host-specific relay A Teredo relay whose range of service is limited to the very host it runs on. As such, it has no particular bandwidth or routing requirements. A computer with a host-specific relay uses Teredo to communicate with Teredo clients, but sticks to its main IPv6 connectivity provider to reach the rest of the IPv6 Internet. IPv6 addressing Each Teredo client is assigned a public IPv6 address, which is constructed as follows (the higher order bit is numbered 0): Bits 0 to 31 hold the Teredo prefix (2001::/32). Bits 32 to 63 embed the primary IPv4 address of the Teredo server that is used. Bits 64 to 79 hold some flags and other bits; the format for these 16 bits, MSB first, is "CRAAAAUG AAAAAAAA". The "C" bit was set to 1 if the Teredo client is located behind a cone NAT, 0 otherwise, but RFC 5991 changed it to always be 0 to avoid revealing this fact to strangers. The "R" bit is currently unassigned and should be sent as 0. The "U" and "G" bits are set to 0 to emulate the "Universal/local" and "Group/individual" bits in MAC addresses. The 12 "A" bits were 0 in the original RFC 4380 specification, but were changed to random bits chosen by the Teredo client in RFC 5991 to provide the Teredo node with additional protection against IPv6-based scanning attacks. Bits 80 to 95 contain the obfuscated UDP port number. This is the port number that the NAT maps to the Teredo client, with all bits inverted. Bits 96 to 127 contain the obfuscated IPv4 address. This is the public IPv4 address of the NAT, with all bits inverted. As an example, the IPv6 address 2001:0000:4136:e378:8000:63bf:3fff:fdd2 refers to a Teredo client that: Uses the Teredo server at address 65.54.227.120 (4136e378 in hexadecimal) Is behind a cone NAT and is not fully compliant with RFC 5991 (bit 64 is set) Is probably (99.98%) not compliant with RFC 5991 (the 12 random bits are all 0, which happens less than 0.025% of the time) Uses UDP mapped port 40000 on its NAT (the bitwise complement of 63bf hexadecimal is 9c40, i.e. the decimal number 40000) Has a NAT public IPv4 address of 192.0.2.45 (the bitwise complement of 3ffffdd2 is c000022d, which is to say, 192.0.2.45) Servers Teredo clients use Teredo servers to autodetect the kind of NAT they are behind (if any), through a simplified STUN-like qualification procedure. Teredo clients also maintain a binding on their NAT toward their Teredo server by sending a UDP packet at regular intervals. That ensures that the server can always contact any of its clients, which is required for NAT hole punching to work properly.
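The address layout described above can be decoded mechanically. Below is a minimal Python sketch using only the standard ipaddress module; it is illustrative rather than normative, and the helper name is an invention for the example:

```python
import ipaddress

def decode_teredo(addr: str):
    """Split a Teredo IPv6 address into (server IPv4, flags, UDP port, client IPv4)."""
    b = ipaddress.IPv6Address(addr).packed            # 16 bytes, network byte order
    if b[:4] != bytes.fromhex("20010000"):            # bits 0-31: Teredo prefix 2001::/32
        raise ValueError("not a Teredo address")
    server = ipaddress.IPv4Address(b[4:8])            # bits 32-63: Teredo server IPv4
    flags = int.from_bytes(b[8:10], "big")            # bits 64-79: flags field
    port = int.from_bytes(b[10:12], "big") ^ 0xFFFF   # bits 80-95: mapped port, bits inverted
    client = ipaddress.IPv4Address(                   # bits 96-127: NAT IPv4, bits inverted
        int.from_bytes(b[12:16], "big") ^ 0xFFFFFFFF)
    return server, flags, port, client

print(decode_teredo("2001:0000:4136:e378:8000:63bf:3fff:fdd2"))
# (IPv4Address('65.54.227.120'), 32768, 40000, IPv4Address('192.0.2.45'))
```

Running it on the worked example reproduces the values given above: server 65.54.227.120, flags 0x8000, mapped port 40000, and NAT address 192.0.2.45. Python's ipaddress module also ships a built-in IPv6Address.teredo property that returns the embedded (server, client) pair, so only the flags and port require the manual bit inversion.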
If a Teredo relay (or another Teredo client) must send an IPv6 packet to a Teredo client, it first sends a Teredo bubble packet to the client's Teredo server, whose IP address it infers from the Teredo IPv6 address of the Teredo client. The server then forwards the bubble to the client, so the Teredo client software knows it must do hole punching toward the Teredo relay. Teredo servers can also transmit ICMPv6 packet from Teredo clients toward the IPv6 Internet. In practice, when a Teredo client wants to contact a native IPv6 node, it must locate the corresponding Teredo relay, i.e., to which public IPv4 and UDP port number to send encapsulated IPv6 packets. To do that, the client crafts an ICMPv6 Echo Request (ping) toward the IPv6 node, and sends it through its configured Teredo server. The Teredo server de-capsulates the ping onto the IPv6 Internet, so that the ping should eventually reach the IPv6 node. The IPv6 node should then reply with an ICMPv6 Echo Reply, as mandated by RFC 2460. This reply packet is routed to the closest Teredo relay, which — finally — tries to contact the Teredo client. Maintaining a Teredo server requires little bandwidth, because they are not involved in actual transmission and reception of IPv6 traffic packets. Also, it does not involve any access to the Internet routing protocols. The only requirements for a Teredo server are: The ability to emit ICMPv6 packets with a source address belonging to the Teredo prefix Two distinct public IPv4 addresses. Though not written down in the official specification, Microsoft Windows clients expect both addresses to be consecutive — the second IPv4 address is for NAT detection Public Teredo servers: teredo.trex.fi (Finland) Former public Teredo servers: teredo.remlab.net / teredo-debian.remlab.net (Germany), now redirects to teredo.trex.fi Relays A Teredo relay potentially requires much network bandwidth. Also, it must export (advertise) a route toward the Teredo IPv6 prefix (2001::/32) to other IPv6 hosts. That way, the Teredo relay receives traffic from the IPv6 hosts addressed to any Teredo client, and forwards it over UDP/IPv4. Symmetrically, it receives packets from Teredo clients addressed to native IPv6 hosts over UDP/IPv4 and injects those into the native IPv6 network. In practice, network administrators can set up a private Teredo relay for their company or campus. This provides a short path between their IPv6 network and any Teredo client. However, setting up a Teredo relay on a scale beyond that of a single network requires the ability to export BGP IPv6 routes to the other autonomous systems (AS's). Unlike 6to4, where the two halves of a connection can use different relays, traffic between a native IPv6 host and a Teredo client uses the same Teredo relay, namely the one closest to the native IPv6 host network-wise. The Teredo client cannot localize a relay by itself (since it cannot send IPv6 packets by itself). If it needs to initiate a connection to a native IPv6 host, it sends the first packet through the Teredo server, which sends a packet to the native IPv6 host using the client's Teredo IPv6 address. The native IPv6 host then responds as usual to the client's Teredo IPv6 address, which eventually causes the packet to find a Teredo relay, which initiates a connection to the client (possibly using the Teredo server for NAT piercing). The Teredo Client and native IPv6 host then use the relay for communication as long as they need to. 
This design means that neither the Teredo server nor the client needs to know the IPv4 address of any Teredo relays. They find a suitable one automatically via the global IPv6 routing table, since all Teredo relays advertise the network 2001::/32. On March 30, 2006, the Italian ISP ITGate was the first AS to start advertising a route toward 2001::/32 on the IPv6 Internet, so that RFC 4380-compliant Teredo implementations would be fully usable. As of 16 February 2007, it is no longer functional. In Q1 2009, the IPv6 backbone Hurricane Electric enabled 14 Teredo relays in an anycast implementation, advertising 2001::/32 globally. The relays were located in Seattle, Fremont, Los Angeles, Chicago, Dallas, Toronto, New York, Ashburn, Miami, London, Paris, Amsterdam, Frankfurt, and Hong Kong. It is expected that large network operators will maintain Teredo relays. As with 6to4, it remains unclear how well the Teredo service will scale up if a large proportion of Internet hosts start using IPv6 through Teredo in addition to IPv4. While Microsoft has operated a set of Teredo servers since it released the first Teredo pseudo-tunnel for Windows XP, it has never provided a Teredo relay service for the IPv6 Internet as a whole. Clients Officially, this mechanism was created to provide IPv6 connectivity to PCs running Microsoft Windows XP and later that have only IPv4 connectivity. The client connects to ipv6.microsoft.com and works in conjunction with the IP Helper service and the Teredo Tunneling Adapter Interface driver. The service also opens a UPnP port on the router for relaying. Limitations Teredo is not compatible with all NAT devices. Using the terminology of RFC 3489, it supports full cone, restricted, and port-restricted NAT devices, but does not support symmetric NATs. The original Shipworm specification that led to the final Teredo protocol also supported symmetric NATs, but support was dropped due to security concerns. Researchers at the National Chiao Tung University in Taiwan later proposed SymTeredo, which enhanced the original Teredo protocol to support symmetric NATs, and the Microsoft and Miredo implementations implement certain unspecified non-standard extensions to improve support for symmetric NATs. However, connectivity between a Teredo client behind a symmetric NAT and a Teredo client behind a port-restricted or symmetric NAT seemingly remains impossible. Indeed, Teredo assumes that when two clients exchange encapsulated IPv6 packets, the mapped/external UDP port numbers used will be the same as those that were used to contact the Teredo server (and used in building the Teredo IPv6 address). Without this assumption, it would not be possible to establish direct communication between the two clients, and a costly relay would have to be used to perform triangle routing. A Teredo implementation tries to detect the type of NAT at startup, and will refuse to operate if the NAT appears to be symmetric. (This limitation can sometimes be worked around by manually configuring a port forwarding rule on the NAT box, which requires administrative access to the device.) Teredo can only provide a single IPv6 address per tunnel endpoint. As such, it is not possible to use a single Teredo tunnel to connect multiple hosts, unlike 6to4 and some point-to-point IPv6 tunnels. The bandwidth available to all Teredo clients toward the IPv6 Internet is limited by the availability of Teredo relays, which are no different from 6to4 relays in that respect.
Alternatives 6to4 requires a public IPv4 address, but provides a large 48-bit IPv6 prefix for each tunnel endpoint, and has a lower encapsulation overhead. Point-to-point tunnels can be more reliable and are more accountable than Teredo, and typically provide permanent IPv6 addresses that do not depend on the IPv4 address of the tunnel endpoint. Some point-to-point tunnel brokers also support UDP encapsulation to traverse NATs (for instance, the AYIYA protocol can do this). On the other hand, point-to-point tunnels normally require registration. Automated tools (for instance, AICCU) make it easy to use point-to-point tunnels. Security considerations Exposure Teredo increases the attack surface by assigning globally routable IPv6 addresses to network hosts behind NAT devices, which would otherwise be unreachable from the Internet. By doing so, Teredo potentially exposes any IPv6-enabled application with an open port to the outside. Teredo tunnel encapsulation can also cause the contents of the IPv6 data traffic to become invisible to packet inspection software, facilitating the spread of malware. Finally, Teredo exposes the IPv6 stack and the tunneling software to attacks should they have any remotely exploitable vulnerability. In order to reduce the attack surface, the Microsoft IPv6 stack has a "protection level" socket option. This allows applications to specify from which sources they are willing to accept IPv6 traffic: from the Teredo tunnel, from anywhere except Teredo (the default), or only from the local intranet. The Teredo protocol also encapsulates detailed information about the tunnel's endpoint in its data packets. This information can help potential attackers by increasing the feasibility of an attack, and/or by reducing the effort required. Firewalling, filtering, and blocking For a Teredo pseudo-tunnel to operate properly, outgoing UDP packets to port 3544 must be unfiltered. Moreover, replies to these packets (i.e., "solicited traffic") must also be unfiltered. This corresponds to the typical setup of a NAT and its stateful firewall functionality. Teredo tunneling software reports a fatal error and stops if outgoing IPv4 UDP traffic is blocked. DoS via routing loops In 2010, new methods to create denial-of-service attacks via routing loops that use Teredo tunnels were uncovered. They are relatively easy to prevent. Default use in MS-Windows As of Windows 10, version 1803, Microsoft Windows disables Teredo by default. If needed, this transitional technology can be enabled via a CLI command or Group Policy. Implementations Several implementations of Teredo are currently available: Windows XP SP2 includes a client and host-specific relay (also in the Advanced Networking Pack for Service Pack 1). Windows Server 2003 has a relay and server provided under the Microsoft Beta program. Windows Vista and Windows 7 have built-in support for Teredo with an unspecified extension for symmetric NAT traversal. However, if only a link-local and Teredo address are present, these operating systems don't try to resolve IPv6 DNS AAAA records if a DNS A record is present, in which case they use IPv4. Therefore, only literal IPv6 URLs typically use Teredo. This behavior can be modified in the registry. Windows 10, version 1803 and later disable Teredo by default, as noted above.
Miredo is a client, relay, and server for Linux, *BSD, and Mac OS X. ng_teredo is a relay and server based on netgraph for FreeBSD, from LIP6 and 6WIND. NICI-Teredo is a relay for the Linux kernel and a userland Teredo server, developed at the National Chiao Tung University. Choice of the name The initial nickname of the Teredo tunneling protocol was Shipworm. The idea was that the protocol would pierce through NAT devices, much as the shipworm (a kind of marine wood-boring clam) bores tunnels through wood. Shipworms have been responsible for the loss of many wooden hulls. Christian Huitema, in the original draft, noted that the shipworm "only survives in relatively clean and unpolluted water; its recent comeback in several Northern American harbors is a testimony to their newly retrieved cleanliness. The Shipworm service should, in turn, contributes to a newly retrieved transparency of the Internet." To avoid confusion with computer worms, Huitema later changed the protocol's name from Shipworm to Teredo, after the genus name of the shipworm Teredo navalis. References External links Teredo Overview on Microsoft TechNet Current anycast Teredo BGP routes Teredo: Tunneling IPv6 over UDP through Network Address Translations (NATs). RFC 4380, C. Huitema. February 2006. JavaScript Teredo-IP address calculator Internet architecture IPv6 transition technologies Tunneling protocols
Teredo tunneling
Technology,Engineering
4,280
61,882,775
https://en.wikipedia.org/wiki/Caterina%20Ducati
Caterina Ducati is a Professor of Nanomaterials in the Department of Materials at the University of Cambridge. She serves as Director of the University of Cambridge Master's programme in Micro- and Nanotechnology Enterprise as well as leading teaching in the Nanotechnology Doctoral Training Centre. Early life and education Ducati was born in Milan. She studied at the University of Milan, where she earned an undergraduate degree in physics. Her research project involved designing a time-of-flight mass spectrometer for supersonic cluster beams under the supervision of Paolo Milani. She moved to the University of Cambridge Department of Engineering for her graduate studies, where she worked with John Robertson. Her doctorate considered nanostructured carbon for electrochemistry as well as the relationship between morphology, crystallographic phases and electronic properties in nanomaterials. This included the development of carbon nanotubes and investigations into their growth models using transmission electron microscopy. Research and career In 2003 Ducati was awarded a Knowledge Transfer Partnership fellowship working on the 4151 programme with Alphasense Limited. In 2004 she was made a Royal Society Dorothy Hodgkin fellow, and started to research metal oxide nanostructures for catalysis. She was simultaneously awarded a Sackler junior fellowship. She was subsequently awarded a Royal Society University Research Fellowship to explore electron microscopy of nanostructures, and was based in Churchill College, Cambridge. This involved developing transmission electron microscopy to study the nanoscale properties of solar cells, which allows a better understanding of how electrons move through a structured anode. In 2009 Ducati was made a lecturer in the Department of Materials at the University of Cambridge. She researches the degradation of nanostructured solar cells and, in collaboration with Paul Midgley and Clare Grey, lithium-ion batteries. She was awarded a European Research Council Starting Grant to study photoactive nanomaterials and devices, and a Proof of Concept grant to study metal-metal oxide nanocomposites for air purification. She was elected to AcademiaNet in 2011. Ducati has worked with the Institute of Physics Electron Microscopy and Analysis group and the Nanoscale Physics and Technology Group. She worked with Rachel Oliver on the delivery of the Master's course in Micro- & Nanotechnology Enterprise. She was promoted to Professor of Nanomaterials in 2019 and serves as Tutor and Director of Studies of Materials Science in Trinity College, Cambridge. She has led activities at Trinity to improve the representation of women scientists. Awards In 2018, Ducati was awarded the Royal Microscopical Society Medal for Innovation in Applied Microscopy for Engineering and Physical Sciences. Personal life Ducati has two sons born in 2003 and 2007. References Living people Italian women scientists Italian women physicists University of Milan alumni Alumni of the University of Cambridge Fellows of Trinity College, Cambridge Italian materials scientists Women materials scientists and engineers Year of birth missing (living people)
Caterina Ducati
Materials_science,Technology
583
58,980,354
https://en.wikipedia.org/wiki/Lei%20Stanley%20Qi
Lei "Stanley" Qi () is an associate professor in the department of bioengineering, and the department of chemical and systems biology at Stanford University. Qi led the development of the first catalytically dead Cas9 lacking endonuclease activity (dCas9), which is the basis for CRISPR interference (CRISPRi). His laboratory subsequently developed CRISPR-Genome Organization (CRISPR-GO). Qi is a co-inventor of the University of California patent on the CRISPR gene-editing technology. Early life and education Qi obtained his B.S. in physics and math from Tsinghua University, China, Master in physics from UC Berkeley, and PhD in bioengineering from UC Berkeley. During his PhD work at Berkeley, he studied synthetic biology with Adam Arkin, and was the first to explore engineering the CRISPR for targeted gene editing and gene regulation with Jennifer Doudna. After PhD, he performed independent research work as a faculty fellow at UCSF. He joined the Stanford faculty in 2014. Award Qi has won awards, including NIH Director's Early Independence Award, Pew Biomedical Scholar, and Alfred. P. Sloan Fellowship. References External links Qi Lab in Stanford Qi Lab in UCSF 1983 births Living people American bioengineers Chinese bioengineers Tsinghua University alumni University of California, Berkeley alumni Stanford University faculty Synthetic biologists American biochemists Chinese biochemists People from Weifang Chemists from Shandong Educators from Shandong Biologists from Shandong Chinese emigrants to the United States Sloan Research Fellows University of California, San Francisco faculty
Lei Stanley Qi
Biology
325
24,568,209
https://en.wikipedia.org/wiki/C.%20Allin%20Cornell
Carl Allin Cornell (September 19, 1938 – December 14, 2007) was an American civil engineer, researcher, and professor who made important contributions to reliability theory and earthquake engineering and, along with Luis Esteva, developed the field of probabilistic seismic hazard analysis by publishing the seminal document of the field in 1968. Biography Cornell was born in Mobridge, South Dakota in 1938. He received his B.A. in architecture in 1960 and M.S. and Ph.D. in civil engineering in 1961 and 1964 respectively, all from Stanford University. He held a professorship at the Massachusetts Institute of Technology from 1964 to 1983, and in 1983 became a research professor at Stanford. He was awarded the Moisseiff Award (1977), two Norman Medals (1983 and 2003), and the Freudenthal Medal (1988), all from the American Society of Civil Engineers (ASCE). He also received the Harry Fielding Reid Medal of the Seismological Society of America, their highest honor (2001), and their William B. Joyner Memorial Lecture award (2005), as well as the Earthquake Engineering Research Institute's highest honor, the Housner Medal, in 2003. He was a fellow of the American Geophysical Union (2002) and member of the National Academy of Engineering (1981). His wife was Elisabeth Pate-Cornell, formerly chair of Stanford's Department of Management Science and Engineering, and one of his five children is Eric Allin Cornell, Nobel Laureate in Physics. He is best known for his 1968 seminal paper "Engineering Seismic Risk Analysis" that started the field of probabilistic seismic hazard analysis; his work in reliability, especially on second-moment methods and reliability-based code calibration; and his development of the probabilistic framework for performance-based earthquake engineering that became the unifying equation of the Pacific Earthquake Engineering Research Center. His 1971 book, Probability, Statistics, and Decision for Civil Engineers (coauthored with Jack Benjamin), exposed an entire generation of civil and structural engineering students to the field of probabilistic modeling and decision analysis, and remains in use for classroom curriculum to this day. At the quadrennial International Conference on Applications of Statistics and Probability in Civil Engineering, the International Civil Engineering Risk and Reliability Association (CERRA) awards the C. Allin Cornell Award to one individual. In 2009, the award was renamed from the CERRA Award to the C. Allin Cornell Award in honor of its first recipient, and was awarded under its new name in 2011. Cornell received the award in 1987. He died aged 69 at Stanford University Medical Center; he had been struggling with cancer for two years. See also Probabilistic risk assessment Seismic hazard Seismic risk References External links at the Wayback Machine. Members of the United States National Academy of Engineering Massachusetts Institute of Technology faculty Stanford University School of Engineering alumni Stanford University School of Engineering faculty 1938 births 2007 deaths Earthquake engineering People from Mobridge, South Dakota Fellows of the American Geophysical Union People from Portola Valley, California
C. Allin Cornell
Engineering
619
2,898,851
https://en.wikipedia.org/wiki/Upsilon%20Aurigae
Upsilon Aurigae, Latinised from υ Aurigae, is the Bayer designation for a single star in the northern constellation of Auriga. It has an apparent visual magnitude of 4.74, which means it is bright enough to be seen with the naked eye. Based upon parallax measurements, this star is approximately distant from the Earth. It is drifting further away with a radial velocity of +38 km/s. This is an evolved red giant star with a stellar classification of M0 III. It is a suspected variable star and is currently on the asymptotic giant branch, which means it is generating energy through the fusion of helium along a shell surrounding a small, inert core of carbon and oxygen. The star is two billion years old with 1.64 times the mass of the Sun and has expanded to 61 times the Sun's radius. It is radiating 1,165 times the Sun's luminosity from its photosphere at an effective temperature of . References External links HR 2011 Image Upsilon Aurigae M-type giants Asymptotic-giant-branch stars Auriga Aurigae, Upsilon Durchmusterung objects Aurigae, 31 038944 027639 2011
Upsilon Aurigae
Astronomy
265
1,960,919
https://en.wikipedia.org/wiki/Mary%20Ward%20%28scientist%29
Mary Ward (née King; 27 April 1827 – 31 August 1869) was an Irish naturalist, astronomer, microscopist, author, and artist. She was killed when she fell under the wheels of an experimental steam car built by her cousins. As the event occurred in 1869, she is the first person known to have been killed by a motor vehicle. Early life She was born Mary King in Ballylin near present-day Ferbane, County Offaly, on 27 April 1827, the youngest child of the Reverend Henry King and his wife Harriette. She and her sisters were educated at home, as were most girls at the time. However, her education was slightly different from the norm because she came from a renowned scientific family. She was interested in nature from an early age, and by the time she was three years old she was collecting insects. Interests Ward was a keen amateur astronomer, sharing this interest with her cousin William Parsons, 3rd Earl of Rosse. Parsons built the Leviathan of Parsonstown, a reflecting telescope with a six-foot mirror which remained the world's largest until 1917. Ward was a frequent visitor to Birr Castle, producing sketches of each stage of the process. Along with photographs made by Parsons's wife Mary Rosse, Ward's sketches were used to aid in the restoration of the telescope. Ward also drew insects, and the astronomer James South observed her doing so one day. She was using a magnifying glass to see the tiny details, and her drawing so impressed him that he immediately persuaded her father to buy her a microscope. A compound microscope made by Andrew Ross (model 112) was purchased for £48 12s 8d. This was the beginning of a lifelong passion. She began to read everything she could find about microscopy, and taught herself until she had an expert knowledge. She made her own slides from slivers of ivory, as glass was difficult to obtain, and prepared her own specimens. The physicist David Brewster asked her to make his microscope specimens, and used her drawings in many of his books and articles. Distinctions Universities and most societies would not accept women, but Ward obtained information any way she could. She wrote frequently to scientists, asking them about papers they had published. In 1848, Parsons was made president of the Royal Society, and visits to her cousin's London home meant that she met many scientists. She was one of only three women on the mailing list for the Royal Astronomical Society (the others were Queen Victoria and Mary Somerville, a scientist for whom Somerville College at Oxford University was named). Marriage On 6 December 1854, she married Henry Ward of Castle Ward, County Down, who in 1881 succeeded to the title of Viscount Bangor. They had three sons and five daughters, including Maxwell Ward, 6th Viscount Bangor. Her best-known descendants are her grandson Edward Ward, the foreign correspondent and seventh viscount, and his daughter, the Doctor Who actress Lalla Ward. Publications When Ward wrote her first book, Sketches with the microscope (privately printed in 1857), she apparently believed that no one would print it because of her gender or lack of academic credentials. She published 250 copies of it privately, and several hundred handbills were distributed to advertise it. The print run sold out during the next few weeks, and this was enough to make a London publisher take the risk and contract for future publication. The book was reprinted eight times between 1858 and 1880 as A World of Wonders Revealed by the Microscope.
A new full-colour facsimile edition at €20 was published in September 2019 by the Offaly Historical and Archaeological Society, with accompanying essays. Her books are: A Windfall for the Microscope (1856), A World of Wonders, Revealed by the Microscope (1857), Entomology in Sport, and Entomology in Earnest (1857, with Lady Jane Mahon), Microscope Teachings (1864), Telescope Teachings (1859). She illustrated her books and articles herself, as well as many books and papers by other scientists. Death Ward is the first known automobile fatality. William Parsons' sons had built a steam-powered car, and on 31 August 1869 she and her husband, Henry, were travelling in it with the Parsons boys (the Hons Richard Clere Parsons and the future steam turbine pioneer Charles Algernon Parsons) and their tutor, Richard Biggs. She was thrown from the car on a bend in the road at Parsonstown (present-day Birr, County Offaly). She fell under its wheels and died almost instantly. A doctor who lived near the scene arrived within moments, and found her cut, bruised, and bleeding from the ears. The fatal injury was a broken neck. It is believed that the grieving family destroyed the car after the crash. Legacy Ward's microscope, accessories, slides and books are on display in her husband's home, Castle Ward, County Down. William Parsons' home at Birr Castle, County Offaly, is also open to the public. Her great-granddaughter is the English actress and author Lalla Ward. See also Bridget Driscoll (born in Ireland, 1851/1852–1896) – first pedestrian death by automobile in Great Britain Henry H. Bliss (1830–1899) – first automobile death in the Americas Further reading The Field Day Anthology of Irish Writing, Volume IV, Irish Women's Writing and Traditions, p. 653, edited by Angela Bourke et al., NYU Press, 2002. The Field Day Anthology of Irish Writing – a short biography and an overview of further work. A Pair of New Eyes, a play by A. L. Mentxaka, deals with the life of Mary Ward and her friendship with the pioneer photographer, designer, and architect Mary Rosse (née Field). – the play was premiered at the Sean O'Casey Theatre Dublin on 5 November 2013. A second production was staged in Smock Alley Theatre Dublin in August 2014. Article in August bank holiday 2019 edition of the Irish Examiner Notes References External links Entomology in sport : and Entomology in earnest (1859) 1827 births 1869 deaths 19th-century Irish writers 19th-century Irish astronomers Irish entomologists Women entomologists Irish women scientists 19th-century Irish women scientists Irish women artists Artists from County Offaly 19th-century Irish scientists 19th-century astronomers Women astronomers People from Ferbane Scientists from County Offaly Road incident deaths in the Republic of Ireland
Mary Ward (scientist)
Astronomy
1,322
34,513,729
https://en.wikipedia.org/wiki/VFTS%20102
VFTS 102 is a star located in the Tarantula Nebula, a star-forming region in the Large Magellanic Cloud, a satellite galaxy of the Milky Way. The peculiarity of this star is its projected equatorial velocity of ~ (about ), making it the second fastest rotating massive star known alongside VFTS 285 (), preceded only by the WO star WR 142, which has a rotational velocity of . The resulting centrifugal effect tends to flatten the star; material can be lost in the loosely bound equatorial regions, allowing for the formation of a disk. The spectroscopic observations seem to confirm this, and the star is classified as Oe, possibly due to emission from such an equatorial disk of gas. This star was observed by the VLT Flames Tarantula Survey collaboration using the VLT, Very Large Telescope in Chile. One member of this team is Matteo Cantiello, an Italian astrophysicist who emigrated to the United States and is currently working at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. In 2007, together with a few collaborators, he predicted the existence of massive stars with properties very similar to VFTS 102. In this theoretical model, the extreme rotational speed is caused by the transfer of material from a companion star in a binary system. After this "cosmic dance", the donor star is predicted to explode as a supernova. The spun-up companion, instead, is likely to be ejected from the orbit and move away from its stellar neighbors at high speed. Such a star is called a runaway. VFTS 102 fits this theoretical model very well, being found to be a rapidly rotating runaway star and lying close to a pulsar and a supernova remnant. Other scenarios, like a dynamical ejection from the core of the star cluster R136, are also possible. References External links ESO Press Release STScI Press Release News on "Corriere della Sera" News on "Il Tirreno" Press Release at University of California Santa Barbara VLT Flames Tarantuala Survey Homepage Matteo Cantiello HomePage O-type main-sequence stars Emission-line stars Runaway stars Tarantula Nebula Stars in the Large Magellanic Cloud Extragalactic stars Dorado J05373924-6909510
VFTS 102
Astronomy
476
50,265,507
https://en.wikipedia.org/wiki/Heat%20transfer%20through%20fins
Fins are extensions on exterior surfaces of objects that increase the rate of heat transfer to or from the object by increasing convection. This is achieved by increasing the surface area of the body, which in turn increases the heat transfer rate. This is an efficient way of increasing the rate, since the alternatives are to increase either the heat transfer coefficient (which depends on the nature of the materials being used and the conditions of use) or the temperature gradient (which depends on the conditions of use); changing the shape of the body is usually the more practical option. Fins are therefore a very popular solution for increasing the heat transfer from surfaces and are widely used in a number of objects. The fin material should preferably have high thermal conductivity. In most applications the fin is surrounded by a fluid in motion, which heats or cools it quickly due to the large surface area, and subsequently the heat gets transferred to or from the body quickly due to the high thermal conductivity of the fin. In order to design a fin for optimal heat transfer performance with minimal cost, the dimensions and shape of the fin have to be calculated for specific applications. A common way of doing so is by creating a model of the fin and then simulating it under the required service conditions. Modeling Consider a body with fins on its outer surface, with air flowing around it. The heat transfer rate depends on Shape and geometry of the external surface Surface area of the body Velocity of the wind (or any fluid in other cases) Temperature of surroundings Modelling of the fins in this case involves experimenting on this physical model and optimizing the number of fins and fin pitch for maximum performance. One of the experimentally obtained equations for the heat transfer coefficient of the fin surface at low wind velocities is: where k = fin surface heat transfer coefficient [W/m²K], a = fin length [mm], v = wind velocity [km/h], θ = fin pitch [mm]. Another equation for high fluid velocities, obtained from experiments conducted by Gibson, is: where k = fin surface heat transfer coefficient [W/m²K], a = fin length [mm], θ = fin pitch [mm], v = wind velocity [km/h]. A more accurate equation for the fin surface heat transfer coefficient is: where k(avg) = fin surface heat transfer coefficient [W/m²K], θ = fin pitch [mm], v = wind velocity [km/h]. All these equations can be used to evaluate the average heat transfer coefficient for various fin designs. Design The momentum conservation equation for this case is the incompressible Navier-Stokes equation, \[ \rho\left(\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v}\right) = -\nabla p + \mu\,\nabla^2 \mathbf{v}. \] This is used in combination with the continuity equation. The energy equation is also needed, which is \[ \rho c_p\left(\frac{\partial T}{\partial t} + \mathbf{v}\cdot\nabla T\right) = k\,\nabla^2 T + q. \] The above equation, on solving, gives the temperature profile for the fluid region. When solved as a scalar equation, it can be used to calculate the temperatures at the fin and cylinder surfaces, reducing to \[ \nabla^2 T = 0, \] where q = internal heat generation = 0 (in this case) and \(\partial T/\partial t = 0\) due to the steady-state assumption. These flow and energy equations can be set up and solved in any simulation software, e.g. Fluent. In order to do so, all parameters of the flow and thermal conditions, such as the fluid velocity and the temperature of the body, have to be specified according to the requirement. The boundary conditions and any assumptions must also be specified. This results in velocity profiles and temperature profiles for the various surfaces, and this knowledge can be used to design the fin (see the worked sketch below). References Unit operations Transport phenomena Heat transfer
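The design procedure above ultimately rests on the classical one-dimensional fin analysis; the following is a brief worked sketch of that standard textbook result, stated here under the assumption of a straight fin of uniform cross-section with constant properties. An energy balance on a differential element gives

\[ \frac{d^2\theta}{dx^2} - m^2\,\theta = 0, \qquad \theta(x) = T(x) - T_\infty, \qquad m = \sqrt{\frac{hP}{kA_c}}, \]

where h is the convection coefficient, P the fin perimeter, k the fin thermal conductivity, and A_c the cross-sectional area. For a very long fin with base excess temperature \(\theta_b\), the solution is \(\theta(x) = \theta_b\,e^{-mx}\), and the heat transfer rate through the base is

\[ q_{\mathrm{fin}} = \sqrt{hPkA_c}\;\theta_b. \]

For example, doubling the perimeter P at a fixed cross-sectional area increases the dissipated heat by a factor of \(\sqrt{2}\), which is why fin arrays tend to favour many thin fins over a few thick ones, subject to the spacing limits captured by fin-pitch correlations such as those above.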
Heat transfer through fins
Physics,Chemistry,Engineering
713
57,845,698
https://en.wikipedia.org/wiki/School%20of%20Transportation%20Science%20and%20Engineering%2C%20HIT
School of Transportation Science and Engineering, Harbin Institute of Technology () is one of nineteen schools of Harbin Institute of Technology, and the only school of transportation engineering within the C9 League (an alliance of the top nine universities in China). The school was founded in 1995. Over years of development, the quality of its research and teaching has reached a top-ranking level, and it has been designated by the nation as one of the most important training bases for civil engineering and transportation engineers. History School of Transportation Science and Engineering, Harbin Institute of Technology traces its origin back to the railway construction program of the Sino-Russian Industrial School, which was the precursor of Harbin Institute of Technology. In 1958, the highway and urban road major was established by Professor N.S. Cai. In 1979, the school was qualified to enroll graduate students in the road and bridge majors. In 1986, the transportation engineering major was established and began to enroll graduate students. Seven years later, the School of Transportation Science and Engineering began to enroll undergraduates. In 1995, the school was founded with HIT's merger with Harbin University of Architecture. In 1998, the road and bridge majors were both qualified to enroll doctoral students. In 2000, the School of Transportation Science and Engineering was incorporated into Harbin Institute of Technology as Harbin University of Architecture became the 2nd campus of HIT. In 2009, the Transportation Information and Control Engineering major was established. Departments Since 1958, the school has established five departments and one engineering center, namely the Dept. of Road and Railway Engineering, Dept. of Bridge & Tunnel Engineering, Dept. of Transportation Engineering, Dept. of Road Materials Engineering, Dept. of Traffic Information & Control Engineering, and the Measurement Center. It also operates a state key laboratory in the urban road and traffic field and a provincial key laboratory on ITS (Intelligent Traffic Systems). Road and railway engineering has been assigned as a key discipline by both Heilongjiang Province and the Ministry of Housing and Urban-Rural Development. Transportation planning, engineering management, and bridge and tunnel engineering are listed as key disciplines of the Ministry of Housing and Urban-Rural Development. Achievements Since its foundation, the school has carried out much advanced research in highway engineering technology, highway construction and the monitoring of roads and bridges, mechanical analysis and simulation, transportation issues and their effects and solutions, materials for pavements and bridges, and applications of intelligent transportation systems. Communication The School of Transportation Science and Engineering has various inter-school and international exchanges, including student exchange programs with Imperial College London, the University of Illinois at Urbana–Champaign and the Moscow Automobile and Road Construction State Technical University (MADI). The school also has about 20 part-time adjunct professors. It has held two high-level international conferences and participated in 66 high-level international conferences. References External links Transportation engineering Harbin Institute of Technology
School of Transportation Science and Engineering, HIT
Engineering
571
5,932,001
https://en.wikipedia.org/wiki/Height%20restriction%20laws
Height restriction laws are laws that restrict the maximum height of structures. There are a variety of reasons for these measures. Some restrictions serve aesthetic values, such as blending in with other housing and not obscuring important landmarks. Other restrictions may serve a practical purpose, such as height restrictions around airports for flight safety. Height restriction laws for housing have become a source of contention by restricting housing supply, increasing housing costs, and depressing land values. Asia China New building regulations that came into force in 2020 limit the height of buildings in Chinese cities depending on their population. Cities with less than 3 million population cannot have structures rising above ; cities with populations greater than 3 million can have buildings up to a height of . Buildings are capped at in the Shenzhen Bay area due to its proximity to Shenzhen Bao'an International Airport. A similar height restriction also applies in Wuhan, with buildings limited to in its central areas due to runway approach paths to Wuhan Tianhe International Airport crossing it. Malaysia Buildings in the Petaling Jaya suburb of Kelana Jaya were previously capped at 15 floors (around in height) because of the close proximity to Subang International Airport, less than away. The height restriction was lifted in 1998 when commercial jet operations were relocated to the Kuala Lumpur International Airport in Sepang, and this saw higher buildings being erected, notably the 33-floor Ascent and New World Hotel towers at Paradigm Mall (the tallest in the area today, with heights of around ). Middle East Israel and Jordan inherited laws from the days of the British Mandate that prevent buildings from rising more than four stories above the ground except by special government permission. In Amman, these regulations have been credited with maintaining the city's architectural and urban heritage, but have also been accused of inflating housing prices and causing unsustainable urban sprawl. Myanmar Most of the tallest buildings are located in Yangon, where zoning regulations restrict the maximum height of buildings to above sea level, in order to prevent buildings from overtaking the Shwedagon Pagoda. The first ever attempt to build a skyscraper in the country, a tower in downtown Yangon, faced intense opposition from local conservationists and was cancelled in 2014. Philippines A structural height restriction applies to buildings within Intramuros, Manila, where most structures cannot be higher than from street level, and towers cannot exceed . Davao City's zoning ordinance as of 2019 imposes a height restriction on buildings in its central area due to its proximity to Francisco Bangoy International Airport, with buildings not allowed to exceed above mean sea level. Hong Kong To protect the ridge line along Hong Kong Island and in Kowloon, height restrictions are imposed according to the location of the buildings or structures. Prior to the 1998 closure of the Kai Tak Airport, many places in Kowloon had a stricter building height restriction due to its proximity to the airport. Indonesia In Bali, Indonesia, a building cannot be taller than a coconut tree, which is about . The only building that is higher than a coconut tree is the Bali Beach Hotel, because the hotel was built before the height restriction was announced. The restriction is set by a regional regulation, although how strictly it is enforced is in question.
Singapore Buildings in Raffles Place, Marina Centre, Marina Bay Sands, Bugis and Kallang have height restrictions of up to because of the proximity of Paya Lebar Air Base; as planned, these restrictions will remain until 2030. Europe In Europe, there is no official general law restricting the height of structures. There are however height restriction laws in many cities, often aimed to protect historic skylines. In Athens, buildings are not allowed to surpass twelve floors so as not to block views of the Parthenon. There are several exceptions though, such as the Athens Tower, the Atrina center and the OTE central building, which all exceed that level. This is due to them either being built far away from the centre, or to the fact that they were constructed during periods of political instability. The city's tallest structure is the Athens Tower, reaching and comprising 25 floors. In the central area of Rome, delimited by the Aurelian Walls, no building can exceed the height of the dome of St. Peter's Basilica (). A skyscraper called Torre Eurosky (Eurosky Tower), built in 2012 in the EUR neighbourhood (outside the ban area), exceeds this limit, being high. There is however a height restriction for new onshore wind turbines in the European Union, which sets their total height to . North America Canada Canada has no national height restrictions, but many individual cities do have height restriction bylaws, and building is restricted by the national aviation authority (Transport Canada) near airports. Some examples: Edmonton: Buildings in downtown Edmonton were limited to above ground level due to its proximity to Blatchford Field (City Centre Airport). The height restriction was lifted in 2013 with the airport's closure, and the first building in Edmonton to exceed 150m, JW Marriott Edmonton Ice District & Residences, was topped out in 2018 and opened in 2019. Hamilton: No buildings may exceed the height of the Niagara Escarpment, to preserve views of Lake Ontario from the Escarpment, and vice versa. Montreal: until the late 1920s, all buildings were limited to ten storeys. Currently buildings are limited to a height of and must not obstruct the view of Mount Royal, the city's central green space, with the only exception being antennas and communication towers, which are allowed to reach above mean sea level. The downtown today possesses only one building exceeding 200m, the 1000 de la Gauchetière tower, which was built as a special project in 1992. Ottawa-Gatineau: Until 1973, buildings in downtown Ottawa were limited to so that the Peace Tower, part of the parliament buildings, could dominate the skyline. Saskatoon: continues to limit building heights to a maximum of due to a flight path that bisects the downtown core; however, the recent proposal of a tower could potentially lead to the lifting of this height limit. Vancouver: maintains "view corridors" that protect views of the North Shore Mountains. It also has a density bank that allows developers to exceed maximum building height restrictions in exchange for preserving heritage buildings. Whitehorse: No buildings should be taller than four stories due to the nearby fault line. The Whitehorse Chamber of Commerce said that maintaining the height restriction of four stories would discourage businesses from coming to the city. In 2007, the city rejected the proposal to increase the height limit to eight stories. In order to exceed the height limit, a developer would have to apply for an amendment to the city's official community plan. United States Both the U.S.
Federal Aviation Administration (FAA) and the Federal Communications Commission (FCC) have a rebuttable presumption against building any antennas more than above ground level. This is to prevent those structures from being a hazard to air navigation. In recent years, the FAA has requested that height limits within of an airport runway be lowered from to , as development near airports has increased. For airports, exceptions to height restrictions are sometimes made for important infrastructure equipment, such as radio towers, or for structures older than the airport. These structures have to be marked with red and white paint, have flight safety lamps on top, or both. Often red and white paint and flight safety lamps have to be installed on high structures (taller than ) far away from airports. Height restriction laws are not always strictly enforced. Several cities in the United States have local height limits, for example: Orlando, Florida: Due to Downtown Orlando's close proximity to Orlando Executive Airport, the maximum allowable height of buildings there is . Bellevue, Washington: maximum of in Downtown Bellevue, set in the late 1990s. Madison, Wisconsin: No building located within of the Wisconsin State Capitol (its dome is high) may be higher than it (set in 1966). San Jose, California: Due to Downtown San Jose's close proximity to San Jose International Airport, no buildings within city limits surpass . Portland, Oregon: Height limits vary between throughout the city, with the primary intent being to protect views of Mount Hood and the West Hills. Washington, DC: buildings are limited to a height equal to the width of the adjacent street plus up to a maximum of on residential streets, on commercial streets, and on a small portion of Pennsylvania Avenue. The height limit was passed by the United States Congress as the Height of Buildings Act of 1899 and later amended by the Height of Buildings Act of 1910. Boston, Massachusetts: Due to the city's proximity to Logan International Airport, building height is restricted to around . Furthermore, buildings in Downtown Boston are capped even lower than . This is in order to prevent shadows from being cast on both significant historic landmarks and public parks, such as the Boston Common. Philadelphia, Pennsylvania: For many years, the city had a gentlemen's agreement not to build taller than the statue of William Penn that graced the Philadelphia City Hall. Philadelphia sports fans blamed the failure of their teams at the turn of the 21st century on the violation of this rule. The first building to exceed the height of City Hall was One Liberty Place. References External links FCC policy statement concerning tower heights near airports Statutory law Urban planning Construction law Housing law
Height restriction laws
Engineering
1,879
48,429,145
https://en.wikipedia.org/wiki/Tricholoma%20penangense
Tricholoma penangense is an agaric fungus of the genus Tricholoma. Found in Peninsular Malaysia, it was described as new to science in 1994 by English mycologist E.J.H. Corner. See also List of Tricholoma species References penangense Fungi described in 1994 Fungi of Asia Taxa named by E. J. H. Corner Fungus species
Tricholoma penangense
Biology
80
65,653,885
https://en.wikipedia.org/wiki/Curtailment%20%28electricity%29
In electric grid power generation, curtailment is the deliberate reduction in output below what could have been produced in order to balance energy supply and demand or due to transmission constraints. The definition is not strict, and several types of curtailment exist. "Economic dispatch" curtailment (at low market prices) is the most common. Curtailment is a loss of potentially useful energy, and may impact power purchase agreements. However, utilizing all available energy may require costly measures such as building new power lines or storage, which can be more expensive than letting surplus power go unused. Examples After ERCOT built a new transmission line from the Competitive Renewable Energy Zone in West Texas to the central cities in the Texas Interconnection in 2013, curtailment was reduced from 8-16% to near zero. Curtailment of wind power in western China was around 20% in 2018. In 2018, curtailment in the California grid was 460 GWh, or 0.2% of generation. Curtailment has since increased to 150-300 GWh/month in the spring of 2020 and 2021, mainly solar power at noon as part of the duck curve. Curtailment reached 20% on the island of Maui in Hawaii in the second and third quarters of 2020. Mitigation options Transmission upgrade Demand response Battery storage power station Energy forecasting, including forecasting for price, wind and solar References External links Increase in curtailment in California, 2014-2022 Curtailment curves in South Australia, peaking at 69% (Christmas 2021) Electrical engineering
Curtailment (electricity)
Engineering
316
55,725,174
https://en.wikipedia.org/wiki/Poisk%20%28computer%29
Poisk (, "The Search") is an IBM-compatible computer built by KPO Electronmash () in Kyiv, Ukrainian SSR during the Soviet era. It is based on the K1810VM88 microprocessor, a clone of the Intel 8088. Developed since 1987 and released in 1989, it was the most common IBM-compatible computer in the Soviet Union. The basic version did not include an expansion module for parallel or serial ports for connecting a printer, mouse or other devices. The computer had 128 KB (hardware versions 1.0, 1.01, 1.02, 1.03 and 1.05) or 512 KB of RAM (versions 1.04 and 1.06), and displayed CGA graphics. Unusual for an IBM-compatible computer, Poisk utilities cartridges for expanding the system's capabilities in lieu of traditional internal expansion slots found in most similar systems. Despite using CGA-like graphical video modes and 8088-compatible processor running at 5 Mhz, it was not fully IBM-compatible, lacking the Motorola 6845 display and the Intel 8237 DMA controllers, and its performance lagged behind the IBM XT due to the emulation of the alphanumeric modes using NMI and sharing RAM between CPU and display. There were three versions of this computer: Poisk, Poisk-2 and Poisk-3. The machine entered mass production in 1991, just before the Soviet collapse, and production output in the early 1990s reached several tens of thousands units a year. Technical details Poisk The Poisk was made as cheap as possible, being a monoblock with a motherboard and keyboard and an external power supply. The machine came with a KM1810VM88 processor. The CPU speed was 5.0 MHz and the machine came with 128 KB (models 1.0, 1.01, 1.02, 1.03 and 1.05) or 512 KB of RAM (models 1.04 and 1.06), a CGA compatible video adapter and four expansion slots. A monitor and tape recorder could be connected directly to the computer. Poisk-2 Poisk-2 was compatible with the PC/XT architecture on software and hardware level. The machine came with a KR1810VM86M processor with the possibility of adding a K1810VM87B coprocessor. The CPU speed was 8 MHz and the machine came with 640 KB of RAM (expandable to 2048 KB), an Hercules and Extended CGA video adapter, hard and floppy disk controller based on i82064 and i8272 chips and COM and printer ports. Poisk-3 Poisk-3 reduced manufacturing costs due to the use of high-integration microcircuits instead of discrete logic. It was produced in small batches in the early 1990s. The machine came with a K1810VM86M processor with the possibility of adding a K1810VM87B coprocessor, or with a Intel 8086-2. The CPU speed was 8 MHz and the machine came with 640 KB of RAM, an EGA video adapter and an IDE HDD controller. Emulation References Ministry of Radio Industry (USSR) computers Computer-related introductions in 1989
Poisk (computer)
Technology
680
70,053,407
https://en.wikipedia.org/wiki/III%20Zw%202
III Zw 2 is a Seyfert 1 galaxy located in the Pisces constellation. It has a redshift of 0.089 and is notable as the first of its kind to exhibit a superluminal jet. Discovery III Zw 2 was first discovered by Fritz Zwicky via a 48-inch Schmidt survey as a stellar object with faint wisps. However, it was confirmed to have a Seyfert morphology with classical broad-line characteristics based on further spectroscopic studies. It was also included in the Palomar-Green quasar sample. Characteristics The host galaxy of III Zw 2 was initially classified as a spiral galaxy. However, according to a 2009 study of its bulge and disk decomposition made with the Hubble Space Telescope, it has since been reclassified as an elliptical galaxy. It has a star-forming tidal bridge feature indicating a merger with a companion galaxy. Furthermore, III Zw 2 belongs to a class of radio-intermediate quasars and is a member of a triple galaxy system. Active nucleus The nucleus of III Zw 2 is active. In addition to its superluminal jet, the galaxy showed two distinct γ-ray flares between November 2009 and May 2010, according to observations by Fermi-LAT. It is also known to have a highly variable radio core flux density, varying by a factor of 20-30. Black hole III Zw 2 contains a supermassive black hole of 7.4 × 10⁸ M⊙. The black hole is responsible for producing an ionized wind outflow with a velocity of (−1780 ± 670) km s⁻¹. Approximately every five years the galaxy emits dramatic radio outbursts. References Quasars Pisces (constellation) Seyfert galaxies 000737 1501 Active galaxies
III Zw 2
Astronomy
366
28,882,734
https://en.wikipedia.org/wiki/Ciclopramine
Ciclopramine is a tetracyclic antidepressant (TeCA) that was never marketed. References Abandoned drugs Amines Tetracyclic antidepressants Dibenzazepines
Ciclopramine
Chemistry
47
51,467,390
https://en.wikipedia.org/wiki/List%20of%20computer%20museums
Below is a list of computer museums around the world, organized by continent and country, then alphabetically by location. Asia Israel The Israeli Personal Computer Museum, Haifa Japan IPSJ Computer Museum - A virtual museum by IPSJ, an academic society of information processing in Japan, and affiliated physical computer museums ("satellite museums") all over Japan, such as: KCG Computer Museum, Kyoto - a computer museum by KCG, an education institution Microcomputer Museum in Ōme, Tokyo Tokyo University of Science Museum of Science's "History of the Computer" South Korea Nexon Computer Museum Oceania Australia The Australian Computer Museum Society, Inc, NSW - very large collection The Nostalgia Box, Perth - Video Game Museum Powerhouse Museum - Has Computer Exhibit Monash Museum of Computing History, Monash University New Zealand Techvana, Auckland Europe Belgium Computermuseum NAM-IP, Namur Unisys Computermuseum, Haren (Brussels) Croatia Peek&Poke, Rijeka Czech Republic Technical museum in Brno - Computer Technology Retro Computer, Žatec Muzeum počítačové techniky, Higher Education College Žďár nad Sázavou Arcade Hry, Červený Újezd (arcade video games) Game World, Prague (arcade video games) Denmark Dansk Datahistorisk Forening, Hedehusene Estonia Arvutimuuseum, Tallinn Finland Rupriikki Media Museum, Tampere Finnish Museum of Games, Tampere France ACONIT, Grenoble , Paris FEB, Angers Musée des Arts et Métiers, Paris Musée de l'imprimerie, Lyon AMISA : Association pour un Musée de l'informatique, Sophia Antipolis INRIA : Institut national de recherche en Informatique et Automatique, Montbonnot-Saint-Martin Silicium, Toulouse Germany Computerspielemuseum Berlin, Berlin - Video Game Museum BINARIUM, Dortmund - Video Game and Personal Computer Museum Heinz Nixdorf MuseumsForum, Paderborn Computermuseum der Fakultät Informatik, University of Stuttgart Oldenburger Computer-Museum, Oldenburg Computeum, Vilshofen, with a selection from the Munich Computer Warehouse, Private Collection Deutsches Museum, Munich - Large computer collection in their Communications exhibit technikum29 living museum, Frankfurt - Re-opened in January 2020. Computerarchiv Muenchen, Munich - Computer, Video Games and Magazine Archive Computermuseum der Fachhochschule Kiel, Kiel Analog Computer Museum, Bad Schwalbach / Hettenhain - Large collection of analog computers, working and under restoration.
Greece Hellenic IT Museum, in Athens Ireland Computer and Communications Museum of Ireland, National University of Ireland Italy Museo dell'Informatica Funzionante, Palazzolo Acreide (Siracusa) Museo del Computer, via per Occhieppo, 29, 13891 Camburzano (Biella) Museo Interattivo di Archeologia Informatica, Cosenza UNESCO Computer Museum, Padova All About Apple Museum, Savona Piedmontese Museum of Informatics, Turin VIGAMUS, Rome - Video Game Museum Tecnologic@mente, Ivrea Museo degli strumenti per il calcolo, Pisa Lithuania Retrobytes cafe - vintage computers gallery, Kaunas The Netherlands Bonami SpelComputer Museum, Zwolle Computer Museum Universiteit van Amsterdam, Amsterdam Computermuseum Hack42, Arnhem HomeComputerMuseum, Helmond Rotterdams Radio Museum, Rotterdam Poland Muzeum Historii Komputerów i Informatyki, Katowice Muzeum Gry i Komputery Minionej Ery (Muzeum Gier), Wrocław Apple Muzeum Polska, Piaseczno Portugal LOAD ZX Spectrum Museum, Cantanhede Museu Faraday, IST - Instituto Superior Técnico, Lisboa Nostalgica - Museu de Videojogos e tecnologia, Lisboa Museu dos Computadores Inforap, Braga Museu Virtual da Informática, Universidade do Minho, Braga Museu das Comunicações, Lisboa Museu Nacional de História Natural e da Ciência - Universidade de Lisboa, Lisboa Russia Museum of Soviet Arcade Machines, Moscow Yandex Museum, Moscow Yandex Museum, Saint-Petersburg Moscow Apple Museum Antimuseum of Computers and Games, Yekaterinburg Slovenia Computer History Museum Slovenia, Ljubljana Slovakia Computer Museum SAV, Bratislava Spain Computer Museum Garcia Santesmases (MIGS), Complutense University Museum of Informatics, Polytechnic University of Valencia Museo de la Historia de la Computacion, Cáceres Sweden Dalby Datormuseum, Dalby on the island Aspö north of Strängnäs Switzerland Musée Bolo, Lausanne Enter Museum, Solothurn Ukraine Software & Computer Museum, Kyiv, Kharkiv United Kingdom Northwest Computer Museum, Leigh, Greater Manchester The National Museum of Computing, Bletchley Park The Centre for Computing History, Cambridge Retro Computer Museum, Leicester Science Museum, London, London National Archive for the History of Computing, University of Manchester National Videogame Arcade, Nottingham The Computing Futures Museum, Staffordshire University - In association with the BCS Museum of Computing, Swindon Time Line Computer Archive, Wigton The Micro Museum, Ramsgate Home Computer Museum, Hull IBM Hursley Museum, Hursley Derby Computer Museum The ICL Computer Museum - ICL and related items, computers, paperwork and software, from companies that made up ICL. See also: Computer Conservation Society North America Canada Ontario Personal Computer Museum, Brantford University of Waterloo Computer Museum, Waterloo, Ontario Vintage Computer Museum, Toronto, Ontario York University Computer Museum or YUCoM, York University Quebec EMusée, Montreal iMusée, Montreal Saskatchewan University of Saskatchewan Computer Museum United States Arizona Southwest Museum of Engineering, Communications and Computation, Glendale, Arizona California Computer History Museum, Mountain View, California DigiBarn Computer Museum, Boulder Creek, California Museum of Art and Digital Entertainment, Oakland, California The Tech Museum of Innovation, San Jose, California Intel Museum, Santa Clara, California D.C. Smithsonian National Museum of American History, Washington, D.C. 
Georgia Computer Museum of America, Roswell, Georgia Museum of Technology at Middle Georgia State University, Macon, Georgia Kansas The Topeka Computing Museum, Topeka, Kansas - Now being liquidated, online archive only. Maryland System Source Computer Museum, Hunt Valley, Maryland Minnesota Charles Babbage Institute, University of Minnesota Montana American Computer & Robotics Museum, Bozeman, Montana New Jersey Vintage Computer Federation Museum, Wall, New Jersey New York The Strong, International Center for the History of Electronic Games, Rochester, NY - Focus on Retrogaming but many games are on vintage personal computers. Pennsylvania Kennett Classic Computer Museum, Kennett Square, Pennsylvania Large Scale Systems Museum, Pittsburgh, Pennsylvania The Computer History Learning Center aka The Computer Church, Parkesburg, Pennsylvania Rhode Island Rhode Island Computer Museum, Warwick, Rhode Island Texas National Videogame Museum, Frisco, Texas Virginia U.Va. Computer Museum, University of Virginia Virginia Computer Museum Washington Living Computers: Museum + Labs, Seattle, Washington Microsoft Visitor Center, Redmond, Washington Wisconsin Chippewa Falls Museum of Industry and Technology, Chippewa Falls, Wisconsin - Exhibit offering "Seymour Cray and The Supercomputer" South America Argentina Espacio TEC, Bahia Blanca Museo de Informática UNPA-UARG, Río Gallegos Museo de Informática de la República Argentina - Fundación ICATEC (closed), Ciudad Autónoma de Buenos Aires Brazil Museu Capixaba do Computador, Vitória/ES Museu do Computador, São Paulo/SP Online The ICL Computer Museum (UK) MV Museu de Tecnologia (Brazil) Old Computer Museum San Diego Computer Museum - Physical objects were donated to the San Diego State University Library, but still does online exhibits Obsolete Computer Museum Old-Computers.com HP Computer Museum Early Office Museum IBM Archives EveryMac.com Bitsavers.org - Software and Document Archive TAM (The Apple Museum) - Apple Computers and Products Rewind Museum - Virtual museum with traveling physical exhibits The Computer Collector New Computer Museum IPSJ Computer Museum - Computers of Japan Freeman PC Museum FEMICOM Museum - Femininity in 20th century Video games, computers and electronic toys Home Computer Museum Malware Museum - Malware programs from the 1980s and 1990s that have been stripped of their destructive properties. History Computers KASS Computer Museum - A computer history museum & private collection Russian Virtual Computer Museum - a history of Soviet Computers from the late 1940s Soviet Digital Electronics Museum - a museum of Soviet electronic calculators, PCs and some other devices Development of Computer Science and Technologies in Ukraine - Ukrainian virtual Computer Museum Spectrum Generation collection, supporting the LOAD ZX Spectrum Museum in Portugal Home Computer Museum UK Vintage Mac Museum - Now a part of the American Computer & Robotics Museum in Bozeman, Montana. See also Computer museum List of video game museums References History of computing Lists of museums by subject
List of computer museums
Technology
1,902
39,470,026
https://en.wikipedia.org/wiki/Gilbert%20LaFreniere
Gilbert LaFreniere (born 1934 in New York) is an American ecological philosopher, active in the study of geology, ecology, and human impact upon nature. Biography LaFreniere attended the University of Massachusetts Amherst and earned a master's degree in geology from Dartmouth College before completing a Ph.D. in intellectual history at the University of California, Santa Barbara (UCSB) in 1976. LaFreniere taught geology, environmental ethics, and environmental history for more than twenty-five years at Willamette University in Salem, Oregon. He remains an active Professor Emeritus and continues to lecture on the transformation of natural landscapes by man, appearing in recent years at Willamette University, Portland State University, and Oregon State University. Many of his lectures rely heavily on his own travels and photography of the national parks of Europe, New England, California, the Pacific Northwest, and Canada. LaFreniere is also a noted scholar of the work and thought of the French philosopher Jean-Jacques Rousseau and of the idea of progress, and appears in the card catalog of the Rousseau Library in Montmorency, France. His most recent book is The Decline of Nature (2008). Among his other publications are the book Jean Jacques Rousseau and the Idea of Progress (1976) and articles in Environmental History Review, Agriculture and Human Values, and The Trumpeter. See also Environmental ethics Natural philosophy List of environmental philosophers List of historians of science References External links 1935 births Living people American historians of science Intellectual historians Environmental historians Willamette University faculty Activists from New York City Environmental ethicists
Gilbert LaFreniere
Environmental_science
315
12,139,540
https://en.wikipedia.org/wiki/Spirotryprostatin%20A
Spirotryprostatin A is an indolic alkaloid from the 2,5-diketopiperazine class of natural products, found in the fungus Aspergillus fumigatus. Spirotryprostatin A and several other indolic alkaloids (including spirotryprostatin B, as well as other tryprostatins and cyclotryprostatins) have been found to have anti-mitotic properties, and as such they have become of great interest as anti-cancer drugs. Because of this, the total synthesis of these compounds is a major pursuit of organic chemists, and a number of different syntheses have been published in the chemical literature. One such total synthesis, published in 1999, showed that the isopropylidene side chain is not necessary for the biological activity of the compound, leading to a number of newly proposed analogues. References Alkaloids Diketopiperazines Heterocyclic compounds with 3 rings Heterocyclic compounds with 2 rings Methoxy compounds
Spirotryprostatin A
Chemistry
224
34,523,717
https://en.wikipedia.org/wiki/Mike%20Cannon-Brookes
Michael Cannon-Brookes (born 17 November 1979) is an Australian businessman who is the co-founder and chief executive officer of the software company Atlassian. Since 2018, he has been involved in the Australia-Asia Power Link, a large-scale electricity infrastructure project to be developed in the Northern Territory by Sun Cable in collaboration with the Australian businessman Andrew "Twiggy" Forrest. Early life and education Michael Cannon-Brookes was born on 17 November 1979 in Connecticut, US. The son of a global banking executive, also named Mike, and his wife Helen, he is the youngest of three siblings, with two sisters. His family relocated to Taiwan when he was six months old, and to Hong Kong when he was three; he later attended boarding school in England. He attended Cranbrook School in Sydney, and graduated from the University of New South Wales with a bachelor's degree in information systems on a UNSW co-op scholarship. Career Before founding Atlassian, Cannon-Brookes co-founded an internet bookmark management tool called The Bookmark Box with his university classmate Niki Scevak. The Bookmark Box was sold to Blink.com in 2000. Cannon-Brookes co-founded Atlassian, a collaboration software company, with Scott Farquhar. The pair started the company in 2002, shortly after graduating from university, funding it with credit cards. They have said they founded Atlassian with the aim of earning the then-typical graduate starting salary of A$48,000 at the big corporations without having to work for someone else. Their first major Atlassian product was Jira, an issue- and project-tracking software. They decided to forgo the expense of hiring sales people, and instead spent their time and money on building a good product and selling it at a more affordable price via the Atlassian website. As of 2016, the company still did not have a traditional sales force, investing instead in research and development. In 2005, they opened an office in New York, where most of their clients were. Later in 2005 they moved the U.S. office to San Francisco, which had a much larger pool of relevant technical talent. Their first external funding for Atlassian was a US$60 million round from Accel in 2010. In 2014, they redomiciled the company to the UK, in advance of an initial public offering (IPO). Atlassian made its debut on the Nasdaq stock exchange in December 2015, with a market capitalisation of $4.37 billion. The IPO made Cannon-Brookes and Farquhar Australia's first tech startup billionaires and household names in Australia. Cannon-Brookes and Farquhar redomiciled Atlassian to the United States in 2022. Since September 2024, Cannon-Brookes has been the sole CEO of the company, after Farquhar stepped down as co-CEO. He owns approximately 20% of Atlassian, with super-voting shares. Other activities Cannon-Brookes started Grok Ventures in 2016 as a family office. He is a major investor in green projects through his private investing vehicle. In October 2021, he pledged to donate and invest $1.5 billion in climate projects by 2030 to reinforce the COP26 goal of limiting global warming to 1.5 degrees above pre-industrial levels. He also formed a climate fund called Boundless Earth in 2022. Cannon-Brookes is an adjunct professor at the University of New South Wales' School of Computer Science and Engineering. He is also the chairman of Blackbird Ventures, a venture capital firm. 
In September 2020, it was revealed that Cannon-Brookes was among 35,000 Australians on a Chinese Government "Overseas Key Individuals Database" of prominent international individuals of interest for China. In March 2022, Cannon-Brookes and the billionaire Andrew Forrest invested in the Sun Cable project, to build a solar and battery farm 12,000 hectares (120 km2) in size at Powell Creek, Northern Territory, and a power cable to link it to Singapore (via Indonesia), leaving Australia at Murrumujuk beach. In January 2023, Sun Cable went into administration owing to disagreements between Cannon-Brookes and Forrest, and in May 2023, Grok Ventures outbid Forrest and others to buy the liquidated company. In 2022, Cannon-Brookes became the largest shareholder of the Australian publicly listed energy company AGL, Australia's largest greenhouse gas emitter, in a move to force the company to de-carbonise more quickly. Sports In December 2020, Cannon-Brookes bought a minority stake in the NBA team Utah Jazz, along with Qualtrics co-founder Ryan Smith. In November 2021, he bought a one-third share of Blackcourt League Investments, which owns 75% of the Australian rugby league team the South Sydney Rabbitohs. Personal life Cannon-Brookes married American fashion designer Annie Todd in 2010, and they have four children together. The couple first met at a Qantas lounge while flying from Sydney to San Francisco. Cannon-Brookes and Todd lived in Sydney's eastern suburbs. In 2018 they bought Fairwater, Australia's most expensive house, for approximately A$100 million, next door to Scott Farquhar's A$71 million Point Piper harbourside mansion, Elaine. Cannon-Brookes also acquired the 1923-built heritage residence Verona, designed by architect Leslie Wilkinson and located in Double Bay, for A$17 million. The house previously belonged to New Zealand philanthropist Pat Goodman. Prior to that, in 2016, Cannon-Brookes had bought the A$7.05 million SeaDragon house, built in 1936, also designed by Wilkinson and updated by architect Luigi Rosselli. His Centennial Park home sold for A$16.5 million. In 2019 he purchased a house near Fairwater for A$12 million. Cannon-Brookes separated from his wife Annie in July 2023. Recognition Cannon-Brookes and Farquhar were recognised as Ernst & Young's 2006 Australian Entrepreneur of the Year. He is a member of The Forum of Young Global Leaders. In 2023, he was included in Time's TIME100 Climate list. Net worth In 2016, Forbes listed him among Australia's 50 richest people with an estimated net worth of US$1.69 billion; the BRW Rich 200 put it at A$2.00 billion; and the Sunday Times Rich List at £906 million. In May 2023, the Australian Financial Review estimated his net worth at A$19.01 billion. Meanwhile, in 2021, his net worth was assessed at US$13.7 billion by Forbes and at US$11.2 billion by Bloomberg. See also List of NRL club owners References External links 1979 births Atlassian people Australian billionaires Businesspeople from Sydney Living people People educated at Cranbrook School, Sydney Rugby league chairmen and investors Rugby league people in Australia South Sydney Rabbitohs University of New South Wales alumni Utah Jazz owners Australian company founders Energy company founders Australian sports owners
Mike Cannon-Brookes
Technology
1,414
25,763,710
https://en.wikipedia.org/wiki/Philip%20Kim%20%28physicist%29
Philip Kim (born 1968) is a South Korean physicist. He is a condensed matter physicist known for his studies of quantum transport in carbon nanotubes and graphene, including observations of quantum Hall effects in graphene. Academic career Kim studied physics at Seoul National University, earning his bachelor's degree in 1990 and a master's degree in 1992, and received a doctorate in applied physics from Harvard University in 1999 under the supervision of Charles Lieber. He worked at the University of California, Berkeley as a Miller Research Fellow until 2001, when he joined the faculty at Columbia University, where much of his seminal work was carried out. He moved to Harvard University in 2014 as a professor of physics and applied physics. Research Kim and coworkers have made important contributions in the field of nanoscale low-dimensional materials. In 1999, he and Lieber published a highly cited paper on electrostatically controlled carbon nanotube NEMS devices. In February 2005, his group at Columbia reported electrical measurements of thin graphite films produced by an atomic force microscope technique. In September 2005, they reported observation of the quantum Hall effect in single graphene layers simultaneously with the group of Andre Geim, and in 2007, the two groups jointly published observations of the quantum Hall effect in graphene at room temperature. Kim's group authored an influential paper in 2007 describing a transport gap introduced by lithographic patterning of graphene to form nanoribbons. This was an important proof of principle in the development of graphene electronics, as it allowed on-off switching of the graphene devices by a factor of 1000 at low temperature. In February 2009, his group and coworkers synthesized large-scale graphene films by the chemical vapor deposition (CVD) method. They showed that the quality of CVD-grown graphene is comparable to that of mechanically cleaved graphene, as demonstrated by the observation of the half-integer quantum Hall effect in CVD-grown graphene. The group reported observation of the fractional quantum Hall effect in suspended graphene in November 2009. Honors and awards Kim received a National Science Foundation Early Career Development Award in 2004. In 2006, he was named as one of the "Scientific American 50", a list of individuals/organizations honored for their contributions to science and society during the preceding year. Kim was awarded the 2008 Ho-Am Prize in Science "for his pioneering work on low-dimensional carbon nanostructures". He received an IBM Faculty award in 2009. In 2011, Kim won the Dresden Barkhausen Award. In his Nobel Prize lecture, Andre Geim acknowledged the contribution of Philip Kim, saying, "I owe Philip a great deal for this, and many people heard me saying – before and after the Nobel Prize – that I would be honored to share it with him." References External links 1968 births Living people People from Seoul Columbia University faculty Experimental physicists Condensed matter physicists Harvard University alumni Harvard University faculty Recipients of the Ho-Am Prize in Science Seoul National University alumni South Korean physicists University of California, Berkeley fellows Oliver E. Buckley Condensed Matter Prize winners Fellows of the American Physical Society Benjamin Franklin Medal (Franklin Institute) laureates
Philip Kim (physicist)
Physics,Materials_science
638
54,626,282
https://en.wikipedia.org/wiki/GenX
GenX is a Chemours trademark name for a synthetic, short-chain organofluorine chemical compound, the ammonium salt of hexafluoropropylene oxide dimer acid (HFPO-DA). It can also be used more informally to refer to the group of related fluorochemicals that are used to produce GenX. DuPont began the commercial development of GenX in 2009 as a replacement for perfluorooctanoic acid (PFOA, also known as C8), in response to legal action due to the health effects and ecotoxicity of PFOA. Although GenX was designed to be less persistent in the environment compared to PFOA, it has proven to be a "regrettable substitute". Its effects may be as harmful as, or even more detrimental than, those of the chemical it was meant to replace. GenX is one of many synthetic organofluorine compounds collectively known as per- and polyfluoroalkyl substances (PFASs). Uses The chemicals are used in products such as food packaging, paints, cleaning products, non-stick coatings, outdoor fabrics and firefighting foam. The chemicals are manufactured by Chemours, a corporate spin-off of DuPont, in Fayetteville, North Carolina. GenX chemicals are used as replacements for PFOA in manufacturing fluoropolymers such as Teflon; they serve as surfactants and processing aids in the fluoropolymer production process, lowering the surface tension and allowing the polymer particles to grow larger. The GenX chemicals are then removed from the final polymer by chemical treatment and heating. Chemistry The manufacturing process combines two molecules of hexafluoropropylene oxide (HFPO) to form HFPO-DA. HFPO-DA is converted into its ammonium salt, which is the official GenX compound. The chemical process uses 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoic acid (FRD-903) to generate ammonium 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoate (FRD-902) and heptafluoropropyl 1,2,2,2-tetrafluoroethyl ether (E1). When GenX contacts water, it releases the ammonium group to become HFPO-DA. Because HFPO-DA is a strong acid, it deprotonates into its conjugate base, which can then be detected in the water. Pollution In North Carolina, the Chemours Fayetteville plant released GenX compounds into the Cape Fear River, which is a drinking water source for the Wilmington area. A documentary film, The Devil We Know; a fictional dramatization, Dark Waters; and a nonfiction memoir, Exposure: Poisoned Water, Corporate Greed, and One Lawyer's Twenty-Year Battle Against DuPont by Robert Bilott, subsequently publicized the discharges, leading to controversy over possible health effects. HFPO-DA was first reported to be in the Cape Fear River in 2012, and an additional eleven polyfluoroalkyl substances (PFAS) were reported in 2014. These results were published as a formal paper in 2015. The following year, North Carolina State University and the EPA jointly published a study demonstrating that HFPO-DA and other PFAS were present in the Wilmington-area drinking water sourced from the Cape Fear River. In September 2017, the North Carolina Department of Environmental Quality (NCDEQ) ordered Chemours to halt discharges of all fluorinated compounds into the river. Following a chemical spill one month later, NCDEQ cited Chemours for violating provisions in its National Pollutant Discharge Elimination System wastewater discharge permit. In November 2017, the Brunswick County Government filed a federal lawsuit alleging that DuPont failed to disclose research regarding potential risks from the chemical. 
In spring 2018, Cape Fear River Watch sued Chemours for Clean Water Act violations and sued the NCDEQ for inaction. After Cape Fear River Watch's suits were filed, NCDEQ filed a suit against Chemours, and all three lawsuits culminated in a consent order. The order, signed by all three parties, requires Chemours to drastically reduce PFAS-containing water discharges and air emissions, and to provide sampling and filtration for owners of contaminated wells, among other requirements. All materials relating to the status of the consent order requirements must be published to a public website, https://www.chemours.com/en/about-chemours/global-reach/fayetteville-works/compliance-testing. One requirement under the order was for non-targeted analysis, which found 257 "unknown" PFAS being released from Fayetteville Works (aside from the 100 "known" PFAS which can be quantified). Cape Fear River Watch published that its research of the NCDEQ permit file indicates that the first PFAS byproducts were likely released from Fayetteville Works in 1976 with the production of Nafion, which uses HFPO (otherwise known as GenX) in production and creates byproducts termed Nafion byproducts 1 through 5, some of which have been found in the blood of Cape Fear area residents. In 2020 Michigan adopted drinking water standards for five previously unregulated PFAS compounds, including HFPO-DA, which has a maximum contaminant level (MCL) of 370 parts per trillion (ppt). Two previously regulated PFAS compounds, PFOA and PFOS, had their acceptable limits lowered to 8 ppt and 16 ppt respectively. In 2022, Virginia's Roanoke River was reported to be contaminated by GenX at levels of 1.3 million parts per trillion. Health effects GenX has been shown to cause a variety of adverse health effects. While it was originally marketed as a safer alternative to legacy PFAS, research suggests that GenX poses significant health risks similar to those associated with its predecessor. Liver and kidney toxicity Studies have demonstrated that the liver is especially vulnerable to GenX exposure. Animal research has shown that even low doses of GenX can cause liver enlargement and damage. Similarly, the kidneys are also sensitive to GenX, with chronic exposure leading to renal toxicity. These effects highlight the potential dangers of prolonged exposure to even small amounts of the chemical. Cancer risk There is increasing concern about the carcinogenic potential of GenX. Research in animal models has linked exposure to various cancers, including liver, pancreatic, and testicular cancers. Although data on humans are limited, the results from these studies have prompted further investigation into the possible cancer risks posed by GenX. Neurotoxicity and developmental effects Two 2023 studies have identified potential neurotoxic effects of GenX, particularly during critical developmental windows. Pre-differentiation exposure of human dopaminergic-like neurons (SH-SY5Y cells) to low-dose GenX (0.4 and 4 μg/L) resulted in persistent alterations in neuronal characteristics. The study reported significant changes in nuclear morphology, chromatin arrangement, and increased expression of the repressive marker H3K27me3, which is associated with neurodegeneration. These changes were accompanied by disruptions in mitochondrial function and an increase in intracellular calcium levels, which are critical markers of neuronal health. Notably, GenX exposure led to altered expression of α-synuclein, a protein closely linked to the development of Parkinson's disease. 
The findings suggest that developmental exposure to GenX may pose a long-term risk for neurodegenerative disorders, particularly Parkinson's disease, due to its impact on key neuronal processes. Recent research has also underscored the potential for GenX to disrupt glucose and lipid metabolism during critical developmental periods. A 2021 study published in Environment International investigated the effects of prenatal exposure to GenX in Sprague-Dawley rats, revealing significant maternal and neonatal adverse outcomes, such as increased maternal liver weight, altered lipid profiles, and reduced glycogen accumulation in neonatal livers, resulting in hypoglycemia. Additionally, neonatal mortality and lower birth weights were observed at higher doses of GenX. A 2024 study in Science of the Total Environment expanded upon these findings in mice, demonstrating that gestational exposure to GenX led to increased liver weight, elevated liver enzyme levels (e.g., ALT and AST), and decreased glycogen storage capacity in the liver. Disruptions in gut flora and the intestinal mucosal barrier were also noted, further linking GenX exposure to hepatotoxicity. Both studies revealed significant alterations in gene expression, particularly in pathways regulating glucose and lipid metabolism. Genes such as CYP4A14, Sult2a1, and Igfbp1 were upregulated, which may have long-term implications for metabolic health. These findings suggest that gestational GenX exposure could trigger metabolic disorders and liver toxicity, posing potential health risks for populations exposed to GenX through contaminated water sources. Immune system and metabolic effects Studies have demonstrated that exposure to GenX, a replacement for long-chain PFAS chemicals, can lead to complex health effects. GenX has been linked to alterations in immune responses and metabolic processes, as observed in both human and animal studies. For instance, in a study using Monodelphis domestica, GenX exposure upregulated genes associated with inflammation and fatty acid transport. Another study on mice showed that GenX suppressed innate immune responses to inhaled carbon black nanoparticles, while simultaneously promoting lung cell proliferation, including macrophages and epithelial cells. These findings suggest that GenX may have immunosuppressive effects, potentially increasing susceptibility to respiratory agents while encouraging cellular growth in the lungs, raising concerns about respiratory health risks. This research highlights the potential health implications of GenX exposure, particularly its impact on immune system function and cell proliferation, which may contribute to both immune suppression and adverse health outcomes like inflammation or respiratory diseases. These findings raise concerns about the long-term impact on human health, especially in vulnerable populations. Drinking water health advisories In June 2022, the U.S. Environmental Protection Agency (EPA) published drinking water health advisories, which are non-regulatory technical documents, for GenX and PFBS. The lifetime health advisories and health effects support documents assist federal, state, tribal, and local officials and managers of drinking water systems in protecting public health when these chemicals are present in drinking water. The EPA has listed recommended steps that consumers may take to reduce possible exposure to GenX and other PFAS chemicals. 
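The contaminant levels above are all quoted in parts per trillion, which can be hard to picture. The short sketch below converts ppt to mass-per-volume units and compares the reported Roanoke River level with the Michigan MCL; it assumes mass-based ppt in water with a density of about 1 kg/L, an approximation of mine rather than something taken from the regulatory texts.

```python
# Hedged back-of-the-envelope helper: converts parts-per-trillion (by mass)
# in water to ng/L, using the approximation that 1 L of water weighs ~1 kg.
# Illustrative only; formal regulatory definitions may differ slightly.

def ppt_to_ng_per_liter(ppt: float) -> float:
    """1 ppt (mass/mass) in water is ~1 ng/L: 1e-12 * 1000 g/L * 1e9 ng/g."""
    return ppt * 1.0  # numerically 1:1 for dilute aqueous solutions

michigan_mcl = ppt_to_ng_per_liter(370)          # ~370 ng/L (0.37 ug/L)
roanoke_level = ppt_to_ng_per_liter(1_300_000)   # ~1.3e6 ng/L = 1.3 mg/L

print(f"Michigan MCL: {michigan_mcl:.0f} ng/L")
print(f"Reported Roanoke River level: {roanoke_level / 1e6:.1f} mg/L, "
      f"about {roanoke_level / michigan_mcl:.0f}x the Michigan MCL")
```

On these assumptions the reported Roanoke figure works out to roughly 1.3 mg/L, on the order of 3,500 times the Michigan MCL.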
See also Perfluorinated alkylated substances (PFAS) Timeline of events related to per- and polyfluoroalkyl substances (PFAS) References Chemical processes Chemours DuPont products Pollutants
GenX
Chemistry
2,301
37,848,383
https://en.wikipedia.org/wiki/Opera%20Mobile%20Store
Opera Mobile Store was a platform-independent browser-based app store for mobile-phone owners and a digital application distribution platform used by more than 40,000 developers. It was owned and maintained by Opera. Launched by a third-party provider in March 2011, the Opera Mobile Store was relaunched on a new platform after the acquisition of Handster, a mobile app store platform company, in January 2012. The service allowed users to browse and download applications for over 7,500 different devices on Android, Java, BlackBerry OS, Symbian, iOS, and Windows Mobile. Opera Mobile Store had more than 1 million app downloads a day and was accessible from any mobile phone via a bookmark or the Start Page of the Opera Mini or Opera Mobile browsers. The store was also accessible from any other browser, including desktop versions, and on any operating system, from https://apps.opera.com/. The Opera Mobile Store then forwarded the user to the country-specific store based on the location information provided by the mobile carrier or ISP. Over 86% of the applications for Android-powered phones provided by the Opera Mobile Store were free of charge. The average free-vs.-paid-apps ratio for all platforms was 70% and 30%, respectively. These applications were generally targeted on a particular category, including video games, business applications, social media apps, e-books, and others. History On March 8, 2011, Opera Software announced the launch of the Opera Mobile Store, powered by a third-party provider. On September 19, 2011, Opera Software acquired the app store platform company Handster, which was the leading independent applications store for the Android market at that time. Following the acquisition of Handster, Opera Software made the first major overhaul of the store since its launch. Opera Software announced a revamped Opera Mobile Store with improved distribution and monetization capabilities for developers, and better customization for white-label marketplaces, at the Mobile World Congress in Barcelona on February 27, 2012. In July 2013, as a result of agreements between the two companies, it became possible for developers to upload apps to both the Yandex Store and the Opera Mobile Store simultaneously. On November 18, 2014, Microsoft and Opera Software signed an agreement to replace the Nokia Store with the Opera Mobile Store as the default app store for Nokia feature phones, Symbian and Nokia X smartphones. The process of migrating customers from the Nokia Store to the Opera Mobile Store was expected to be complete in the first half of 2015, at which point the Nokia Store would be closed. Number of downloaded applications As stated in the official Opera Software press release, by February 27, 2012, the Opera Mobile Store had reached more than 30 million monthly app store visits and over 45 million monthly app downloads. The number of apps in the store tripled over the year 2013, while the number of monthly visitors to Opera's app distribution platform grew to 75 million in October 2013. At the end of Q4 2013, Opera Software announced that it had reached a new milestone of 105 million monthly visitors, a 172% increase in the year since the close of 2012. The top three countries in 2013 based on the number of Opera Mobile Store users were India, the United States and Indonesia. By November 2014, Opera's app store offered close to 300,000 mobile apps and games across most mobile platforms, up from over 200,000 apps in early 2014, a significant growth in the number of apps available. 
Most popular apps Top 100 apps lists were compiled separately for each country with an Opera Mobile Store presence. The lists featured 17 categories, ranging from business apps to games, for every platform supported by the Opera Mobile Store, consisting of the most popular apps among the store users in the selected category. Top 20 most downloaded paid and free apps across all countries, platforms, and categories (March 2015) Application ratings The Opera Mobile Store did not rate applications; instead, the OMS team either accepted or rejected apps submitted for inclusion in the store, based on whether their content was appropriate for a wide age group, keeping objectionable apps out of the store. App submission process The Opera Mobile Store did not charge developers for joining the mobile store's distribution program and publishing apps. Developers got 70% of net revenue from sales of paid apps for Android accepted into the Opera Mobile Store, and 50% of net revenue for Java apps. App approval process Applications were subject to approval by the Opera Mobile Store team, which performed basic reliability testing on all the platforms and devices declared by the developer in the submission process. The Opera Mobile Store did not accept adult and sexual content. Opera Top App Awards Every year the Opera Mobile Store ran the Top Apps Awards. These awards recognized the best apps in the Opera Mobile Store across multiple categories based on popular user voting. Partnerships In 2013 the Opera Mobile Store signed a cross-app-store distribution partnership with Yandex, the largest search engine in the Russian-speaking Internet market and the leading search engine in Eastern Europe, based on audience reach. This agreement allowed apps in the Opera Mobile Store also to be available for download in the Yandex app store and vice versa. The same year Opera Software signed several more partnership agreements with mobile carriers in Eastern Europe. Both MTS Belarus and MTS Ukraine launched Opera co-branded app stores for their 28 million total user base. Opera Subscription Mobile Store Starting July 21, 2014, Opera Software began to offer a service that provided mobile carriers' customers with unlimited access to a premium games and apps catalogue built on the Opera Mobile Store technology and supported by the OMS team. The first carrier to implement the app store subscription model for its customers was MTS Russia, which launched the App Market service together with Opera. In this model mobile users paid a weekly subscription fee for "all-you-can-eat" access after a free seven-day trial period. During 2014 Opera Software launched subscription mobile stores with MTS Russia, MTS Ukraine, MTS Belarus, TIM Brasil and XL Axiata in Indonesia. One Platform Foundation In May 2013, Opera and Yandex, together with SlideME and CodeNgo, announced the launch of, and provided support for, a new open-source One Platform Foundation (OPF) initiative, enabling developers to easily code and submit their apps across multiple alternative app stores. Similar services References External links Official website Official Opera Mobile Store developer site Software distribution agreement Opera Mobile Store team's official blog Opera Mobile Store: From the Browser Veterans Mobile software distribution platforms
Opera Mobile Store
Technology
1,292
11,254,647
https://en.wikipedia.org/wiki/Delayed%20coker
A delayed coker is a type of coker whose process consists of heating a residual oil feed to its thermal cracking temperature in a furnace with multiple parallel passes. This cracks the heavy, long-chain hydrocarbon molecules of the residual oil into coker gas oil and petroleum coke. Delayed coking is one of the unit processes used in many oil refineries. The adjacent photograph depicts a delayed coking unit with 4 drums. However, larger units have tandem pairs of drums, some with as many as 8 drums, each of which may have diameters of up to 10 meters and overall heights of up to 43 meters. The yield of coke from the delayed coking process ranges from about 18 to 30 percent by weight of the feedstock residual oil, depending on the composition of the feedstock and the operating variables. Many refineries worldwide produce as much as 2,000 to 3,000 tons per day of petroleum coke and some produce even more. Schematic flow diagram and description The flow diagram and description in this section are based on a delayed coking unit with a single pair of coke drums and one feedstock furnace. However, as mentioned above, larger units may have as many as 4 pairs of drums (8 drums in total) as well as a furnace for each pair of coke drums. Residual oil from the vacuum distillation unit (sometimes including high-boiling oils from other sources within the refinery) is pumped into the bottom of the distillation column called the main fractionator. From there, it is pumped, along with some injected steam, into the fuel-fired furnace and heated to its thermal cracking temperature of about 480 °C. Thermal cracking begins in the pipe between the furnace and the first coke drum, and finishes in the coke drum that is on-stream. The injected steam helps to minimize the deposition of coke within the furnace tubes. Pumping the incoming residual oil into the bottom of the main fractionator, rather than directly into the furnace, preheats the residual oil by having it contact the hot vapors in the bottom of the fractionator. At the same time, some of the hot vapors condense into a high-boiling liquid which recycles back into the furnace along with the hot residual oil. As cracking takes place in the drum, gas oil and lighter components are generated in the vapor phase and separate from the liquid and solids. The drum effluent is vapor except for any liquid or solids entrainment, and is directed to the main fractionator, where it is separated into the desired boiling point fractions. The solid coke is deposited and remains in the coke drum in a porous structure that allows flow through the pores. Depending upon the overall coke drum cycle being used, a coke drum may fill in 16 to 24 hours. After the first drum is full of the solidified coke, the hot mixture from the furnace is switched to the second drum. While the second drum is filling, the filled first drum is steamed out to reduce the hydrocarbon content of the petroleum coke, and then quenched with water to cool it. The top and bottom heads of the full coke drum are removed, and the solid petroleum coke is then cut from the coke drum with a high-pressure water nozzle, where it falls into a pit, pad, or sluiceway for reclamation to storage. 
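To make the yield figures above concrete, here is a minimal sketch of the arithmetic; the 10,000 tons/day feed rate and the 25% yield chosen in the example are illustrative assumptions of mine, not values from the text, which quotes only the 18 to 30 percent range.

```python
# A minimal sketch of the coke-yield arithmetic for a delayed coker.
# The 18-30 wt% yield range comes from the text; the feed rate and the
# 25% midpoint used below are hypothetical, for illustration only.

def coke_production(feed_tons_per_day: float, yield_fraction: float) -> float:
    """Petroleum coke produced (tons/day) for a given residual-oil feed rate."""
    if not 0.18 <= yield_fraction <= 0.30:
        raise ValueError("text quotes coke yields of roughly 18-30 wt% of feed")
    return feed_tons_per_day * yield_fraction

# e.g., a unit fed 10,000 tons/day of vacuum residue at a 25% coke yield:
print(coke_production(10_000, 0.25))  # -> 2500.0 tons/day of green coke
```

At these assumed numbers the output lands squarely in the 2,000 to 3,000 tons per day range the text cites for large refineries.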
Composition of coke The table below illustrates the wide range of compositions for raw petroleum coke (referred to as green coke) produced in a delayed coker and the corresponding compositions after the green coke has been calcined at 2375 °F (1302 °C): History Petroleum coke was first made in the 1860s in the early oil refineries in Pennsylvania, which boiled oil in small, iron distillation stills to recover kerosene, a much-needed lamp oil. The stills were heated by wood or coal fires built underneath them, which over-heated and coked the oil near the bottom. After the distillation was completed, the still was allowed to cool and workmen could then dig out the coke and tar. In 1913, William Merriam Burton, working as a chemist for the Standard Oil of Indiana refinery at Whiting, Indiana, was granted a patent for the Burton thermal cracking process that he had developed. He was later to become the president of Standard Oil of Indiana before he retired. In 1929, based on the Burton thermal cracking process, Standard Oil of Indiana built the first delayed coker. It required very arduous manual decoking. In the late 1930s, Shell Oil developed hydraulic decoking using high-pressure water at their refinery in Wood River, Illinois. That made it possible, by having two coke drums, for delayed coking to become a semi-continuous process. From 1955 onwards, the growth in the use of delayed coking increased. As of 2002, there were 130 petroleum refineries worldwide producing 172,000 tons per day of petroleum coke. Included in those worldwide data, about 59 coking units were operating in the United States and producing 114,000 tons per day of coke. Uses of petroleum coke The product coke from a delayed coker has many commercial uses and applications. The largest use is as a fuel. The uses for green coke are: As fuel for space heaters, large industrial steam generators, fluidized bed combustors, integrated gasification combined cycle (IGCC) units and cement kilns In silicon carbide foundries For producing blast furnace coke The uses for calcined coke are: As anodes in the production of aluminium In the production of titanium dioxide As a carbon raiser in cast iron and steel making Producing graphite electrodes and other graphite products such as graphite brushes used in electrical equipment In carbon structural materials Other processes for producing petroleum coke There are other petroleum refining processes for producing petroleum coke, namely the Fluid Coking and Flexicoking processes, both of which were developed and are licensed by ExxonMobil Research and Engineering. The first commercial unit went into operation in 1955. Forty-three years later, as of 1998, there were 18 of these units operating worldwide, of which 6 were in the United States. There are other similar coking processes, but they do not produce petroleum coke. For example, the Lurgi-VZK Flash Coker produces coke by the pyrolysis of biomass. References External links Glossary for cokers and related topics Petroleum production Chemical equipment Oil refineries
Delayed coker
Chemistry,Engineering
1,301
8,005,697
https://en.wikipedia.org/wiki/Allylic%20strain
Allylic strain (also known as A1,3 strain, 1,3-allylic strain, or A-strain) in organic chemistry is a type of strain energy resulting from the interaction between a substituent on one end of an olefin (a synonym for an alkene) with an allylic substituent on the other end. If the substituents (R and R') are large enough in size, they can sterically interfere with each other such that one conformer is greatly favored over the other. Allylic strain was first recognized in the literature in 1965 by Johnson and Malhotra. The authors were investigating cyclohexane conformations, including endocyclic and exocyclic double bonds, when they noticed certain conformations were disfavored due to the geometry constraints caused by the double bond. Organic chemists capitalize on the rigidity resulting from allylic strain for use in asymmetric reactions. Quantifying allylic strain energy The "strain energy" of a molecule is a quantity that is difficult to precisely define, so the meaning of this term can easily vary depending on one's interpretation. Instead, an objective way to view the allylic strain of a molecule is through its conformational equilibrium. Comparing the heats of formation of the involved conformers, an overall ΔHeq can be evaluated. This term gives information about the relative stabilities of the involved conformers and the effect allylic strain has on the equilibrium. Heats of formation can be determined experimentally through calorimetric studies; however, calculated enthalpies are more commonly used due to the greater ease of acquisition. Different methods utilized to estimate conformational equilibrium enthalpy include: the Westheimer method, the homomorph method, and, more simply, using estimated enthalpies of nonbonded interactions within a molecule. Because all of these methods are approximations, reported strain values for the same molecule can vary and should be used only to give a general idea of the strain energy. Olefins The simplest type of molecules which exhibit allylic strain are olefins. Depending on the substituents, olefins maintain varying degrees of allylic strain. In 3-methyl-1-butene, the interactions between the hydrogen and the two methyl groups in the allylic system cause a change in enthalpy equal to 2 kcal/mol. As expected, with an increase in substituent size, the equilibrium enthalpies between rotamers also increase. For example, when examining 4-methyl-2-pentene, which contains an additional allylic methyl group compared to 3-methyl-1-butene, the enthalpy of rotation for the highest-energy conformer increases from 2 kcal/mol to 4 kcal/mol. Cyclic molecules Nonbonded 1,3-diaxial interaction energies are commonly used to approximate strain energy in cyclic molecules, as values for these interactions are available. By taking the difference in nonbonded interactions for each conformer, the equilibrium enthalpy can be estimated. The strain energy for methylidenecyclohexane has been calculated to be 4.5 kcal/mol using estimations for 1,3-diaxial strain (0.9 kcal/mol), methyl/hydrogen allylic strain (1.3 kcal/mol), and methyl/methyl allylic strain (7.6 kcal/mol) values. The strain energy in 1,8-dimethylnaphthalene was calculated to be 7.6 kcal/mol and around 12–15 kcal/mol for 4,5-dimethylphenanthrene. Allylic strain tends to be greater for cyclic molecules compared to olefins, as strain energy increases with increasing rigidity of the system. An in-depth summary of allylic strain in six-membered rings has been presented in a review by F. Johnson. 
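As a rough illustration of how such an enthalpy difference translates into conformer populations, the sketch below applies a two-state Boltzmann distribution to the olefin values quoted above. It assumes the entropy difference between conformers is negligible, so that ΔG ≈ ΔH; that simplification is mine, not the text's.

```python
import math

# Two-state Boltzmann sketch: population of the higher-enthalpy (strained)
# conformer given a conformational enthalpy difference, assuming dG ~ dH.

R = 1.987e-3  # gas constant in kcal/(mol*K)

def minor_conformer_fraction(delta_h_kcal: float, temp_k: float = 298.0) -> float:
    """Fraction of molecules in the higher-enthalpy conformer at equilibrium."""
    k_eq = math.exp(-delta_h_kcal / (R * temp_k))  # K = [minor]/[major]
    return k_eq / (1.0 + k_eq)

# 3-methyl-1-butene: dH ~ 2 kcal/mol -> roughly 3% strained conformer at 298 K
print(f"{minor_conformer_fraction(2.0):.1%}")
# 4-methyl-2-pentene: dH ~ 4 kcal/mol -> roughly 0.1% strained conformer
print(f"{minor_conformer_fraction(4.0):.2%}")
```

The steep drop from about 3% to about 0.1% between the 2 and 4 kcal/mol cases shows why even modest allylic strain can lock a molecule into essentially a single conformer, which is the rigidity that the asymmetric reactions discussed later exploit.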
Influencing factors Several factors influence the energy penalty associated with allylic strain. In order to relieve the strain caused by the interaction between the two methyl groups, a cyclohexane will often adopt a boat or twist-boat conformation, with the boat conformation tending to be the major conformer. The effect of allylic strain on cis alkenes creates a preference for more linear structures. Substituent size The size of the substituents interacting at the 1 and 3 positions of an allylic group is often the largest factor contributing to the magnitude of the strain. As a rule, larger substituents create a larger magnitude of strain. Proximity of bulky groups causes an increase in repulsive Van der Waals forces. This quickly increases the magnitude of the strain. The interactions between the hydrogen and methyl group in the allylic system cause a change in enthalpy equal to 3.6 kcal/mol. The strain energy in this system was calculated to be 7.6 kcal/mol due to interactions between the two methyl groups. Substituent polarity Polarity also has an effect on allylic strain. In terms of stereoselectivity, polar groups act like large, bulky groups. Even though two groups may have approximately the same A values, the polar group will act as though it were much bulkier. This is due to the donor character of the polar group. Polar groups increase the HOMO energy of the σ-system in the transition state. This causes the transition state to be in a much more favorable position when the polar group is not involved in a 1,3-allylic interaction. Hydrogen bonding With certain polar substituents, hydrogen bonding can occur in the allylic system between the substituents. Rather than the strain that would normally occur in the close group proximity, the hydrogen bond stabilizes the conformation and makes it energetically much more favorable. This scenario occurs when the allylic substituent at the 1 position is a hydrogen bond donor (usually a hydroxyl) and the substituent at the 3 position is a hydrogen bond acceptor (usually an ether). Even in cases where the allylic system could conform to put a much smaller hydrogen in the hydrogen bond acceptor's position, it is much more favorable to allow the hydrogen bond to form. Solvents Solvents also have an effect on allylic strain. When used in conjunction with knowledge of the effects of polarity on allylic strain, solvents can be very useful in directing the conformation of a product that contains an allylic structure in its transition state. When a bulky and polar solvent is able to interact with one of the substituents in the allylic group, the complex of the solvent can energetically force the bulky complex out of the allylic strain in favor of a smaller group. Conjugation Conjugation increases the allylic strain because it forces substituents into a configuration that causes their atoms to be in closer proximity, increasing the strength of repulsive Van der Waals forces. This situation occurs most noticeably when a carboxylic acid or ketone is involved as a substituent of the allylic group. A resonance effect in the carboxyl group shifts the C=O double bond to a hydroxy group. The carboxyl group will thus function as a hydroxyl group, which causes a large allylic strain to form and cancels the stabilization effects of the extended conjugation. This is very common in enolization reactions and can be viewed in the figure below under "Acidic Conditions." 
In situations where the molecule can either be in a conjugated system or avoid allylic strain, it has been shown that the molecule's major form will be the one that avoids strain. This has been found via the cyclization in the figure below. Under treatment with perchloric acid, molecule A cyclizes into the conjugated system shown in molecule B. However, the molecule will rearrange (due to allylic strain) into molecule C, making molecule C the major species. Thus, the magnitude of destabilization via the allylic strain outweighs the stabilization caused by the conjugated system. Acidic conditions In cases where an enolization is occurring around an allylic group (usually as part of a cyclic system), A1,3 strain can make the reaction nearly impossible. In these situations, acid treatment would normally cause the alkene to become protonated, moving the double bond to the carboxyl group, changing it to a hydroxy group. The resulting allylic strain between the alcohol and the other group involved in the allylic system is so great that the reaction cannot occur under normal thermodynamic conditions. This same enolization occurs much more rapidly under basic conditions, as the carboxyl group is retained in the transition state and allows the molecule to adopt a conformation that does not cause allylic strain. Application of allylic strain in organic reactions and total synthesis Origin of stereoselectivity of organic reactions from allylic strain When considering allylic strain, one needs to consider the possible conformers and the possible stereoelectronic demand of the reaction. For example, in the conformation of (Z)-4-methylpent-2-ene, the molecule isn't frozen in the favored conformer but rotates about the dihedral angle by around 30° at a cost of <1 kcal/mol. In stereoselective reactions, allylic strain has two effects on the reaction: a steric effect and an electronic effect. The steric effect is that the largest group prefers to be farthest from the alkene. The electronic effect is that the orbitals of the substituents prefer to align anti or outside relative to the reacting orbitals, depending on the reaction. Hydroboration reaction The hydroboration reaction is a useful reaction to functionalize alkenes to alcohols. In the reaction, the trimethylsilyl (TMS) group fulfills two roles in directing the stereoselectivity of the reaction. First, the bulk of the TMS group helps the molecule preferentially adopt a conformation in which the TMS group is not close to the methyl group on the alkene. Second, the TMS group confers a stereoelectronic effect on the molecule by adopting an anti conformation to the directing orbitals of the alkene. For the regioselectivity of the reaction, the TMS group can stabilize the developing partial positive charge on the secondary carbon much better than a methyl group can. Aldol reaction In the highly versatile and widely used Evans aldol reaction, allylic strain played a major role in the development of the reaction. The Z enolate is formed to avoid the allylic strain with the oxazolidinone. The formation of a specific enolate enforces the development of relative stereochemistry throughout the reaction, making the aldol reaction a very predictive and useful methodology for synthesizing chiral molecules. The absolute stereochemistry is then determined by the chirality of the oxazolidinone. There is another aspect of the aldol reaction that is influenced by allylic strain. 
In the second aldol reaction, the product, a 1,3-dicarbonyl, is formed with high diastereoselectivity. This is because the acidity of the proton is significantly reduced: for the deprotonation to occur, the molecule would have to pass through a developing allylic strain in the unfavored conformation. In the favored conformation, the proton is not aligned properly for deprotonation to occur. Diels-Alder reaction In an intramolecular Diels-Alder reaction, asymmetric induction can be achieved through allylic 1,3-strain on the diene or the dienophile. In the following example, the methyl group on the dienophile forces the molecule to adopt the specific six-membered-ring conformation shown. In the model studies to synthesize chlorothricolide, an intramolecular Diels-Alder reaction gave a mixture of diastereomers. But by installing a bulky TMS substituent, the reaction gave the desired product in high diastereoselectivity and regioselectivity in good yield. The bulky TMS substituent helps enhance allylic 1,3-strain in the conformation of the molecule. Total synthesis of natural products In the seminal paper on the total synthesis of (+)-monensin, Kishi and co-workers utilized allylic strain to achieve asymmetric induction in the hydroboration oxidation reaction. The reaction is regioselective and stereoselective. The regioselectivity of the reaction is due to the significant positive character developed at the tertiary carbon. The stereoselectivity of the reaction is due to attack by the borane from the least hindered side, which is the side where the methyl group lies. References External links Advanced Organic Chemistry Lecture Notes (Evans, D. A.; Myers, A. G. Harvard University, 2006-2007) Stereochemistry
Allylic strain
Physics,Chemistry
2,724
33,432,052
https://en.wikipedia.org/wiki/Arista%20Networks
Arista Networks, Inc. (formerly Arastra) is an American computer networking company headquartered in Santa Clara, California. The company designs and sells multilayer network switches to deliver software-defined networking (SDN) for large datacenter, cloud computing, high-performance computing, and high-frequency trading environments. These products include 10/25/40/50/100/200/400/800 gigabit low-latency cut-through Ethernet switches. Arista's Linux-based network operating system, Extensible Operating System (EOS), runs on all Arista products. Corporate history In 2004, Andy Bechtolsheim, Kenneth Duda and David Cheriton founded Arastra (later renamed Arista). Bechtolsheim and Cheriton were able to fund the company themselves. In May 2008, Jayshree Ullal left Cisco after 15 years at the firm. She was appointed CEO of Arista in October 2008. In June 2014, Arista Networks had its initial public offering on the New York Stock Exchange under the symbol ANET. In December 2014, Cisco filed two lawsuits against Arista alleging intellectual property infringement, and the United States International Trade Commission issued limited exclusion and cease-and-desist orders concerning two of the features patented by Cisco and upheld an import ban on infringing products. In 2016, on appeal, the ban was reversed following product changes and two overturned Cisco patents, and Cisco's claim was dismissed. In August 2018, Arista agreed to pay Cisco $400 million as part of a settlement that included a release for all claims of infringement by Cisco, dismissal of Arista's antitrust claims against Cisco, and a 5-year stand-down between the companies. In August 2018, Arista Networks acquired Mojo Networks. In September 2018, Arista Networks acquired Metamako and integrated their low-latency product line as the 7130 series. In February 2020, Arista acquired Big Switch Networks. In October 2020, Arista acquired Awake Security. Arista's CEO, Jayshree Ullal, was named to Barron's list of World's Best CEOs in 2018 and 2019. In August 2022, Arista Networks acquired Pluribus Networks, a unified cloud network company, for an undisclosed sum. Products Extensible Operating System EOS is Arista's network operating system, and comes as one image that runs across all Arista devices or in a virtual machine (VM). EOS runs on an unmodified Linux kernel with a userland that was initially Fedora-based. The userland has since been rebased on CentOS and, later, AlmaLinux. There are more than 100 independent regular processes, called agents, responsible for different aspects and features of the switch, including drivers that manage the switching application-specific integrated circuits (ASICs), the command-line interface (CLI), Simple Network Management Protocol (SNMP), Spanning Tree Protocol, and various routing protocols. All the state of the switch and its various protocols is centralized in another process, called Sysdb. Separating processing (carried out by the agents) from state (in Sysdb) gives EOS two important properties. The first is software fault containment, which means that if a software fault occurs, any damage is limited to one agent. The second is stateful restarts: since the state is stored in Sysdb, when an agent restarts it picks up where it left off. Since agents are independent processes, they can also be upgraded while the switch is running (a feature called ISSU – In-Service Software Upgrade). 
The fact that EOS runs on Linux allows the usage of common Linux tools on the switch itself, such as tcpdump or configuration management systems. EOS provides extensive application programming interfaces (APIs) to communicate with and control all aspects of the switch. To showcase EOS' extensibility, Arista developed a module named CloudVision that extends the CLI to use Extensible Messaging and Presence Protocol (XMPP) as a shared message bus to manage and configure switches. This was implemented simply by integrating an existing open-source XMPP Python library with the CLI. Programmability In addition to all the standard programming and scripting abilities traditionally available in a Linux environment, EOS can be programmed using different mechanisms: Advanced Event Management can be used to react to various events and automatically trigger CLI commands, execute arbitrary scripts or send alerts when state changes occur in the switch, such as an interface going down or a virtual machine migrating to another host. Event Monitor tracks changes made to the medium access control (MAC), Address Resolution Protocol (ARP), and routing tables in a local SQLite database for later querying using standard Structured Query Language (SQL) queries. eAPI (External API) offers a versioned JSON-RPC interface to execute CLI commands and retrieve their output in structured JSON objects. Ethernet switches Arista's product line can be separated into different product families: 7500R series: Modular chassis with a virtual output queueing (VOQ) fabric supporting from 4 to 16 store-and-forward line cards delivering line-rate non-blocking 10GbE, 40GbE, and 100GbE performance in a 150 Tbit/s fabric supporting a maximum of 576 100GbE ports with 384 GB of packet buffer. Each 100GbE port can also operate as a 40GbE or 4x10GbE port, thus effectively providing 2304 line-rate 10GbE ports with large routing tables. 7300X, 7300X3 and 7320X series: Modular chassis with 4 or 8 line cards in a choice of 10G, 40G and 100G options with 6.4 Tbit/s of capacity per line card, for a fabric totaling up to 50 Tbit/s of capacity for up to 1024 10GbE ports. Unlike the 7500 series, 10GBASE-T is available on 7300 series line cards. 7280R series: 1U and 2U systems with a common architecture to the 7500R series, deep-buffer VOQ and large routing tables. Many different speed and port combinations from 10GbE to 100GbE. 7200X series: 2U low-latency high-density line-rate 100GbE and 40GbE switches, with up to 12.8 Tbit/s of forwarding capacity. 7170 series: High-performance multi-function programmable platforms, a set of fixed 100G platforms based on the Barefoot Tofino packet processor, enabling the data plane to be customized using EOS and P4 profiles. 7160 series: 1U programmable high-performance range of 10 GbE, 25 GbE and 100 GbE switches with support for AlgoMatch technology and a software-upgradeable packet processor 7150S series: 1U ultra-low-latency cut-through line-rate 10 Gb switches. Port-to-port latency is sub-380ns, regardless of the frame size. Unlike the earlier 7100 series, the switch silicon can be re-programmed to add new features that work at wire speed, such as Virtual Extensible LAN (VXLAN) or network address translation (NAT/PAT). 7130 series (7130, 7130L, 7130E): 1U and 2U ultra-low-latency Layer 1 and programmable switches. Layer 1 switching enables mirroring and software-defined port routing with port-to-port latency starting from 4ns, depending on physical distance. 
The E and L variants allow running custom FPGA applications directly on the switch with a port-to-FPGA latency as low as 3ns. This series comes from the original Metamako product line acquired by Arista Networks in 2018 and runs a combination of the MOS and Arista EOS operating systems. 7050X and 7060X series: 1U and 2U low-latency cut-through line-rate 10GbE/25GbE, 40GbE and 100GbE switches. This product line offers higher port density than the 7150 series, in a wider choice of port options and interface speeds, at the expense of slightly increased latency (1µs or less). The 7050X and 7060X series are based on Broadcom Trident and Tomahawk merchant silicon. 7020R series: 1U store-and-forward line-rate switches, offered either as a 1 Gb top-of-rack switch with 6x10 Gb uplinks or as a 10G switch with 100G uplinks. These switches use a deep-buffer architecture, with 3 GB of packet memory. 7010 series: 1U low-power (52W) line-rate 1 Gb top-of-rack switch, with 4x10 Gb uplinks. The low latency of Arista switches has made them prevalent in high-frequency trading environments, such as the Chicago Board Options Exchange (the largest U.S. options exchange) and RBC Capital Markets. As of October 2009, one third of its customers were big Wall Street firms. Arista's devices are multilayer switches, which support a range of layer 3 protocols, including IGMP, Virtual Router Redundancy Protocol (VRRP), Routing Information Protocol (RIP), Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), IS-IS, and OpenFlow. The switches are also capable of layer 3 or layer 4 equal-cost multi-path routing (ECMP), and of applying per-port L3/L4 access-control lists (ACLs) entirely in hardware. In November 2013, Arista Networks introduced the Spline network, combining leaf and spine architectures into a single-tier network, aiming to cut operating costs. Arista Community Central Arista Community Central is a centralized resource created by Arista Networks for customers, partners, and technical professionals. The community serves as a platform for sharing knowledge, engaging in discussions, and accessing various technical resources related to Arista's networking technologies. The community uses an AI-powered search engine intended to surface relevant results. What the Community Offers Knowledge Base: A comprehensive knowledge base that includes technical articles, guides, and best practices for Arista products and solutions. It covers topics such as troubleshooting, tech tips, and configuration articles organized by technology, aiding users in managing and supporting their Arista products. Community Forum: The forum allows users to ask questions and participate in discussions related to Arista's products. While the forum is publicly viewable, participation is limited to registered users. Videos & Webinars: Arista Community Central also provides access to recorded webinars and technology-related videos hosted on the Community YouTube channel, aimed at deepening users' understanding of Arista's technology offerings. 
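As a concrete illustration of the eAPI mechanism described under Programmability above, the sketch below issues a CLI command over eAPI's JSON-RPC interface using the generic Python requests library. The runCmds method and the payload shape follow Arista's published eAPI examples; the switch address, credentials, TLS handling, and the response fields printed here are placeholder assumptions for a lab setup, not a definitive integration.

```python
# Hedged sketch: run "show version" on an EOS switch via eAPI (JSON-RPC over
# HTTPS). Assumes eAPI has been enabled on the switch ("management api
# http-commands" in EOS configuration); host and credentials are placeholders.
import requests

EAPI_URL = "https://192.0.2.1/command-api"  # hypothetical switch address

payload = {
    "jsonrpc": "2.0",
    "method": "runCmds",  # eAPI's documented entry point for CLI commands
    "params": {"version": 1, "cmds": ["show version"], "format": "json"},
    "id": "1",
}

resp = requests.post(
    EAPI_URL,
    json=payload,
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab-only: skip TLS certificate verification
    timeout=10,
)
resp.raise_for_status()
result = resp.json()["result"][0]  # one structured dict per command issued
print(result.get("version"), result.get("systemMacAddress"))
```

Because each command comes back as a structured JSON object rather than screen-scraped text, scripts like this can be extended to poll state or push configuration without fragile CLI parsing; in production one would verify TLS certificates and use the switch's configured credentials.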
Major competitors Extreme Networks Juniper Networks Cisco Systems Hewlett Packard Enterprise (Aruba Networks division) Nokia References External links Arista Community Central Arista Community Central YouTube Channel Networking hardware companies Companies based in Santa Clara, California American companies established in 2004 Networking companies of the United States Electronics companies established in 2004 Companies listed on the New York Stock Exchange 2014 initial public offerings Computer hardware companies Computer companies of the United States
Arista Networks
Technology
2,334
266,430
https://en.wikipedia.org/wiki/Shaper
In machining, a shaper is a type of machine tool that uses linear relative motion between the workpiece and a single-point cutting tool to machine a linear toolpath. Its cut is analogous to that of a lathe, except that it is (archetypally) linear instead of helical. A wood shaper is a functionally different woodworking tool, typically with a powered rotating cutting head and a manually fed workpiece; it is usually known simply as a shaper in North America and as a spindle moulder in the UK. A metalworking shaper is somewhat analogous to a metalworking planer, with the cutter riding a ram that moves relative to a stationary workpiece, rather than the workpiece moving beneath the cutter. The ram is typically actuated by a mechanical crank inside the column, though hydraulically actuated shapers are increasingly used. Adding axes of motion to a shaper can yield helical toolpaths, as also done in helical planing. Process A single-point cutting tool is rigidly held in the tool holder, which is mounted on the ram. The workpiece is rigidly held in a vise or clamped directly on the table. The table may be supported at the outer end. The ram reciprocates so that the cutting tool, held in the tool holder, moves forwards and backwards over the workpiece. In a standard shaper, cutting takes place during the forward stroke of the ram, while the return stroke is idle; the return is governed by a quick return mechanism. The depth of cut is adjusted by moving the workpiece, and the workpiece is fed across the tool by a pawl-and-ratchet mechanism. Types Shapers are mainly classified as standard, draw-cut, horizontal, universal, vertical, geared, crank, hydraulic, contour and traveling-head, with the horizontal arrangement being the most common. Vertical shapers are generally fitted with a rotary table to enable curved surfaces to be machined (the same idea as in helical planing). The vertical shaper is essentially the same thing as a slotter (slotting machine), although technically a distinction can be made if one defines a true vertical shaper as a machine whose slide can be moved from the vertical; a slotter is fixed in the vertical plane. Operation The workpiece mounts on a rigid, box-shaped table in front of the machine. The height of the table can be adjusted to suit the workpiece, and the table can traverse sideways underneath the reciprocating tool, which is mounted on the ram. Table motion may be controlled manually, but is usually advanced by an automatic feed mechanism acting on the feedscrew. The ram slides back and forth above the work. At the front end of the ram is a vertical tool slide that may be adjusted to either side of the vertical plane along the stroke axis. This tool slide holds the clapper box and tool post, from which the tool can be positioned to cut a straight, flat surface on the top of the workpiece. The tool slide permits feeding the tool downwards to deepen a cut. This flexibility, coupled with the use of specialized cutters and toolholders, enables the operator to cut internal and external gear teeth. The ram is adjustable for stroke and, due to the geometry of the linkage, it moves faster on the return (non-cutting) stroke than on the forward, cutting stroke. This return stroke is governed by a quick return mechanism (a worked example of the resulting time ratio follows the Uses list below). Uses The most common use is to machine straight, flat surfaces, but with ingenuity and some accessories a wide range of work can be done. Other examples of its use are: Keyways in the hub of a pulley or gear can be machined without resorting to a dedicated broaching setup. 
Dovetail slides Internal splines and gear teeth. Keyway, spline, and gear tooth cutting in blind holes Cam drums with toolpaths of the type that in CNC milling terms would require 4- or 5-axis contouring or turn-mill cylindrical interpolation It is even possible to obviate wire EDM work in some cases. Starting from a drilled or cored hole, a shaper with a boring-bar type tool can cut internal features that do not lend themselves to milling or boring (such as irregularly shaped holes with tight corners). Smoothing of a rough surface History Samuel Bentham developed a shaper between 1791 and 1793. However, Roe (1916) credits James Nasmyth with the invention of the shaper in 1836. Shapers were very common in industrial production from the mid-19th century through the mid-20th. In current industrial practice, shapers have been largely superseded by other machine tools (especially of the CNC type), including milling machines, grinding machines, and broaching machines. But the basic function of a shaper is still sound; tooling for them is minimal and very cheap to reproduce; and they are simple and robust in construction, making their repair and upkeep easily achievable. Thus, they are still popular in many machine shops, from jobbing shops or repair shops to tool and die shops, where only one or a few pieces are required to be produced, and the alternative methods are cost- or tooling-intensive. They also have considerable retro appeal to many hobbyist machinists, who are happy to obtain a used shaper or, in some cases, even to build a new one from scratch. See also Planer (metalworking) References Bibliography External links Lathes.co.uk information archive on hand-powered shapers YouTube video of shaper mechanism YouTube video of a vintage shaper in action YouTube video of a newly built hobbyist shaper in action Various Types of Shaper Tools Machine tools Metalworking tools
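As a worked example of the quick return mechanism described in the Process and Operation sections: in the common crank-and-slotted-lever arrangement, the crank sweeps a larger angle during the cutting stroke than during the return stroke, giving a cutting-to-return time ratio of (180° + 2φ)/(180° − 2φ), where sin φ = r/d for crank radius r and distance d between the crank centre and the lever pivot. The sketch below uses illustrative dimensions, not those of any particular machine, and the function name is ours.

import math

def quick_return_ratio(r: float, d: float) -> float:
    # Cutting-stroke time divided by return-stroke time for a
    # crank-and-slotted-lever quick return mechanism (requires r < d).
    phi = math.degrees(math.asin(r / d))  # half-angle of lever oscillation
    return (180 + 2 * phi) / (180 - 2 * phi)

# Illustrative example: 75 mm crank radius, pivots 300 mm apart.
print(round(quick_return_ratio(75, 300), 2))  # about 1.38

For these proportions the ram spends roughly 38% longer cutting than returning, which is exactly the asymmetry the Operation section describes.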
Shaper
Engineering
1,181