id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
606,392 | https://en.wikipedia.org/wiki/Lathe%20%28graphics%29 | In 3D computer graphics, a lathed object is a 3D model whose vertex geometry is produced by rotating the points of a spline or other point set around a fixed axis. The lathing may be partial; the amount of rotation is not necessarily a full 360 degrees. The point set providing the initial source data can be thought of as a cross section through the object along a plane containing its axis of radial symmetry.
The reason the lathe has this name is because it creates symmetrical objects around a rotational axis, just like a real lathe would.
Lathes are very similar to surfaces of revolution. However, lathes are constructed by rotating a curve defined by a set of points instead of a function. Note that this means that lathes can be constructed by rotating closed curves or curves that double back on themselves (such as a torus), whereas a surface of revolution cannot, because such curves cannot be described by functions.
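As an illustration of the technique described above, the following Python sketch (not tied to any particular graphics package; the function name and the profile data are made up for the example) generates the vertex rings of a lathed object by rotating a 2D profile around the Y axis, with an optional partial sweep:

```python
import math

def lathe(profile, segments=32, sweep_degrees=360.0):
    """Rotate a 2D profile (list of (radius, height) points) about the Y axis.

    Returns one ring of 3D vertices per rotation step; adjacent rings can be
    joined into quads or triangles to form the final mesh.
    """
    rings = []
    for step in range(segments + 1):
        angle = math.radians(sweep_degrees) * step / segments
        ring = [(r * math.cos(angle), y, r * math.sin(angle)) for r, y in profile]
        rings.append(ring)
    return rings

# A vase-like cross section swept through a full revolution; a partial lathe
# would simply use sweep_degrees < 360.
profile = [(0.0, 0.0), (1.0, 0.0), (1.2, 0.5), (0.8, 1.0), (0.9, 1.5)]
mesh_rings = lathe(profile, segments=24)
```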
See also
Surface of revolution
Solid of revolution
Loft (3D)
Computer-aided design | Lathe (graphics) | [
"Engineering"
] | 205 | [
"Computer-aided design",
"Design engineering"
] |
1,018,642 | https://en.wikipedia.org/wiki/Seawall | A seawall (or sea wall) is a form of coastal defense constructed where the sea, and associated coastal processes, impact directly upon the landforms of the coast. The purpose of a seawall is to protect areas of human habitation, conservation, and leisure activities from the action of tides, waves, or tsunamis. As a seawall is a static feature, it will conflict with the dynamic nature of the coast and impede the exchange of sediment between land and sea.
Seawall designs factor in local climate, coastal position, wave regime (determined by wave characteristics and effectors), and value (morphological characteristics) of landform. Seawalls are hard engineering shore-based structures that protect the coast from erosion. Various environmental issues may arise from the construction of a seawall, including the disruption of sediment movement and transport patterns. Combined with a high construction cost, this has led to increasing use of other soft engineering coastal management options such as beach replenishment.
Seawalls are constructed from various materials, most commonly reinforced concrete, boulders, steel, or gabions. Other possible construction materials include vinyl, wood, aluminum, fiberglass composite, and biodegradable sandbags made of jute and coir. In the UK, seawall also refers to an earthen bank used to create a polder, or a dike construction. The type of material used for construction is hypothesized to affect the settlement of coastal organisms, although the precise mechanism has yet to be identified.
Types
A seawall works by reflecting incident wave energy back into the sea, thus reducing the energy available to cause erosion. Seawalls have two specific weaknesses. Wave reflection from the wall may result in hydrodynamic scour and subsequent lowering of the sand level of the fronting beach. Seawalls may also accelerate the erosion of adjacent, unprotected coastal areas by affecting the littoral drift process.
Man-made tsunami barrier designs range from built reefs and planted forests to above-ground and submerged seawalls. Starting in January 2005, just weeks after the 2004 Indian Ocean earthquake, India began planting Casuarina and coconut saplings on its coast as a natural barrier against future disasters. Studies have found that an offshore tsunami wall could reduce tsunami wave heights by up to 83%.
The appropriate seawall design depends on location-specific aspects, including the surrounding erosion processes. There are three main types of seawalls: vertical; curved or stepped; and mounds.
Natural barriers
A report published by the United Nations Environment Programme (UNEP) suggests that the tsunami of 26 December 2004 caused less damage in the areas where natural barriers were present, such as mangroves, coral reefs or coastal vegetation. A Japanese study of this tsunami in Sri Lanka used satellite imagery modelling to establish the parameters of coastal resistance as a function of different types of trees. Natural barriers such as coral reefs and mangrove forests impeded the spread of the tsunami, slowed the flow of coastal waters, and mitigated the flooding and surge.
Trade-offs
A cost-benefit approach is an effective way to determine whether a seawall is appropriate and whether the benefits are worth the expense. Besides controlling erosion, consideration must be given to the effects of hardening a shoreline on natural coastal ecosystems and on human property and activities. A seawall is a static feature which can conflict with the dynamic nature of the coast and impede the exchange of sediment between land and sea. Weighing these positive and negative effects allows seawalls to be compared with other coastal management options, such as beach nourishment.
Generally, seawalls can be a successful way to control coastal erosion, but only if they are constructed well and out of materials that can withstand the force of ongoing wave energy. Some understanding is needed of the coastal processes and morphodynamics specific to the seawall location. Seawalls can be very helpful; they can offer a more long-term solution than soft engineering options, additionally providing recreation opportunities and protection from extreme events as well as everyday erosion. Extreme natural events expose weaknesses in the performance of seawalls, and analyses of these can lead to future improvements and reassessment.
Issues
Sea level rise
Sea level rise creates an issue for seawalls worldwide because it raises both the mean water level and the height of waves during extreme weather events, which existing seawall heights may be unable to cope with. The most recent analyses of long, good-quality tide gauge records (corrected for glacial isostatic adjustment and, where possible, for other vertical land motions using the Global Positioning System, GPS) indicate a mean rate of sea level rise of 1.6–1.8 mm/yr over the twentieth century. The Intergovernmental Panel on Climate Change (IPCC) (1997) suggested that sea level rise over the next 50–100 years will accelerate, with a projected increase in global mean sea level of +18 cm by 2050. These figures are reinforced by Hannah (1990), who calculated a similar rise of +16–19.3 cm over 1900–1988. Superstorm Sandy in 2012 is an example of the devastating effects rising sea levels can have when combined with a severe storm: it sent a storm surge of 4–5 m onto New Jersey's and New York's barrier islands and urban shorelines, causing an estimated $70 billion in damage. The problem could be addressed by further modeling to determine how much existing seawalls need to be raised and reinforced to remain safe under both conditions. Sea level rise will also bring a higher risk of flooding and taller tsunamis.
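A rough back-of-the-envelope check, in Python, of how the quoted tide-gauge rate compares with Hannah's 1900–1988 figures (purely illustrative; the numbers are only those cited above):

```python
# Rough consistency check of the cited sea level figures (illustrative only).
rate_low, rate_high = 1.6, 1.8            # mm per year, 20th-century tide gauge estimate
years = 1988 - 1900                        # period analysed by Hannah (1990)
rise_low_cm = rate_low * years / 10.0      # about 14.1 cm
rise_high_cm = rate_high * years / 10.0    # about 15.8 cm
# Hannah's observed +16-19.3 cm over the same period sits at or slightly above
# this band, while the IPCC's +18 cm projection by 2050 implies an accelerating rate.
print(rise_low_cm, rise_high_cm)
```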
Hydrostatic water pressure
Seawalls, like all retaining walls, must relieve the buildup of water pressure. Water pressure buildup is caused when groundwater is not drained from behind the seawall. Groundwater against a seawall can be from the area's natural water-table, rain percolating into the ground behind the wall and waves overtopping the wall. The water table can also rise during periods of high water (high tide). Lack of adequate drainage can cause the seawall to buckle, move, bow, crack, or collapse. Sinkholes may also develop as the escaping water pressure erodes soil through or around the drainage system.
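To give a sense of scale for the pressure buildup described above, here is a minimal Python sketch (an illustration of the basic hydrostatics only, not a design calculation; the density and water height are assumed values):

```python
RHO_WATER = 1000.0   # kg/m^3 (fresh groundwater; seawater is roughly 1025)
G = 9.81             # m/s^2

def hydrostatic_force_per_metre(water_height_m):
    """Resultant force per metre of wall length from the triangular pressure
    distribution p = rho * g * h acting on the back of an undrained wall."""
    return 0.5 * RHO_WATER * G * water_height_m ** 2   # newtons per metre of wall

# Two metres of trapped groundwater already pushes with roughly 19.6 kN per
# metre of wall, which is why weep holes and drains behind seawalls matter.
print(hydrostatic_force_per_metre(2.0))
```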
Extreme events
Extreme events also pose a problem, as it is not easy to predict the strength of hurricane- or storm-induced waves compared with normal, expected wave patterns. An extreme event can dissipate hundreds of times more energy than everyday waves, and designing structures that will withstand the force of coastal storms is difficult; often the outcome becomes unaffordable. For example, the Omaha Beach seawall in New Zealand was designed to prevent erosion from everyday waves only, and when a storm in 1976 carved out ten meters behind the existing seawall, the whole structure was destroyed.
Ecosystem impacts
The addition of seawalls near marine ecosystems can lead to increased shadowing in the surrounding waters. Shadowing reduces light and visibility within the water, which may disrupt the distribution as well as the foraging capabilities of certain species. The sediment surrounding seawalls also tends to have less favorable physical properties (higher calcification levels, less organized crystalline structure, low silicon content, and less macroscale roughness) than that of natural shorelines, which can present issues for species that reside on the seafloor.
The Living Seawalls project, which was launched in Sydney, Australia, in 2018, aims to help many of the marine species in Sydney Harbour to flourish, thus enhancing its biodiversity, by modifying the design of its seawalls. It entails covering parts of the seawalls with specially-designed tiles that mimic natural microhabitats - with crevices and other features that more closely resemble natural rocks. In September 2021, the Living Seawalls project was announced as a finalist for the international environment award the Earthshot Prize. Since 2022 it has become part of Project Restore, under the auspices of the Sydney Institute of Marine Science.
Other issues
Some further issues include a lack of long-term trend data on seawall effects, due to the relatively short duration of data records; modeling limitations; and comparisons between projects that are invalid or unequal because of differing beach types, materials, currents, and environments. Lack of maintenance is also a major issue with seawalls. In 2013, more than 5,000 feet (1,500 m) of seawall was found to be crumbling in Punta Gorda, Florida. Residents of the area pay hundreds of dollars each year for a seawall repair program. The problem is that most of the seawalls are over half a century old and are being destroyed by heavy downpours alone. If not kept in check, seawalls lose effectiveness and become expensive to repair.
History and examples
Seawall construction has existed since ancient times. In the first century BCE, Romans built a seawall or breakwater at Caesarea Maritima, creating an artificial harbor (Sebastos Harbor). The construction used pozzolana concrete, which hardens in contact with seawater. Barges were constructed, filled with the concrete, floated into position, and sunk. The resulting harbor, breakwater and seawall are still in existence today – more than 2,000 years later.
The oldest known coastal defense is believed to be a 100-meter row of boulders in the Mediterranean Sea off the coast of Israel. Boulders were positioned in an attempt to protect the coastal settlement of Tel Hreiz from sea rise following the last glacial maximum. Tel Hreiz was discovered in 1960 by divers searching for shipwrecks, but the row of boulders was not found until storms cleared a sand cover in 2012.
More recently, seawalls were constructed in 1623 on Canvey Island, UK, after great floods in the Thames estuary prompted the construction of protection against further events in this flood-prone area. Since then, seawall design has become more complex and intricate in response to improvements in materials, technology, and the understanding of how coastal processes operate. This section outlines some key case studies of seawalls in chronological order, describing how they have performed in response to tsunamis or ongoing natural processes and how effective they were in these situations. Analyzing the successes and shortcomings of seawalls during severe natural events exposes their weaknesses and makes areas for future improvement visible.
Canada
The Vancouver Seawall is a stone seawall constructed around the perimeter of Stanley Park in Vancouver, British Columbia. It was initially built because waves created by ships passing through the First Narrows were eroding the area between Prospect Point and Brockton Point. Construction began in 1917; since then the pathway has become one of the most used features of the park by both locals and tourists and now extends 22 km in total. The construction of the seawall also provided employment for relief workers during the Great Depression and, in the 1950s, for seamen stationed on Deadman's Island who were facing punishment detail (Steele, 1985).
Overall, the Vancouver Seawall is a prime example of how seawalls can simultaneously provide shoreline protection and a source of recreation which enhances human enjoyment of the coastal environment. It also illustrates that although shoreline erosion is a natural process, human activities, interactions with the coast, and poorly planned shoreline development projects can accelerate natural erosion rates.
India
On December 26, 2004, towering waves of the 2004 Indian Ocean earthquake tsunami crashed against India's south-eastern coastline killing thousands. However, the former French colonial enclave of Pondicherry escaped unscathed. This was primarily due to French engineers who had constructed (and maintained) a massive stone seawall during the time when the city was a French colony. This 300-year-old seawall effectively kept Pondicherry's historic center dry even though tsunami waves drove water above the normal high-tide mark.
The barrier was initially completed in 1735, and over the years the French continued to fortify the wall, piling huge boulders along its coastline to stop erosion from the waves pounding the harbor. At its highest, the barrier running along the water's edge rises well above sea level. The boulders, some weighing up to a ton, are weathered black and brown. The seawall is inspected every year, and whenever gaps appear or the stones sink into the sand, the government adds more boulders to keep it strong.
The Union Territory of Pondicherry recorded around 600 deaths from the huge tsunami waves that struck India's coast after the mammoth underwater earthquake (which measured 9.0 on the moment magnitude scale) off Indonesia, but most of those killed were fishermen who lived in villages beyond the artificial barrier, which underscores the protection the seawall provided.
Japan
At least 43 percent of Japan's coastline is lined with concrete seawalls or other structures designed to protect the country against high waves, typhoons, or even tsunamis. During the 2011 Tōhoku earthquake and tsunami, the seawalls in most areas were overwhelmed. In Kamaishi, waves surmounted the seawall – the world's largest, erected a few years earlier in the city's harbor at a cost of $1.5 billion – and eventually submerged the city center.
The risks of dependence on seawalls were most evident in the crisis at the Fukushima Dai-ichi and Fukushima Dai-ni nuclear power plants, both located along the coast close to the earthquake zone, as the tsunami washed over the walls that were supposed to protect the plants. Arguably, the seawalls bought citizens an extra margin of time to evacuate and absorbed some of the wave energy that would otherwise have driven the water higher into the backs of coastal valleys. On the other hand, the seawalls also trapped water and delayed its retreat.
The failure of the world's largest seawall, which cost $1.5 billion to construct, shows that building stronger seawalls to protect larger areas would have been even less cost-effective. In the case of the ongoing crisis at the nuclear power plants, higher and stronger seawalls should have been built if power plants were to be sited there at all. Fundamentally, the devastation in coastal areas and a final death toll predicted to exceed 10,000 could push Japan to redesign its seawalls or to consider more effective alternative methods of coastal protection for extreme events. Such hardened coastlines can also provide a false sense of security to property owners and local residents, as was evident in this situation.
Seawalls along the Japanese coast have also been criticized for cutting settlements off from the sea, making beaches unusable, presenting an eyesore, disturbing wildlife, and being unnecessary.
United States
After 2012's Hurricane Sandy, New York City Mayor Bill de Blasio invested $3 billion in a hurricane restoration fund, with part of the money dedicated to building new seawalls and protection from future hurricanes.
A New York Harbor Storm-Surge Barrier has been proposed, but not voted on or funded by Congress or the State of New York.
In Florida, tiger dams are used to protect homes near the coast.
See also
Breakwater (structure)
Mole (architecture)
References
External links
Channel Coastal Observatory – Seawalls
Seawalls and defences on the Isle of Wight
MEDUS (Maritime Engineering Division University Salerno)
"Japan may rethink seawalls after tsunami", New York Times, March 14, 2011
General overview of residential and small commercial steel seawall construction
Coastal engineering | Seawall | [
"Engineering"
] | 3,204 | [
"Coastal engineering",
"Civil engineering"
] |
1,018,774 | https://en.wikipedia.org/wiki/Joseph%20M.%20Acaba | Joseph Michael Acabá (born May 17, 1967) is an American educator, hydrogeologist, and NASA astronaut. In May 2004, he became the first person of Puerto Rican ancestry to be named as a NASA astronaut candidate, when he was selected as a member of NASA Astronaut Training Group 19. He completed his training on February 10, 2006, and was assigned to STS-119, which flew from March 15 to 28, 2009, to deliver the final set of solar arrays to the International Space Station. He is the first person of Puerto Rican origin, and the twelfth of fifteen people of Ibero-american heritage to have flown to space.
Acabá served as a flight engineer aboard the International Space Station, having launched on May 15, 2012. He arrived at the space station on May 17 and returned to Earth on September 17, 2012. Acaba returned to the International Space Station in 2017 as a member of Expedition 53/54. In 2023, Acaba was appointed the Chief of the Astronaut Office.
Early life and education
Acaba's parents, Ralph and Elsie Acabá, from Hatillo, Puerto Rico, moved in the mid-1960s to Inglewood, California, where he was born. They later moved to Anaheim, California. Since his childhood, Acaba enjoyed reading, especially science fiction. In school, he excelled in both science and math. As a child, his parents constantly exposed him to educational films, but it was the 8-mm film showing astronaut Neil Armstrong's Moon landing that intrigued him about outer space. During his senior year in high school, Acaba became interested in scuba diving and became a certified scuba diver through a job training program at his school. This experience inspired him to further his academic education in geology. In 1985, he graduated with honors from Esperanza High School in Anaheim.
In 1990, Acaba received his bachelor's degree in geology from the University of California, Santa Barbara, and in 1992, he earned his master's degree in geology from the University of Arizona. Acaba was a sergeant in the United States Marine Corps Reserve where he served for six years. He also worked as a hydrogeologist in Los Angeles, California. Acaba spent two years in the United States Peace Corps and trained over 300 teachers in the Dominican Republic in modern teaching methodologies. He then served as island manager of the Caribbean Marine Research at Lee Stocking Island in the Exumas, Bahamas.
Upon his return to the United States, Acaba moved to Florida, where he became shoreline revegetation coordinator in Vero Beach. He taught one year of science and math in high school and four years at Dunnellon Middle School. He also briefly taught at Melbourne High School in Melbourne, Florida. Upon his return to Florida in fall 2012, Acaba began coursework in the College of Education at Texas Tech University. He earned his Master of Education, curriculum and instruction from Texas Tech University in 2015.
NASA career
On May 6, 2004, Acaba and ten other people were selected by NASA from 99 applicants as astronaut candidates. NASA's administrator, Sean O'Keefe, announced the members of the "19th group of Astronaut Candidates" in the presence of John Glenn, a kind of public presentation that had not been seen since 1959, when the original group of astronauts was introduced to the world. Acaba, who was selected as an Educator Mission Specialist, completed his astronaut training on February 10, 2006, along with the other ten astronaut candidates. Upon completion of his training, Acaba was assigned to the Hardware Integration Team in the International Space Station branch, working technical issues with European Space Agency (ESA) hardware.
STS-119
Acaba was assigned to the crew of STS-119 as a mission specialist educator; the mission launched on March 15, 2009, at 7:43 p.m., after NASA engineers repaired a leaky gas venting system the previous week, to deliver the final set of solar arrays to the International Space Station. Acaba, who carried a Puerto Rican flag on his person, requested that the crew be awakened on March 19 (Day 5) with the Puerto Rican folk song "Qué Bonita Bandera" (What a Beautiful Flag), referring to the Puerto Rican flag, written in 1971 by Florencio Morales Ramos (Ramito) and sung by Jose Gonzalez and Banda Criolla.
On March 20, he provided support for the mission's first spacewalk. On March 21, he performed a spacewalk with Steve Swanson, helping to successfully unfurl the final "wings" of the solar array that would augment power to the ISS. Two days later, Acaba performed his second EVA of the mission with crew member Ricky Arnold; its main task was to help move the CETA carts outside the station to a different location. On March 28, Space Shuttle Discovery and its seven-person crew safely touched down on runway 15 at NASA's Kennedy Space Center in Florida at 3:14 p.m. EDT. Acaba said he was amazed at the views from the space station.
Expedition 31/32
On May 15, 2012, Acaba was one of three crew members launching from Kazakhstan aboard the Soyuz TMA-04M spacecraft to the International Space Station. He and his fellow crew members, Gennady Padalka and Sergei Revin, arrived and docked with the space station two days after launch, on May 17 at 4:36 UTC. Acaba, Padalka, and Revin returned to Earth on September 17, 2012, after nearly 125 days in space.
Between space missions
Acaba served as the Branch Chief of the International Space Station Operations branch. The office is responsible for mission preparation and on-orbit support of space station crews.
Until being selected as a flight engineer for Expedition 54/55, Acaba served as Director of Operations – Russia in Star City, supporting crew training on Soyuz and Russian segment systems.
In September 2019, Acaba served as a cavenaut in ESA CAVES training (between Italy and Slovenia), spending six nights underground simulating a mission exploring another planet.
Expedition 53/54
In 2017 it was announced that Acaba would return to the ISS for his third mission, on board Soyuz MS-06. The Soyuz vehicle was originally slated to launch with a crew of two because of the Russian crew cuts on the ISS for 2017; however, at short notice it was decided that the third seat would be filled by an experienced astronaut and funded by Roscosmos to cancel out owed debts. Acaba's backup for the mission was Shannon Walker, who was scheduled to fly as prime crew on Soyuz MS-12 as part of Expedition 59/60, although as of December 2018 she was not assigned to that crew.
Acaba launched on Soyuz MS-06 on September 12, 2017, performing a six-hour rendezvous with the ISS. On October 20, 2017, Acaba and Randy Bresnik performed an EVA, lasting 6 hours and 49 minutes, to continue the lubrication of the new end effector on the robotic arm and to install new cameras. During the mission, Acaba's home in Houston was flooded by Hurricane Harvey, and Hurricane Maria struck his native Puerto Rico.
Chief of the Astronaut Office
In February 2023, Acaba became Chief of the Astronaut Office at NASA. Acaba replaced Drew Feustel who was acting chief after Reid Wiseman stepped down from the position.
Recognition
On March 18, 2008, Acaba was honored by the Senate of Puerto Rico, which sponsored his first trip to the Commonwealth of Puerto Rico since being selected for space flight. During his visit, which was announced by then President of the Puerto Rican Senate, Kenneth McClintock, he met with schoolchildren at the Capitol, as well as at the Bayamón, Puerto Rico Science Park, which includes a planetarium and several surplus NASA rockets among its exhibits.
Acaba returned to Puerto Rico on June 1, 2009. During his visit, he was presented with a proclamation by Governor Luis Fortuño. He spent seven days on the island and came into contact with over 10,000 persons, most of them schoolchildren.
He received the Ana G. Mendez University System Presidential Medal and an Honorary Doctorate from the Polytechnic University of Puerto Rico, where he inaugurated a flight simulator on February 7, 2013, during one of his visits to Puerto Rico to promote the study of math and science among students, as well as to visit his relatives. Caras Magazine named him one of the most influential and exciting Puerto Ricans of 2012.
References
External links
Spacefacts biography of Joseph Acaba
NASA biography
Video of NASA HQ Social event December 2012
1967 births
American educators
American expatriates in the Dominican Republic
American people of Puerto Rican descent
Aquanauts
Crew members of the International Space Station
Educator astronauts
Esperanza High School alumni
Hispanic and Latino American educators
Hispanic and Latino American military personnel
Hispanic and Latino American scientists
Living people
Military personnel from California
NASA civilian astronauts
People from Inglewood, California
Puerto Rican aviators
Puerto Rican United States Marines
Scientists from Anaheim, California
Space Shuttle program astronauts
Spacewalkers
Texas Tech University alumni
United States Marine Corps astronauts
United States Marine Corps reservists
United States Marines
University of Arizona alumni
University of California, Santa Barbara alumni | Joseph M. Acaba | [
"Astronomy"
] | 1,874 | [
"Educator astronauts",
"Astronomy education"
] |
1,019,002 | https://en.wikipedia.org/wiki/Dirac%20measure | In mathematics, a Dirac measure assigns a size to a set based solely on whether it contains a fixed element x or not. It is one way of formalizing the idea of the Dirac delta function, an important tool in physics and other technical fields.
Definition
A Dirac measure is a measure \(\delta_x\) on a set \(X\) (with any \(\sigma\)-algebra \(\Sigma\) of subsets of \(X\)) defined for a given \(x \in X\) and any (measurable) set \(A \in \Sigma\) by
\[ \delta_x(A) = 1_A(x) = \begin{cases} 1, & x \in A, \\ 0, & x \notin A, \end{cases} \]
where \(1_A\) is the indicator function of \(A\).
The Dirac measure is a probability measure, and in terms of probability it represents the almost sure outcome \(x\) in the sample space \(X\). We can also say that the measure is a single atom at \(x\); however, treating the Dirac measure as an atomic measure is not correct when we consider the sequential definition of the Dirac delta, as the limit of a delta sequence. The Dirac measures are the extreme points of the convex set of probability measures on \(X\).
The name is a back-formation from the Dirac delta function; considered as a Schwartz distribution, for example on the real line, measures can be taken to be a special kind of distribution. The identity
\[ \int_X f(y) \, \mathrm{d}\delta_x(y) = f(x), \]
which, in the form
\[ \int_X f(y) \, \delta(y - x) \, \mathrm{d}y = f(x), \]
is often taken to be part of the definition of the "delta function", holds as a theorem of Lebesgue integration.
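A toy Python illustration of the definition and of the integral identity above, treating measurable sets as ordinary Python containers (this is only a finite, discrete sketch, not a general measure-theoretic implementation):

```python
def dirac_measure(x):
    """Return delta_x as a set function: delta_x(A) = 1 if x is in A, else 0."""
    return lambda A: 1 if x in A else 0

def integrate_against_dirac(f, x):
    """Integral of f with respect to delta_x; for a single atom at x the
    Lebesgue integral collapses to the single value f(x)."""
    return f(x)

delta_2 = dirac_measure(2)
assert delta_2({1, 2, 3}) == 1 and delta_2({5, 7}) == 0
assert integrate_against_dirac(lambda t: t ** 2, 2) == 4
```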
Properties of the Dirac measure
Let \(\delta_x\) denote the Dirac measure centred on some fixed point \(x\) in some measurable space \((X, \Sigma)\).
\(\delta_x\) is a probability measure, and hence a finite measure.
Suppose that \((X, T)\) is a topological space and that \(\Sigma\) is at least as fine as the Borel \(\sigma\)-algebra \(\sigma(T)\) on \(X\).
\(\delta_x\) is a strictly positive measure if and only if the topology \(T\) is such that \(x\) lies within every non-empty open set, e.g. in the case of the trivial topology \(\{\emptyset, X\}\).
Since \(\delta_x\) is a probability measure, it is also a locally finite measure.
If \(X\) is a Hausdorff topological space with its Borel \(\sigma\)-algebra, then \(\delta_x\) satisfies the condition to be an inner regular measure, since singleton sets such as \(\{x\}\) are always compact. Hence, \(\delta_x\) is also a Radon measure.
Assuming that the topology \(T\) is fine enough that \(\{x\}\) is closed, which is the case in most applications, the support of \(\delta_x\) is \(\{x\}\). (Otherwise, \(\operatorname{supp}(\delta_x)\) is the closure of \(\{x\}\) in \((X, T)\).) Furthermore, \(\delta_x\) is the only probability measure whose support is \(\{x\}\).
If \(X\) is \(n\)-dimensional Euclidean space \(\mathbb{R}^n\) with its usual \(\sigma\)-algebra and \(n\)-dimensional Lebesgue measure \(\lambda^n\), then \(\delta_x\) is a singular measure with respect to \(\lambda^n\): simply decompose \(\mathbb{R}^n\) as \(A = \mathbb{R}^n \setminus \{x\}\) and \(B = \{x\}\) and observe that \(\delta_x(A) = 0\) and \(\lambda^n(B) = 0\).
The Dirac measure is a sigma-finite measure.
Generalizations
A discrete measure is similar to the Dirac measure, except that it is concentrated at countably many points instead of a single point. More formally, a measure on the real line is called a discrete measure (with respect to the Lebesgue measure) if its support is at most a countable set.
See also
Discrete measure
Dirac delta function
References
Measures (measure theory) | Dirac measure | [
"Physics",
"Mathematics"
] | 575 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
1,019,188 | https://en.wikipedia.org/wiki/Williamson%20amplifier | The Williamson amplifier is a four-stage, push-pull, Class A triode-output valve audio power amplifier designed by D. T. N. Williamson during World War II. The original circuit, published in 1947 and addressed to the worldwide do it yourself community, set the standard of high fidelity sound reproduction and served as a benchmark or reference amplifier design throughout the 1950s. The original circuit was copied by hundreds of thousands amateurs worldwide. It was an absolute favourite on the DIY scene of the 1950s, and in the beginning of the decade also dominated British and North American markets for factory-assembled amplifiers.
The Williamson circuit was based on the 1934 Wireless World Quality Amplifier by Walter Cocking, with an additional error-amplifier stage and a global negative feedback loop. Deep feedback, triode-connected KT66 power tetrodes, a conservative choice of standing currents, and the use of a wide-bandwidth output transformer all contributed to the performance of the Williamson. It had a modest output power rating of 15 W but surpassed all contemporary designs in having very low harmonic distortion and intermodulation, flat frequency response throughout the audible frequency range, and effective damping of loudspeaker resonances. The 0.1% distortion figure of the Williamson amplifier became the criterion for high fidelity performance that remains valid in the 21st century.
The Williamson amplifier was sensitive to selection and matching of passive components and valves, and prone to unwanted oscillations at infrasonic and ultrasonic frequencies. Enclosing four valve stages and an output transformer in a negative feedback loop was a severe test of design, resulting in a very narrow phase margin or, quite often, no margin at all. Attempts to improve stability of the Williamson could not fix this fundamental flaw. For this reason, and due to high costs of required quality components, manufacturers soon abandoned the Williamson circuit in favour of inherently more stable, cheaper and efficient three-stage, ultralinear or pentode-output designs.
Background
In 1925 Edward W. Kellogg published the first comprehensive theory of audio power amplifier design. Kellogg proposed that the permissible level of harmonic distortion can reach 5%, provided that distortion rises smoothly rather than abruptly, and that it generates only low-order harmonics. Kellogg's work became the de facto industry standard of the interwar period, when most amplifiers were employed in cinemas. Early sound film and public address requirements were low, and customers were content with crude but efficient and affordable transformer-coupled, class B amplifiers. The best theatre amplifiers, built by Western Electric around their 300A and 300B power triodes, far exceeded the average level but were expensive and rare.
By the middle of the 1930s Western Electric and RCA had improved the performance of their experimental audio equipment to a level approaching the modern understanding of high fidelity, but none of these systems could be commercialized yet. They lacked sound sources of matching quality. Industry leaders of the 1930s agreed that the improvement of commercial amplifiers and loudspeakers would make sense only after the introduction of new physical media surpassing low-quality AM broadcasting and shellac records. The Great Depression, World War II and the post-war television boom successively delayed this goal. Development of commercial audio equipment came to a standstill; the few enthusiasts seeking a higher level of fidelity had to literally do it themselves. American DIYers experimented with novel beam tetrodes. Australians preferred traditional push-pull circuits built around directly-heated triodes and complex, expensive interstage transformers.
The British school of thought, led by Walter Cocking of Wireless World, leaned towards push-pull, class A, RC-coupled triode output stages. RC coupling, as opposed to transformer coupling, argued Cocking, extended the amplifier's bandwidth beyond the required minimum of 10 kHz and improved its transient response. Tetrodes and pentodes were undesirable due to higher harmonic distortion and higher output impedance that failed to control the fundamental resonance of the loudspeaker. Cocking wrote that Kellogg's 5% distortion limit was too high for quality amplification, and outlined a different set of requirements - the first definition of high fidelity. Instead of Kellogg's single figure of merit (harmonic distortion), Cocking set three simultaneous targets - low frequency distortion, low harmonic distortion, and low phase distortion. In 1934 Cocking published his first Quality Amplifier design - a two-stage, RC-coupled triode class A amplifier that achieved no more than 2–3% maximum distortion without using feedback. Feedback appeared in his 1943 Wartime Quality Amplifier, built around American 6V6 beam tetrodes; however, both the input stage and the output transformer were placed outside the feedback loop. Cocking's Quality Amplifier family became the foundation of post-war British and Australian audio industry, including the Williamson amplifier.
Development
In 1943, in the middle of World War II, the twenty-year-old Scotsman Theo Williamson failed a mathematics exam and was discharged from the University of Edinburgh. Theo was not physically fit for military service, so the authorities instead drafted him for mandatory civilian work at Marconi-Osram Valve. In April 1944 Williamson transferred from the production line to the Applications Laboratory of the company, where he had enough free time for his own DIY projects. Management did not object, and by the end of 1944 Williamson had conceived, built and tested the amplifier that would soon be known as the Williamson amplifier. Another wartime project, a novel magnetic cartridge, would be commercialized in 1948 as the Ferranti ribbon pickup.
Design targets
Following Cocking's ideas, Williamson devised a different, much stricter set of fidelity requirements:
Negligible non-linear distortion (sum of harmonic distortion and intermodulation products) up to the maximum rated output, at all audible frequencies from 10 to 20000 Hz;
Linear frequency response and constant output power at all audible frequencies;
Negligible phase shift within the audible frequency range;
Good transient response which, in addition to above frequency and phase requirements, demands perfectly constant gain when handling complex waveforms and transients;
Low output impedance and, inversely, high damping factor. At the very least, output impedance of an amplifier must be lower than the loudspeaker impedance;
Output power of 15–20 W for reproduction of orchestral music via a dynamic loudspeaker, with considerably less needed for a more efficient horn loudspeaker.
Williamson reviewed contemporary amplifier configurations, and, just like Cocking, settled on a low-distortion push-pull, class A, triode output stage. Unlike Cocking, Williamson believed that such a stage can deliver high fidelity sound only when the amplifier is governed by a 20–30 dB deep negative feedback loop (and thus the complete amplifier must have 20–30 dB higher open-loop gain to compensate for the effect of feedback). Deep feedback inevitably causes a sudden, harsh onset of distortion at overload, but Williamson was content with this flaw. He argued that it is a price worth paying for an improvement in linearity at medium and high power levels. On the contrary, wrote Williamson, a slow but steady rise of distortion to 3–5%, as advocated by Kellogg, is distinctly unwanted in a high fidelity system.
Prototypes and tests
The valve complement of the original Williamson amplifier was determined by the scarce supply in wartime Britain. The two suitable and available output valves were either the PX25 triode or a triode-connected KT66 beam tetrode. Williamson initially used the PX25, an already obsolete directly-heated triode introduced in 1932. In his second prototype, Williamson used the more efficient KT66, which became the valve of choice in the post-war period. Powered from a +500 V power supply, the KT66 prototype delivered 20 watts at no more than 0.1% distortion. A less costly +425 V power supply enabled 15 watts of output power at no more than 0.1% distortion; this arrangement became standard for the Williamson amplifier and defined its physical layout. The complete prototype system, including the amplifier, the experimental magnetic pickup and a Goodmans full-range speaker in an acoustical labyrinth enclosure, proved to Williamson that a low distortion, deep feedback amplifier indeed sounded superior to amplifiers without feedback. The difference was particularly audible with the best available shellac records, despite the physical limitations of this low-fidelity format.
The prototypes impressed the Marconi management, who granted Williamson unlimited access to the company's test facilities and introduced him to the people from Decca Records. The latter provided Williamson with precious, exclusive test material - sample records of the experimental Decca ffrr system, the first true high fidelity medium in the United Kingdom. These records, which exceeded any preexisting media in sound quality, helped Williamson with fine-tuning his prototypes. He was certain that he was now firmly on the right track, but neither Marconi, nor its parent the General Electric Company were willing to invest in mass production of amplifiers for the civilian market. The design was not interesting to company lawyers either, because it did not contain anything patentable. Williamson merely put together well-known circuits and solutions.
Publication
In February 1946 Williamson left Marconi, moved to Edinburgh and joined Ferranti. A few months later a senior Marconi salesman, who sought new means of promoting the KT66 to general public, noticed Williamson's 1944 report about his amplifier prototypes, and sent it for publication to Wireless World. Chief editor H. F. Smith knew Williamson for his earlier contributions; he contacted the author directly and requested a detailed article written specifically for the DIY readers. Williamson promptly responded, but for unknown reasons the publication, originally slotted for 1946, was delayed until April–May 1947. While the paper was waiting for print, the magazine had published the new version of Cocking's Quality amplifier. Cocking, as the technical editor of Wireless World, certainly had precedence; according to Peter Stinson, he was sceptical about the Williamson amplifier, believing that his own design needed no further improvements.
By 1947 British industry had already released two amplifiers of comparable sound quality. Harold Leak announced production of his Leak Point One in September 1945; later in the same year Peter Walker published the first sketch of his distributed-load output stage that would become the Quad II production model. Leak and Walker tried to commercialize their ideas on the meagre post-war British market; their achievements were practically unknown outside of the United Kingdom. Williamson did the opposite: he donated his design to the worldwide DIY community, thus securing a lasting popular following.
In August 1949 Williamson, responding to letters from the readers, published the "New Version" of his amplifier. The article dealt extensively with construction, tuning and troubleshooting issues; however, its main objective was to address stability issues reported in letters from the readers. Apart from the additional frequency compensation network, a biasing potentiometer and a new, indirectly-heated rectifier valve that was not available in 1947, the circuit remained the same. In October 1949 – January 1950 and May 1952 Williamson published a series of articles on matching preamplifier stages and brief "Replies to Queries" concerning assembly and testing. A collection of articles published by Williamson in 1947–1950 was printed as a standalone 36-page brochure in 1952, with a second edition in 1953. The Williamson amplifier itself, as described in the August 1949 issue of Wireless World, remained unchanged.
Reception
The Williamson amplifier was an instant success. The publication coincided with the resumption of television broadcasting, the beginning of FM broadcasting, the release of the first high fidelity gramophone records (Decca ffrr and the LP record), and the "discovery" of the captured German Magnetophon. The high fidelity media that did not exist in the 1930s became a reality, and the public wanted playback equipment of matching quality. Off-the-shelf amplifiers available in 1947 were not fit for the task. At the same time, electronic components markets were flooded with military surplus, including cheap American 6L6 and 807 power valves. For a while, DIY construction was the only way to obtain high fidelity amplification. Thousands of amateurs began copying the Williamson design; the required transformers and chassis were soon provided by industry.
In September 1947 the Australians R. H. Astor and Fritz Langford-Smith adapted the Williamson circuit for American 6SN7 and 807 valves; a 6L6 variant followed soon after. The British and Australian press was unanimously enthusiastic: "by far the best we have ever tested ... extraordinary linearity and lack of harmonic and intermodulation distortion", "amplifier to end [all] amplifiers", "absolute tops for obtaining natural reproduction" and so on. America lagged behind by about two years: the first reviews appeared in the second half of 1949, and were just as complimentary. American companies adapted the circuit to locally available components, and soon began importing "premium" British valves and transformers, thus launching the market for British hi-fi in the United States. By the end of 1949 the Williamson amplifier became a universally recognized reference design, and a starting point for all valve designs employing global feedback.
The spread of DIY construction and the abundance of publications addressed to amateurs had a solid economic reason: factory-made electronics of the 1940s were too expensive. The industry had not yet reorganized for mass production of affordable consumer products. Home construction of valve electronics was relatively simple and promised considerable savings. The number of home-made Williamson amplifiers is estimated at least in the hundreds of thousands; they absolutely dominated the DIY scene in English-speaking countries. Stereo had not been commercialized yet, so almost all surviving Williamson amplifiers are monaural. Each one differs in minor details, and assembly quality is usually inferior to factory-made models. In the 21st century these monaural amplifiers are commonly sold at online auctions, but finding a matching pair is almost impossible.
Small-scale factory production in the United Kingdom began in February 1948; the first big manufacturer, Rogers, announced production in October 1948. In the early 1950s the Williamson amplifier dominated factory production in both the United Kingdom and the United States; John Frieborn of Radio-Electronics wrote in 1953 that "since Williamson published the first description of his High-Quality Audio Amplifier, other audio designers had two apparent choices, beating him [Williamson] or joining him."
Design features
Specifications
Tube complement, 1947 version: 4x L63 (each equivalent to 6J5), 2x KT66, 1x U52 directly-heated rectifier. The 1949 version also provided for the use of 6SN7 or B65 double triodes, and replaced rectifier with the 53KU indirectly-heated type;
Output power and maximum distortion: 15W RMS at no more than 0.1% THD;
Intermodulation: not specified (Williamson did not have the necessary test equipment);
Frequency range: 10–20,000 Hz at ±0.2 dB; 3–60,000 Hz at ±3 dB;
Phase shift within 10–20,000 Hz: "never exceeds a few degrees" at the extremes of the audio spectrum;
Noise and hum: −85 dB below maximum output, almost entirely consisting of mains-frequency hum.
Topology
The Williamson amplifier is a four-stage, push-pull, class A triode valve amplifier built around a high quality, wideband output transformer. Its second (concertina-type phase splitter, V1B), third (driver, V2A and V2B) and fourth (output, V3 and V4) stages follow Cocking's Quality Amplifier circuit. The added first stage (V1A) is a dedicated error amplifier, which compensates for the loss of gain caused by negative feedback. Williamson optimized operating points of each stage for best linearity with sufficient overload reserve. The output stage is biased into pure class A; traditionally it used triode-connected beam tetrodes or pentodes. With American 807 or British KT66 valves (Williamson recommended the latter type) and specified power supply the amplifier delivered 15 watts of output power. Further increase in output, according to Williamson, required use of four output valves; his 1947 article mentions construction of a 70-watt prototype.
The plate of the first stage and the grid of the phase splitter are connected directly. This configuration, known since 1940, was still uncommon in 1947; American designers considered it a novelty even in the early 1950s. Phase splitter, driver and output stage are capacitively coupled. Cathode bypass capacitors are absent: Williamson, like Cocking before him, tried to linearize open-loop performance of each stage, and deliberately sacrificed gain for linearity; he was also concerned with potential low-frequency instability introduced by added capacitances. The circuit in either 1947 or 1949 variant contains no electrolytic capacitors; its power supply uses a CLC π-filter with two 8 μF paper capacitors, with a further LC filter feeding the first three stages.
Derivative designs of the 1950s often deviated from Williamson's recommendations while retaining his four-stage topology. According to Peter Stinson, this alone is not sufficient to be called a Williamson amplifier. A true Williamson amplifier must meet five criteria simultaneously:
All four stages must use triodes; the output stage may use triode-connected tetrodes or pentodes;
Output stage must operate in class A;
Phase splitter must be directly coupled to the input stage;
High-quality output transformer must conform to the original Williamson specification;
Global negative feedback loop must be connected from transformer secondary to the cathode of the input triode, and be exactly 20 dB deep.
Feedback
The 20 dB (ten-to-one) feedback loop of the Williamson amplifier wraps around all four stages and the output transformer. According to Richard C. Hitchcock, "this is a severe test of design and is one of the outstanding features of the Williamson circuit." Williamson wrote that the depth of feedback can be easily increased from 20 to 30 dB, but the audible improvements of deeper feedback will be diminishingly low.
All frequency compensation components are located in the first and second stages of the circuit: their local smoothing RC filters subtly alter frequency response at infrasonic frequencies. An additional RC filter in the first stage, introduced by Williamson in the 1949 version, prevents oscillations at ultrasonic frequencies. The feedback voltage divider is connected to the transformer secondary, so feedback depth is dependent on loudspeaker impedance, and setting it at precisely 20 dB requires altering the divider ratio. The voltage divider is purely resistive, with no capacitive or inductive frequency compensation components. According to Williamson, a capacitor shunting the upper leg of the divider is only necessary for inferior-quality transformers; if the transformer matches the requirements set by Williamson, the capacitor is useless.
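A small Python sketch of the arithmetic behind the 20 dB loop (the open-loop gain of 200 is an assumed, illustrative figure, not a value from Williamson's articles):

```python
import math

def feedback_depth_db(open_loop_gain, beta):
    """Gain reduction produced by a negative feedback fraction beta."""
    return 20.0 * math.log10(1.0 + open_loop_gain * beta)

def beta_for_depth(open_loop_gain, depth_db=20.0):
    """Feedback fraction needed for a given depth: 1 + A*beta = 10**(depth_db/20)."""
    return (10.0 ** (depth_db / 20.0) - 1.0) / open_loop_gain

# If the four stages plus output transformer gave an open-loop gain of 200 from
# the input grid to the loudspeaker terminals, a 20 dB (ten-to-one) loop would
# need beta = 0.045, i.e. the divider must return 4.5% of the output voltage to
# the input cathode. Because the divider hangs on the secondary, changing the
# loudspeaker tap changes beta, so the ratio must be re-trimmed for each load.
print(beta_for_depth(200.0))            # 0.045
print(feedback_depth_db(200.0, 0.045))  # ~20.0 dB
```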
Transformer
Williamson was confident that the output transformer is the most critical component in any valve amplifier. Even before global feedback is applied, the transformer is responsible for at least four types of distortion. Their causes cannot be addressed simultaneously, and the designer must make a compromise between conflicting requirements. Global feedback partially suppresses distortion, but also tightens the bandwidth requirements placed on the transformer.
Stability theory predicted that an amplifier built to Williamson's specifications could only be stable if the bandwidth of its output transformer extended at least from 2.5 Hz to 160,000 Hz. This was impractically wide for an audio amplifier, requiring an exceptionally large, complex and expensive transformer. Williamson, seeking a working solution, had to reduce the phase margin to a bare minimum; even then, the required bandwidth was at least 3.3 Hz to 60,000 Hz. Such a transformer, driven by a pair of triode-connected KT66s, had to have a primary winding inductance of at least 100 H and a leakage inductance of no more than 33 mH. These were extremely demanding specifications for the period, far exceeding anything available on the consumer market. The Williamson transformers had to be heavier, larger, more complex and more expensive than typical audio transformers, and yet they could only guarantee minimally acceptable stability. A wider phase margin, wrote Williamson, was highly desirable but required absolutely impractical values of primary inductance.
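The inductance figures above can be related to the bandwidth targets with the usual first-order corner-frequency formula; the following Python sketch does so using assumed effective circuit resistances (roughly 2 kΩ seen by the primary at low frequencies and roughly 12 kΩ of total plate-to-plate resistance at high frequencies — illustrative values, not Williamson's):

```python
import math

def corner_hz(resistance_ohms, inductance_henries):
    """First-order RL corner frequency: f = R / (2 * pi * L)."""
    return resistance_ohms / (2.0 * math.pi * inductance_henries)

f_low = corner_hz(2_000.0, 100.0)    # primary inductance >= 100 H  -> about 3.2 Hz
f_high = corner_hz(12_000.0, 0.033)  # leakage inductance <= 33 mH  -> about 58 kHz
print(round(f_low, 1), round(f_high / 1000.0), "kHz")
```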
Overload behaviour
Valve amplifiers with capacitive coupling between the driver stage and the output stage do not clip in the same manner as transistor amplifiers (which clamp the output voltage to one of the supply rails). Instead, they choke when large signal swings intermittently attempt to bias the grids of the output valves above zero. Positively-biased grids begin conducting, but the coupling capacitors cannot deliver the required current; grid voltages do not reach their target values and the output waveform flattens.
Feedback attempts to overcome choking by increasing the driver voltage swing, but fails because the coupling capacitors cannot physically pass direct current. The resulting distortion pattern, as Williamson demonstrated with published oscillograms and Lissajous curves, is "of the desirable type", i.e. with an abrupt onset of distortion at the extremes of otherwise highly linear response curves.
Stability problem
The first attempts to build the Williamson amplifier revealed its tendency to oscillate due to very narrow phase margin. Astor and Langford-Smith, who gave the Williamson excellent ratings, reported that "for fairly large outputs at low frequencies a high frequency oscillation about 60 kC/s [kHz] would commence and be accompanied by a pulsed output of some other frequency". The Australians, armed with first-class test equipment, suppressed the 60 kHz oscillation with small capacitors on screen grids, but could not identify and suppress the cause of "some other" oscillations.
Later, technicians of the United States Naval Research Laboratory examined seven different commercially available Williamson amplifiers, and found that all of them oscillated at infrasonic frequencies of 2–3 Hz. Replacement of output transformers affected stability only at audio and ultrasonic frequencies. The best transformers displayed perfectly flat frequency response from 10 to 100,000 Hz, but were also prone to infrasonic "breathing". The worst transformers displayed prominent ultrasonic resonances that, however, did not cause sustained oscillations. Some rang at relatively low frequencies of 30 to 50 kHz, others extended into the 500–700 kHz range.
Custom-built Williamson transformers were imperfect, but the general-purpose, off-the-shelf transformers used by amateurs were far worse. Their resonances could only be tamed by narrowing the amplifier's bandwidth. The extent of the stability problem in the DIY community remains unknown: the editors of Wireless World were flooded with readers' letters, but preferred to redirect them to Williamson. What is known is that the inventor was compelled to revise and improve the design; he took a leave from his job at Ferranti and presented the second version of the Williamson in 1949. Williamson could not fix the fundamental stability problem; the "New version" was only barely stable. An independent analysis published in December 1950 proved that the revised Williamson amplifier remained prone to both infrasonic and ultrasonic oscillations.
According to the analysis, the infrasonic open-loop response of the Williamson amplifier is shaped by three high-pass filters: two interstage RC filters, each with a cutoff frequency of 6 Hz, and the output stage RL filter, formed by the valves' output impedances and the transformer's primary inductance. At zero input signal, the nonlinear RL filter has a cutoff frequency of 3 Hz. This combination of cutoff frequencies, wrapped inside a 20–30 dB feedback loop, is unstable. Williamson tried to suppress the oscillation with a compensation network, also serving as a smoothing filter. The transformer's nonlinearity also improved stability: at high signal currents the effective inductance of the primary increased, lowering the cutoff frequency and raising the phase margin. The simplest solution was to spread apart the cutoff frequencies of the RC filters, provided that the output transformer conformed to the Williamson specification. For example, the 1952 Ultralinear Williamson by David Hafler and Herbert Keroes had these frequencies set at 1.3 and 6 Hz.
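A minimal Python model of this infrasonic stability argument (a simplification: three ideal first-order high-pass sections inside a 20 dB loop, with the cutoff frequencies taken from the analysis above):

```python
import math

def loop_response(f, cutoffs, midband_loop_gain=9.0):
    """Magnitude and phase lead (degrees) of the low-frequency loop transmission:
    the midband loop gain A*beta times one first-order high-pass section per cutoff."""
    mag, phase = midband_loop_gain, 0.0
    for fc in cutoffs:
        ratio = f / fc
        mag *= ratio / math.sqrt(1.0 + ratio * ratio)
        phase += math.degrees(math.atan(fc / f))   # each section leads by up to 90 degrees
    return mag, phase

def gain_at_180_degrees(cutoffs, midband_loop_gain=9.0):
    """Sweep downward in frequency to the 180-degree phase crossing and return
    (frequency, loop-gain magnitude); a magnitude of 1.0 or more means the
    feedback has turned positive and the amplifier will motorboat."""
    f = 20.0
    while f > 0.05:
        mag, phase = loop_response(f, cutoffs, midband_loop_gain)
        if phase >= 180.0:
            return f, mag
        f *= 0.995
    return None

# Original values: two 6 Hz RC couplings plus the ~3 Hz output-stage corner.
# The loop gain at the 180-degree point comes out at about 1.0 -- right on the
# edge of infrasonic oscillation.
print(gain_at_180_degrees([6.0, 6.0, 3.0]))
# Spreading the RC cutoffs apart (1.3 Hz and 6 Hz, as Hafler and Keroes did)
# moves the crossing lower and leaves a usable gain margin.
print(gain_at_180_degrees([1.3, 6.0, 3.0]))
```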
A precise analysis at ultrasonic frequencies is impossible due to the asymmetry of the phase splitter stage and the unknown parasitics and nonlinearities of the output stage. Depending on the chosen analysis model, the open-loop response can be roughly approximated by a combination of either four or five low-pass filters. Different authors used different approaches and estimated somewhat different cutoff points for these filters, but in each case at least three of the four or five cutoff frequencies were dangerously close to each other, a certain sign of instability. Williamson, again, addressed the problem with an RC compensation network, but even then the phase margin remained dangerously low. DIYers had to tackle oscillations themselves: some added shunt capacitors to the screen grids, others tweaked layout and wiring, or deliberately narrowed the amplifier's bandwidth, negating the benefits of the original circuit.
Component problem
The Williamson amplifier was very sensitive to the quality and parameters of passive components and valves. Carbon and composition-type resistors generated excessive noise and caused harmonic distortion; American valves used as substitutes for the British types specified by Williamson could not match their performance. Williamson warned that the KT66 has no direct substitutes and should be preferred over any alternatives.
Amateurs who copied the Williamson amplifier were unable to identify and fix its critical weak points. An amateur armed with an analogue multimeter could "see" infrasonic oscillations by watching the instrument needle, but fixing high-frequency issues required an oscilloscope with a bandwidth of at least 1 or 2 MHz. In the 1950s the bandwidths of many commercial oscilloscopes were too narrow for the task, and even those models were too expensive for the DIYers.
Articles by professional engineers dealing with analysis and fine tuning of the Williamson amplifier were published relatively late - in 1952, 1957 and 1961 - when the original DIY enthusiasm had already faded. Martin Kiebert, who built professional-grade Williamson amplifiers for his laboratory at Bendix Corporation, identified five sources of distortion caused by inferior components other than the transformer:
Excessive noise and electromagnetic interference caused by noisy carbon or composition-type resistors and incorrect layout of the first stage. Replacement of resistors specified by Williamson with wirewound resistors could improve signal-to-noise ratio by . Replacement of 6SN7 with low-noise 12AY7 could gain another ;
Frequency and harmonic distortion caused by asymmetry of passive components between the two sides of a push-pull circuit. Typical components of the 1950s had 20% tolerances, which was unacceptably high for the Williamson;
The 6SN7 driver stage was often unable to properly swing the KT66 grids, causing excessive distortion. According to Kiebert, the American 5687 dual triode was clearly superior. According to Talbot Wright, the 6SN7 was not at fault - distortion was caused by incorrectly set standing current, and could be improved by a simple increase in bias voltage;
Distortion in the feedback voltage divider. This critical function required low-distortion wirewound resistors;
Distortion was clearly influenced by the choice of output valves; however, Kiebert could not identify any specific rules.
Kiebert rated the design positively but warned readers that following Williamson's instructions was possible only in a laboratory environment. The amplifier reveals its potential only with expensive, properly matched components that were out of reach of the average amateur. Even a perfectly built and tested Williamson amplifier would sooner or later need valve replacement, which would very likely cause an unexpected rise in distortion.
Variants and derivatives
After 1950 the industry produced numerous derivatives of the Williamson amplifier, often deviating significantly from the principles outlined by its creator. In 1950 Herbert Keroes shunted the common cathode resistor of his 807 amplifier with a large electrolytic capacitor which, according to Keroes, significantly reduced distortion at high output power. Contrary to the recommendations of Cocking and Williamson, Keroes and his partner David Hafler used cathode shunt capacitors in most of their designs; by 1956 this approach had become the de facto industry standard. In the same year Hafler used fixed bias in his EL34 Williamson. Later, fixed bias became a staple of Soviet and Russian Williamson-like designs that employed exotic output valves like the 6C4C directly-heated triode, the GU-50 generator pentode or the 6P45S horizontal deflection tetrode.
Throughout the 1950s, as prices of capacitors decreased, designers steadily increased their values. The original Williamson amplifier used paper capacitors; by 1952 Kiebert used electrolytics; the 1955 reference design by Keroes used at least bypass capacitors; the 1961 budget amplifier by Wright employed a total of . Designers of the commercial Bell 2200 amplifier (1953) replaced direct coupling of the first two stages with capacitive coupling; the Stromberg-Carlson AR-425 (also 1953) used a tetrode-mode output stage in an otherwise familiar Williamson topology. Both the Bell and Stromberg-Carlson modifications further worsened stability, and required additional frequency compensation. Designers of the Bogen DB20 (1953) went even further, and combined global and local negative feedback loops with positive feedback in the output stage.
In December 1951 Hafler and Keroes began promoting the ultralinear stage - a method of distributing the load between the anode and the screen grid of a pentode or tetrode, invented by Alan Blumlein in the 1930s. An ultralinear stage delivered 50% to 100% more output power than the same stage in triode connection, at roughly the same distortion, and cost less than a pure pentode or tetrode stage (the latter required a separate screen-grid supply, which the ultralinear stage did not need). The first Ultralinear Williamson, employing a pair of 6L6 in a Williamson-like topology, delivered ; their second model, built around more powerful 807 tetrodes, delivered . Very soon the American public acquired a taste for high-power amplification, and the industry launched the "race for Watts". By 1955 Hafler and Keroes, now working separately, were offering 60-Watt models employing pairs of 6550 tetrodes or quartets of KT66s. Thus in less than a decade, step by step, the industry abandoned the principles set by Williamson, but continued to use his name as a convenient free trademark. In the 21st century the name is even used for amplifiers without global negative feedback; the only thing they have in common with the true Williamson amplifier is the four-stage topology.
Following the success of Hafler and Keroes, American manufacturers like Eico, The Fisher, Harman/Kardon and Marantz dispensed with "obsolete" power triodes and switched to ultralinear designs. Mullard, Britain's largest valve manufacturer and provider of reference designs to the European industry, publicly supported the novelty. Williamson's former employer, the General Electric Company, followed suit and published a reference "30-Watt Williamson" design built around a pair of ultralinear-connected KT88s. The original Williamson amplifier lost the race, just like alternative designs by Peter Walker and Frank McIntosh. In September 1952 Williamson and Walker (then business partners in the development of the Quad Electrostatic Loudspeaker) agreed that the ultralinear stage was, indeed, preferable in mass production. Williamson gradually stepped aside from audio engineering. He made his living designing milling machines and flexible manufacturing systems, which later earned him election to the Royal Society, and never considered audio design a serious occupation for himself.
In 1956 most production amplifiers in North America followed the Ultralinear Williamson template, but within a few years it, too, was retired. The new three-stage reference design combined the phase splitter and driver functions in one valve, and thus cost proportionally less than four-stage amplifiers. Hafler's Dynaco Stereo 70, which followed this topology, became the most produced valve amplifier in history. The North American consumer market was flooded with millions of similar, almost identical amplifiers and receivers claiming 25 to 20W per channel, as well as clones of less powerful British designs like the Mullard 5-10. Advertisements claimed that these models performed as well as the original Williamson, with higher output power and with guaranteed stability. The customers could not verify these claims, and had to rely on listening tests, hearsay and expert advice. The problem was partially addressed by the concept of subjective listening, advanced by Hafler and Keroes back in 1951: "Excellent measurements are a necessary but not a sufficient condition for the quality of sound. The listening test is one of most importance... the most stringent test of all". By the end of the 1960s the subjectivist approach had been adopted by audiophiles and marketing people, who eagerly forgot about the objective principles devised by Williamson in the 1940s.
Objectively, many deep-feedback valve designs of the 1950s matched or exceeded the 0.1% distortion rating of the Williamson amplifier, but none could significantly improve on this figure. Williamson had found that valve amplifier performance was limited mostly by the output transformer. Transistor amplifiers did not have this limitation, and yet it took around 15 years to bring their performance to the level attained by Williamson in 1947.
Notes
References
Sources
; also reprinted as
A collection of articles from the late 1940s and early 1950s, including:
Vacuum tubes
Valve amplifiers
1947 in technology
1947 works
1947 in the United Kingdom | Williamson amplifier | [
"Physics"
] | 6,908 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
1,019,659 | https://en.wikipedia.org/wiki/Aleksander%20Jab%C5%82o%C5%84ski | Aleksander Jabłoński (born 26 February 1898 in Woskresenówka, in Imperial Russia; died 9 September 1980 in Skierniewice, Poland) was a Polish physicist and member of the Polish Academy of Sciences. His research was in molecular spectroscopy and photophysics.
Life and career
He was born on 26 February 1898 in Woskresenówka near Kharkiv in Imperial Russia. He attended a gymnasium (high school) in Kharkiv as well as a music school, where he learned to play the violin under the supervision of Konstanty Gorski. In 1916, he started to study physics at the University of Kharkiv.
During World War I he served in the Polish I Corps in Russia. After the war he settled in Warsaw in 1918. In 1919–1920 he fought for Poland against aggression by Soviet Russia (and was consequently decorated with the Polish Cross of Valour).
Jabłoński initially studied the violin at Warsaw Conservatory, under the virtuoso Stanisław Barcewicz, but later switched to science.
He received a Ph.D. from the University of Warsaw in 1930, writing a thesis On the influence of the change of the wavelength of excitation light on the fluorescence spectra. He then went to Friedrich-Wilhelms-Universität in Berlin, Germany for two years (1930–31) as a fellow of the Rockefeller Foundation. He worked with Peter Pringsheim at the FWU and later with Otto Stern in Hamburg. In 1934 Jabłoński returned to Poland to receive habilitation from the University of Warsaw. His thesis was On the influence of intermolecular interactions on the absorption and emission of light, the subject to which he would devote the rest of his life. He served as president of the Polish Physical Society between 1957 and 1961.
Jabłoński was a pioneer of molecular photophysics, creating the concept of the "luminescent centre" and his own theories of concentrational quenching and depolarization of photoluminescence. He also worked on pressure broadening of emission spectral lines and was the first to recognize the analogy between pressure broadening and molecular spectra. This led to the development of the quantum-mechanical theory of pressure broadening.
Fluorescence is illustrated schematically with the classical Jablonski diagram, first proposed by Jabłoński in 1933 to describe absorption and emission of light.
In 1946, he settled in Toruń where he was appointed Head of the Faculty of Physics at the Nicolaus Copernicus University.
Awards and honours
Cross of Valour (1920)
Fellow of the Rockefeller Foundation (1930–31)
Golden Cross of Merit (1951)
Marian Smoluchowski Medal (1968)
Honorary degree of the University of Windsor (1973)
Honorary degree of the Nicolaus Copernicus University in Toruń (1973)
Honorary degree of the University of Gdańsk (1975)
References
Complete list of papers published by Professor Aleksander Jablonski
A short biography of Aleksander Jabłoński
Kompletna lista prac Aleksandra Jabłońskiego
Aleksander Jabłoński fulltext articles in Kujawsko-Pomorska Digital Library
Notes
1898 births
1980 deaths
University of Warsaw alumni
Humboldt University of Berlin alumni
Academic staff of Nicolaus Copernicus University in Toruń
20th-century Polish physicists
Spectroscopists | Aleksander Jabłoński | [
"Physics",
"Chemistry"
] | 701 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
1,020,980 | https://en.wikipedia.org/wiki/Starling%20equation | The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic pressure (oncotic pressure) between plasma inside microvessels and interstitial fluid outside them. The Starling equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended.
Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma.
A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small.
Discontinuous capillaries as found in sinusoidal tissues of bone marrow, liver and spleen have little or no filter function.
The rate at which fluid is filtered across vascular endothelium (transendothelial filtration) is determined by the sum of two outward forces, capillary pressure (Pc) and interstitial protein osmotic pressure (πi), and two absorptive forces, plasma protein osmotic pressure (πp) and interstitial pressure (Pi). The Starling equation describes these forces in mathematical terms. It is one of the Kedem–Katchalsky equations, which bring non-steady-state thermodynamics to the theory of osmotic pressure across membranes that are at least partly permeable to the solute responsible for the osmotic pressure difference. The second Kedem–Katchalsky equation explains the transendothelial transport of solutes, Js.
The equation
The classic Starling equation reads as follows:

  Jv = Lp S [ (Pc − Pi) − σ(πp − πi) ]

where:
Jv is the transendothelial solvent filtration volume per second (SI units of m³·s⁻¹).
[(Pc − Pi) − σ(πp − πi)] is the net driving force (SI units of Pa = kg·m⁻¹·s⁻², often expressed as mmHg),
Pc is the capillary hydrostatic pressure
Pi is the interstitial hydrostatic pressure
πp is the plasma protein oncotic pressure
πi is the interstitial oncotic pressure
Lp is the hydraulic conductivity of the membrane (SI units of m²·s·kg⁻¹, equivalent to m·s⁻¹·mmHg⁻¹)
S is the surface area for filtration (SI units of m²)
the product Lp·S is defined as the filtration coefficient (SI units of m⁴·s·kg⁻¹, or equivalently m³·s⁻¹·mmHg⁻¹)
σ is Staverman's reflection coefficient (dimensionless)
By convention, outward force is defined as positive, and inward force is defined as negative. If Jv is positive, solvent is leaving the capillary (filtration). If negative, solvent is entering the capillary (absorption).
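For illustration, here is a minimal sketch (not from the source) of evaluating the classic equation numerically; the filtration coefficient, pressures and reflection coefficient below are assumed placeholder values, not measured data:

```python
# Minimal sketch of evaluating the classic Starling equation
# Jv = Kf * [(Pc - Pi) - sigma * (pi_p - pi_i)], with Kf = Lp * S.
# All numeric values are illustrative assumptions, not reference data.
def starling_jv(Kf, Pc, Pi, pi_p, pi_i, sigma):
    """Return Jv in the units of Kf x mmHg (e.g. ml/min if Kf is in ml/min/mmHg)."""
    net_driving_force = (Pc - Pi) - sigma * (pi_p - pi_i)  # mmHg, outward positive
    return Kf * net_driving_force

jv = starling_jv(Kf=0.01, Pc=32, Pi=-2, pi_p=28, pi_i=3, sigma=0.9)
print(f"Jv = {jv:+.2f} ml/min (positive means filtration out of the capillary)")
```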
Applying the classic Starling equation, it had long been taught that continuous capillaries filter out fluid in their arteriolar section and reabsorb most of it in their venular section, as shown by the diagram.
However, empirical evidence shows that, in most tissues, the flux of the intraluminal fluid of capillaries is continuous and, primarily, effluent. Efflux occurs along the whole length of a capillary. Fluid filtered to the space outside a capillary is mostly returned to the circulation via lymph nodes and the thoracic duct.
A mechanism for this phenomenon is the Michel–Weinbaum model, named in honour of two scientists who, independently, described the filtration function of the glycocalyx. Briefly, the colloid osmotic pressure πi of the interstitial fluid has been found to have no effect on Jv, and the colloid osmotic pressure difference that opposes filtration is now known to be πp minus the subglycocalyx oncotic pressure (πg), which is close to zero while there is adequate filtration to flush interstitial proteins out of the interendothelial cleft. Consequently, Jv is much less than previously calculated, and the unopposed diffusion of interstitial proteins into the subglycocalyx space if and when filtration falls wipes out the colloid osmotic pressure difference necessary for reabsorption of fluid into the capillary.
The revised Starling equation is compatible with the steady-state Starling principle:

  Jv = Lp S [ (Pc − Pi) − σ(πp − πg) ]

where:
Jv is the transendothelial solvent filtration volume per second.
[(Pc − Pi) − σ(πp − πg)] is the net driving force,
Pc is the capillary hydrostatic pressure
Pi is the interstitial hydrostatic pressure
πp is the plasma protein oncotic pressure
πg is the subglycocalyx oncotic pressure
Lp is the hydraulic conductivity of the membrane
S is the surface area for filtration
σ is Staverman's reflection coefficient
Pressures are often measured in millimetres of mercury (mmHg), and the filtration coefficient in millilitres per minute per millimetre of mercury (ml·min⁻¹·mmHg⁻¹).
Filtration coefficient
In some texts the product of hydraulic conductivity and surface area is called the filtration coefficient Kfc.
Reflection coefficient
Staverman's reflection coefficient, σ, is a unitless constant that is specific to the permeability of a membrane to a given solute.
The Starling equation, written without σ, describes the flow of a solvent across a membrane that is impermeable to the solutes contained within the solution.
σn corrects for the partial permeability of a semipermeable membrane to a solute n.
Where σ is close to 1, the plasma membrane is less permeable to the denoted species (for example, larger molecules such as albumin and other plasma proteins), which flow across the endothelial lining only slowly, from higher to lower concentrations, while water and smaller solutes pass through the glycocalyx filter into the extravascular space.
Glomerular capillaries have a reflection coefficient close to 1 as normally no protein crosses into the glomerular filtrate.
In contrast, hepatic sinusoids have a reflection coefficient close to zero, as they are fully permeable to protein. Hepatic interstitial fluid within the space of Disse has the same colloid osmotic pressure as plasma, and so hepatocyte synthesis of albumin can be regulated. Albumin and other proteins in the interstitial spaces return to the circulation via lymph.
Approximated values
Following are typically quoted values for the variables in the classic Starling equation:
It is reasoned that some albumin escapes from the capillaries and enters the interstitial fluid where it would produce a flow of water equivalent to that produced by a hydrostatic pressure of +3 mmHg. Thus, the difference in protein concentration would produce a flow of fluid into the vessel at the venous end equivalent to 28 − 3 = 25 mmHg of hydrostatic pressure. The total oncotic pressure present at the venous end could be considered as +25 mmHg.
At the beginning (arteriolar end) of a capillary, there is a net driving force outwards from the capillary of +9 mmHg. At the end (venular end), on the other hand, there is a net driving force of −8 mmHg.
Assuming that the net driving force declines linearly, there is a small mean net driving force outwards from the capillary as a whole, which means that more fluid exits a capillary than re-enters it. The lymphatic system drains this excess.
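A small worked check of these figures (a sketch, using only the end-point values quoted above and assuming the linear decline holds):

```python
# Worked check of the figures quoted above, assuming the net driving force
# declines linearly along the capillary (values in mmHg, outward positive).
ndf_arteriolar = +9
ndf_venular = -8
mean_ndf = (ndf_arteriolar + ndf_venular) / 2
print(f"mean net driving force: {mean_ndf:+.1f} mmHg")
# The small positive mean implies slightly more fluid leaves the capillary than
# re-enters it; the excess is returned to the circulation by the lymphatics.
```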
J. Rodney Levick argues in his textbook that the interstitial force is often underestimated, and measurements used to populate the revised Starling equation show the absorbing forces to be consistently less than capillary or venular pressures.
Specific organs
Kidneys
Glomerular capillaries have a continuous glycocalyx layer in health, and the total transendothelial filtration rate of solvent (Jv) to the renal tubules is normally around 125 ml/min (about 180 litres/day). Glomerular capillary Jv is more familiarly known as the glomerular filtration rate (GFR). In the rest of the body's capillaries, Jv is typically 5 ml/min (around 8 litres/day), and the fluid is returned to the circulation via afferent and efferent lymphatics.
Lungs
The Starling equation can describe the movement of fluid from pulmonary capillaries to the alveolar air space.
Clinical significance
Woodcock and Woodcock showed in 2012 that the revised Starling equation (steady-state Starling principle) provides scientific explanations for clinical observations concerning intravenous fluid therapy. Traditional teaching of both filtration and absorption of fluid occurring in a single capillary has been superseded by the concept of a vital circulation of extracellular fluid running parallel to the circulation of blood. New approaches to the treatment of oedema (tissue swelling) are suggested.
History
The Starling equation is named for the British physiologist Ernest Starling, who is also recognised for the Frank–Starling law of the heart. Starling can be credited with identifying that the "absorption of isotonic salt solutions (from the extravascular space) by the blood vessels is determined by this osmotic pressure of the serum proteins" in 1896.
See also
Renal function
References
External links
Derangedphysiology.com: Starling's Principle of Transvascular Fluid Dynamics Starling's principle of transvascular fluid dynamics | Deranged Physiology
Eponymous equations of physics
Equations of fluid dynamics
Cardiovascular physiology
Mathematics in medicine | Starling equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,276 | [
"Equations of fluid dynamics",
"Equations of physics",
"Applied mathematics",
"Eponymous equations of physics",
"Mathematics in medicine",
"Fluid dynamics"
] |
1,021,118 | https://en.wikipedia.org/wiki/Fast%20fracture | In structural engineering and material science, fast fracture is a phenomenon in which a flaw (such as a crack) in a material expands quickly, and leads to catastrophic failure of the material. It proceeds in high speed and requires a relatively small amount of accumulated strain energy, making it a dangerous failure mode.
Flaw
The stress acting on a material when fast fracture occurs is less than the material's yield stress. A representative example is what happens when a fully inflated balloon is poked with a needle - that is, fast fracture of the balloon's material. The energy in the balloon comes from the compressed gas inside it and the energy stored in the rubber membrane itself. The introduction of the flaw, in this case the pin prick, leads to the explosion as the membrane fails by fast fracture. However, if the same flaw is introduced to a balloon with less energy - as in the case of a partially inflated balloon - fast fracture will not occur unless the pressure in the flawed balloon is progressively increased until it reaches a critical pressure at which fast fracture occurs.
The occurrence of fast fracture also depends on the material. For instance, it can occur in brittle materials, which have little capacity for deformation, even if the flaw involves only small defects introduced during manufacturing.
See also
Yield (engineering)
References
Mechanical failure | Fast fracture | [
"Materials_science",
"Engineering"
] | 268 | [
"Mechanical failure",
"Materials science",
"Mechanical engineering"
] |
1,021,210 | https://en.wikipedia.org/wiki/Immunohistochemistry | Immunohistochemistry is a form of immunostaining. It involves the process of selectively identifying antigens (proteins) in cells and tissue, by exploiting the principle of antibodies binding specifically to antigens in biological tissues. Albert Hewett Coons, Ernest Berliner, Norman Jones and Hugh J Creech was the first to develop immunofluorescence in 1941. This led to the later development of immunohistochemistry.
Immunohistochemical staining is widely used in the diagnosis of abnormal cells such as those found in cancerous tumors. In some cancer cells certain tumor antigens are expressed which make it possible to detect. Immunohistochemistry is also widely used in basic research, to understand the distribution and localization of biomarkers and differentially expressed proteins in different parts of a biological tissue.
Sample preparation
Immunohistochemistry can be performed on tissue that has been fixed and embedded in paraffin, but also on cryopreserved (frozen) tissue. The preparatory steps depend on how the tissue has been preserved, but the general method includes proper fixation, antigen retrieval, incubation with a primary antibody, and then incubation with a secondary antibody.
Tissue preparation and fixation
Fixation of the tissue is important to preserve the tissue and maintain cellular morphology. The fixation formula, the ratio of fixative to tissue, and the time in the fixative all affect the result. The fixative is often 10% neutral buffered formalin. A normal fixation time is 24 hours at room temperature. The ratio of fixative to tissue ranges from 1:1 to 1:20. After the tissue is fixed it can be embedded in paraffin wax.
For frozen sections, fixation is usually performed after sectioning, unless new antibodies are going to be tested. Acetone or formalin can then be used.
Sectioning
Sectioning of the tissue sample is done using a microtome. For paraffin-embedded tissue 4 μm is a normal thickness, and for frozen sections 4–6 μm. The thickness of the sections matters and is an important factor in immunohistochemistry: a 7 μm section of brain tissue may show structures that are missing from a 4 μm section, which illustrates why such methodological details must be reported and kept consistent. Paraffin-embedded sections must be deparaffinized in xylene or a good substitute, followed by alcohol, to remove all the paraffin on and around the tissue sample.
Antigen retrieval
Antigen retrieval is required to make the epitopes accessible for immunohistochemical staining in most formalin-fixed tissue sections. The epitopes are the binding sites for the antibodies used to visualize the targeted antigen, and they may be masked by fixation. Fixation of the tissue may cause formation of methylene bridges or crosslinking of amino groups, so that the epitopes are no longer available. Antigen retrieval can restore the masked antigenicity, possibly by breaking down the crosslinks caused by fixation. The most common way to perform antigen retrieval is high-temperature heating while soaking the slides in a buffer solution. This can be done in different ways, for example by using a microwave oven, an autoclave, a heating plate or a water bath. For frozen sections, antigen retrieval is generally not necessary, but for frozen sections that have been fixed in acetone or formalin, antigen retrieval can improve the immunohistochemistry signal.
Blocking
Non-specific binding of antibodies can cause background staining. Although antibodies bind to specific epitopes, they may also partially or weakly bind to sites on nonspecific proteins that are similar to the binding site on the target protein. Background staining can be reduced by incubating the tissue with normal serum isolated from the species in which the secondary antibody was produced. It is also possible to use commercially available universal blocking buffers. Other common blocking buffers include normal serum, non-fat dry milk, BSA, or gelatin. Endogenous enzyme activity may also cause background staining, but can be reduced if the tissue is treated with hydrogen peroxide.
Sample labeling
After preparing the sample, the target can be visualized by using antibodies labeled with fluorescent compounds, metals or enzymes. There are direct and indirect methods for labeling the sample.
Antibody types
The antibodies used for detection can be polyclonal or monoclonal. Polyclonal antibodies are made by using animals like guinea pig, rabbit, mouse, rat, or goat. The animal is injected with the antigen of interest and trigger an immune response. The antibodies can be isolated from the animal's whole serum. Polyclonal antibody production will result in a mixture of different antibodies and will recognize multiple epitopes. Monoclonal antibodies are made by injecting the animal with the antigen of interest and then isolating an antibody-producing B cell, typically from the spleen. The antibody producing cell is then fused with a cancer cell line. This causes the antibodies to show specificity for a single epitope.
For immunohistochemical detection strategies, antibodies are classified as primary or secondary reagents. Primary antibodies are raised against an antigen of interest and are typically unconjugated (unlabeled). Secondary antibodies are raised against immunoglobulins of the primary antibody species. The secondary antibody is usually conjugated to a linker molecule, such as biotin, that then recruits reporter molecules, or the secondary antibody itself is directly bound to the reporter molecule.
Detection methods
The direct method is a one-step staining method and involves a labeled antibody reacting directly with the antigen in tissue sections. While this technique utilizes only one antibody and therefore is simple and rapid, the sensitivity is lower due to little signal amplification, in contrast to indirect approaches.
The indirect method involves an unlabeled primary antibody that binds to the target antigen in the tissue. Then a secondary antibody, which binds with the primary antibody is added as a second layer. As mentioned, the secondary antibody must be raised against the antibody IgG of the animal species in which the primary antibody has been raised. This method is more sensitive than direct detection strategies because of signal amplification due to the binding of several secondary antibodies to each primary antibody.
The indirect method, aside from its greater sensitivity, also has the advantage that only a relatively small number of standard conjugated (labeled) secondary antibodies needs to be generated. For example, a labeled secondary antibody raised against rabbit IgG, is useful with any primary antibody raised in rabbit. This is particularly useful when a researcher is labeling more than one primary antibody, whether due to polyclonal selection producing an array of primary antibodies for a singular antigen or when there is interest in multiple antigens. With the direct method, it would be necessary to label each primary antibody for every antigen of interest.
Reporter molecules
Reporter molecules vary based on the nature of the detection method, the most common being chromogenic and fluorescence detection. In chromogenic immunohistochemistry an antibody is conjugated to an enzyme, such as alkaline phosphatase or horseradish peroxidase, that can catalyze a colour-producing reaction in the presence of a chromogenic substrate like diaminobenzidine. The coloured product can be analyzed with an ordinary light microscope. In immunofluorescence the antibody is tagged with a fluorophore, such as fluorescein isothiocyanate, tetramethylrhodamine isothiocyanate, aminomethylcoumarin acetate or Cyanine5. Synthetic fluorochromes such as the Alexa Fluor dyes are also commonly used. The fluorochromes can be visualized by a fluorescence or confocal microscope.
For chromogenic and fluorescent detection methods, densitometric analysis of the signal can provide semi- and fully quantitative data, respectively, to correlate the level of reporter signal to the level of protein expression or localization.
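As an illustration of such densitometric analysis, the following is a minimal sketch (not part of any standard protocol) of semi-quantitative DAB densitometry using colour deconvolution in scikit-image; the image file name and the 0.05 threshold are assumptions chosen for the example:

```python
# Minimal sketch of semi-quantitative DAB (chromogenic IHC) densitometry.
# The image path and the 0.05 threshold are illustrative assumptions.
from skimage import io
from skimage.color import rgb2hed
from skimage.util import img_as_float

rgb = img_as_float(io.imread("ihc_section.png"))  # hypothetical RGB photomicrograph
hed = rgb2hed(rgb)                                # unmix Haematoxylin / Eosin / DAB stains
dab = hed[:, :, 2]                                # DAB (brown chromogen) channel

mask = dab > 0.05                                 # assumed cutoff between signal and background
print("DAB-positive area fraction:", mask.mean())
print("Mean DAB intensity in positive pixels:", dab[mask].mean() if mask.any() else 0.0)
```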
Counterstains
After immunohistochemical staining of the target antigen, another stain is often applied. The counterstain provide contrast that helps the primary stain stand out and makes it easier to examine the tissue morphology. It also helps with orientation and visualization of the tissue section. Hematoxylin is commonly used.
Troubleshooting
In immunohistochemical techniques, several steps prior to the final staining of the tissue can cause a variety of problems, including strong background staining, weak target-antigen staining and the presence of artifacts. It is important that antibody quality and the immunohistochemistry techniques are optimized. Endogenous biotin, reporter enzymes or primary/secondary antibody cross-reactivity are common causes of strong background staining. Weak or absent staining may be caused by improper fixation of the tissue or by low antigen levels. These aspects of immunohistochemistry tissue preparation and antibody staining must be systematically addressed to identify and overcome staining issues.
Methods to eliminate background staining include dilution of the primary or secondary antibodies, changing the time or temperature of incubation, and using a different detection system or different primary antibody. Quality control should as a minimum include a tissue known to express the antigen as a positive control and negative controls of tissue known not to express the antigen, as well as the test tissue probed in the same way with omission of the primary antibody (or better, absorption of the primary antibody).
Diagnostic immunohistochemistry markers
Immunohistochemistry is an excellent detection technique and has the tremendous advantage of being able to show exactly where a given protein is located within the tissue examined, making it an effective way to study tissues. This has made it a widely used technique in neuroscience, enabling researchers to examine protein expression within specific brain structures. Its major disadvantage is that, unlike immunoblotting techniques where staining is checked against a molecular weight ladder, it is impossible to show in immunohistochemistry that the staining corresponds to the protein of interest. For this reason, primary antibodies must be well-validated in a Western blot or similar procedure. The technique is even more widely used in diagnostic surgical pathology for immunophenotyping tumors (e.g. immunostaining for e-cadherin to differentiate between ductal carcinoma in situ (stains positive) and lobular carcinoma in situ (does not stain positive)). More recently, immunohistochemical techniques have been useful in the differential diagnosis of multiple forms of salivary gland, head, and neck carcinomas.
The diversity of immunohistochemistry markers used in diagnostic surgical pathology is substantial. Many clinical laboratories in tertiary hospitals will have menus of over 200 antibodies used as diagnostic, prognostic and predictive biomarkers. Examples of some commonly used markers include:
BrdU: used to identify replicating cells. Used to identify tumors as well as in neuroscience research.
Cytokeratins: used for identification of carcinomas but may also be expressed in some sarcomas.
CD15 and CD30: used for Hodgkin's disease.
Alpha fetoprotein: for yolk sac tumors and hepatocellular carcinoma.
CD117 (KIT): for gastrointestinal stromal tumors (GIST) and mast cell tumors.
CD10 (CALLA): for renal cell carcinoma and acute lymphoblastic leukemia.
Prostate specific antigen (PSA): for prostate cancer.
Estrogen and progesterone receptor (ER & PR) staining is used diagnostically (breast and gynecological tumors), prognostically in breast cancer, and predictively of response to therapy (estrogen receptor).
Identification of B-cell lymphomas using CD20.
Identification of T-cell lymphomas using CD3.
PIN-4 cocktail, targeting p63, CK-5, CK-14 and AMACR (latter also known as P504S), and used to distinguish prostate adenocarcinoma from benign glands.
Directing therapy
A variety of molecular pathways are altered in cancer and some of the alterations can be targeted in cancer therapy. Immunohistochemistry can be used to assess which tumors are likely to respond to therapy, by detecting the presence or elevated levels of the molecular target.
Chemical inhibitors
Tumor biology allows for a number of potential intracellular targets. Many tumors are hormone dependent. The presence of hormone receptors can be used to determine if a tumor is potentially responsive to antihormonal therapy. One of the first therapies was the antiestrogen, tamoxifen, used to treat breast cancer. Such hormone receptors can be detected by immunohistochemistry.
Imatinib, an intracellular tyrosine kinase inhibitor, was developed to treat chronic myelogenous leukemia, a disease characterized by the formation of a specific abnormal tyrosine kinase. Imatinib has proven effective in tumors that express other tyrosine kinases, most notably KIT. Most gastrointestinal stromal tumors express KIT, which can be detected by immunohistochemistry.
Monoclonal antibodies
Many proteins shown to be highly upregulated in pathological states by immunohistochemistry are potential targets for therapies utilising monoclonal antibodies. Monoclonal antibodies, due to their size, are utilized against cell surface targets. Among the overexpressed targets are members of the EGFR family, transmembrane proteins with an extracellular receptor domain regulating an intracellular tyrosine kinase. Of these, HER2/neu (also known as Erb-B2) was the first to be developed. The molecule is highly expressed in a variety of cancer cell types, most notably breast cancer. As such, antibodies against HER2/neu have been FDA approved for clinical treatment of cancer under the drug name Herceptin. There are commercially available immunohistochemical tests, Dako HercepTest, Leica Biosystems Oracle and Ventana Pathway.
Similarly, epidermal growth factor receptor (HER-1) is overexpressed in a variety of cancers including head and neck and colon. Immunohistochemistry is used to determine patients who may benefit from therapeutic antibodies such as Erbitux (cetuximab). Commercial systems to detect epidermal growth factor receptor by immunohistochemistry include the Dako pharmDx.
Mapping protein expression
Immunohistochemistry can also be used for a more general protein profiling, provided the availability of antibodies validated for immunohistochemistry. The Human Protein Atlas displays a map of protein expression in normal human organs and tissues. The combination of immunohistochemistry and tissue microarrays provides protein expression patterns in a large number of different tissue types. Immunohistochemistry is also used for protein profiling in the most common forms of human cancer.
See also
Cutaneous conditions with immunofluorescence findings
Chromogenic in situ hybridization
Tissue cytometry, a technique that brings the concept of flow cytometry to tissue sections in situ, enabling whole-slide scanning and quantification of markers while maintaining spatial context, using machine learning and AI.
References
Further reading
External links
The Human Protein Atlas
Overview of Immunohistochemistry--describes all aspects of immunohistochemistry including sample prep, staining and troubleshooting
Immunofluorescent Staining of Paraffin-Embedded Tissue (IF-P)
IHC Tip 1: Antigen retrieval - should I do PIER or HIER?
Histochemical Staining Methods - University of Rochester Department of Pathology
Immunohistochemistry Staining Protocol
Histology
Immunologic tests
Protein methods
Anatomical pathology
Staining
Laboratory techniques
Pathology | Immunohistochemistry | [
"Chemistry",
"Biology"
] | 3,317 | [
"Biochemistry methods",
"Pathology",
"Staining",
"Protein methods",
"Protein biochemistry",
"Immunologic tests",
"Histology",
"Microbiology techniques",
"nan",
"Microscopy",
"Cell imaging"
] |
1,021,753 | https://en.wikipedia.org/wiki/Variety%20%28universal%20algebra%29 | In universal algebra, a variety of algebras or equational class is the class of all algebraic structures of a given signature satisfying a given set of identities. For example, the groups form a variety of algebras, as do the abelian groups, the rings, the monoids etc. According to Birkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras, and (direct) products. In the context of category theory, a variety of algebras, together with its homomorphisms, forms a category; these are usually called finitary algebraic categories.
A covariety is the class of all coalgebraic structures of a given signature.
Terminology
A variety of algebras should not be confused with an algebraic variety, which means a set of solutions to a system of polynomial equations. They are formally quite distinct and their theories have little in common.
The term "variety of algebras" refers to algebras in the general sense of universal algebra; there is also a more specific sense of algebra, namely as algebra over a field, i.e. a vector space equipped with a bilinear multiplication.
Definition
A signature (in this context) is a set, whose elements are called operations, each of which is assigned a natural number (0, 1, 2, ...) called its arity. Given a signature σ and a set V, whose elements are called variables, a word is a finite rooted tree in which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operation o has as many branches away from the root as the arity of o. An equational law is a pair of such words; the axiom consisting of the words v and w is written as v = w.
A theory consists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theory T, an algebra of T consists of a set A together with, for each operation o of T with arity n, a function o_A : A^n → A, such that for each axiom v = w and each assignment of elements of A to the variables in that axiom, the equation holds that is given by applying the operations to the elements of A as indicated by the trees defining v and w. The class of algebras of a given theory T is called a variety of algebras.
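As a concrete illustration (not from the source), here is a small sketch of checking an equational law on a finite algebra: addition modulo 5 as the single binary operation of the semigroup signature, tested against the associativity axiom over every assignment of variables.

```python
# Minimal sketch: checking the equational law x*(y*z) = (x*y)*z on a finite
# algebra of signature (2).  The carrier is {0,...,4}, the operation is
# addition modulo 5, so the check succeeds.
from itertools import product

carrier = range(5)

def op(x, y):
    """The single binary operation: addition modulo 5."""
    return (x + y) % 5

def satisfies_associativity(elements, op):
    """True iff op satisfies x*(y*z) = (x*y)*z for every assignment of variables."""
    return all(op(x, op(y, z)) == op(op(x, y), z)
               for x, y, z in product(elements, repeat=3))

print(satisfies_associativity(carrier, op))  # True: this algebra lies in the variety of semigroups
```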
Given two algebras of a theory T, say A and B, a homomorphism is a function f : A → B such that

  f(o_A(a_1, ..., a_n)) = o_B(f(a_1), ..., f(a_n))

for every operation o of arity n. Any theory gives a category where the objects are algebras of that theory and the morphisms are homomorphisms.
Examples
The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law: x(yz) = (xy)z.
The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively multiplication (binary), identity (nullary, a constant) and inversion (unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities: x(yz) = (xy)z, 1x = x1 = x, xx⁻¹ = x⁻¹x = 1.
The class of rings also forms a variety of algebras. The signature here is (2,2,0,0,1) (two binary operations, two constants, and one unary operation).
If we fix a specific ring R, we can consider the class of left R-modules. To express the scalar multiplication with elements from R, we need one unary operation for each element of R. If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the left R-modules do form a variety of algebras.
The fields do not form a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below).
The cancellative semigroups also do not form a variety of algebras, since the cancellation property is not an equation, it is an implication that is not equivalent to any set of equations. However, they do form a quasivariety as the implication defining the cancellation property is an example of a quasi-identity.
Birkhoff's variety theorem
Given a class of algebraic structures of the same signature, we can define the notions of homomorphism, subalgebra, and product. Garrett Birkhoff proved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products. This is a result of fundamental importance to universal algebra and known as Birkhoff's variety theorem or as the HSP theorem. H, S, and P stand, respectively, for the operations of homomorphism, subalgebra, and product.
One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving the converse—classes of algebras closed under the HSP operations must be equational—is more difficult.
Using the easy direction of Birkhoff's theorem, we can for example verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety.
Subvarieties
A subvariety of a variety of algebras V is a subclass of V that has the same signature as V and is itself a variety, i.e., is defined by a set of identities.
Notice that although every group becomes a semigroup when the identity as a constant is omitted (and/or the inverse operation is omitted), the class of groups does not form a subvariety of the variety of semigroups because the signatures are different.
Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups. The class of monoids that are groups contains (Z, +) but does not contain its subalgebra (more precisely, submonoid) (N, +).
However, the class of abelian groups is a subvariety of the variety of groups because it consists of those groups satisfying xy = yx, with no change of signature. The finitely generated abelian groups do not form a subvariety, since by Birkhoff's theorem they do not form a variety at all: an arbitrary product of finitely generated abelian groups need not be finitely generated.
Viewing a variety V and its homomorphisms as a category, a subvariety U of V is a full subcategory of V, meaning that for any objects a, b in U, the homomorphisms from a to b in U are exactly those from a to b in V.
Free objects
Suppose V is a non-trivial variety of algebras, i.e. V contains algebras with more than one element. One can show that for every set S, the variety V contains a free algebra FS on S. This means that there is an injective set map i : S → FS that satisfies the following universal property: given any algebra A in V and any map k : S → A, there exists a unique V-homomorphism f : FS → A such that f ∘ i = k.
This generalizes the notions of free group, free abelian group, free algebra, free module etc. It has the consequence that every algebra in a variety is a homomorphic image of a free algebra.
Category theory
Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category V, the forgetful functor G : V → Set has a left adjoint F : Set → V, namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category V is equivalent to the Eilenberg–Moore category Set^T for the monad T = GF. Moreover the monad T is finitary, meaning it commutes with filtered colimits.
The monad is thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg-Moore categories of finitary monads. Both these, in turn, are equivalent to categories of algebras of Lawvere theories.
Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as CABA (complete atomic Boolean algebras) and CSLat (complete semilattices) whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma algebras also has infinitary operations, but their arity is countable whence its signature is small (forms a set).
Every finitary algebraic category is a locally presentable category.
Pseudovariety of finite algebras
Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of variety of finite semigroups. This kind of variety uses only finitary products. However, it uses a more general kind of identities.
A pseudovariety is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a variety of finite algebras. For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived.
Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the variety theorem, describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups.
See also
Quasivariety
Notes
External links
Two monographs available free online:
Stanley N. Burris and H.P. Sankappanavar (1981), A Course in Universal Algebra. Springer-Verlag. . [Proof of Birkhoff's Theorem is in II§11.]
Peter Jipsen and Henry Rose (1992), Varieties of Lattices, Lecture Notes in Mathematics 1533. Springer Verlag. .
Universal algebra | Variety (universal algebra) | [
"Mathematics"
] | 2,342 | [
"Fields of abstract algebra",
"Universal algebra"
] |
4,194,183 | https://en.wikipedia.org/wiki/International%20Centre%20for%20Theoretical%20Physics | The Abdus Salam International Centre for Theoretical Physics (ICTP) is a research center for physical and mathematical sciences, located in Trieste, Friuli-Venezia Giulia, Italy.
The center operates under a tripartite agreement between the Italian Government, UNESCO, and the International Atomic Energy Agency. It is located near Miramare Park, about 10 kilometres from central Trieste, Italy. The centre was founded in 1964 by the Pakistani Nobel laureate Abdus Salam.
ICTP is part of the Trieste System, a network of national and international scientific institutes in Trieste, promoted by the Italian physicist Paolo Budinich.
Mission
Foster the growth of advanced studies and research in physical and mathematical sciences, especially in support of excellence in developing countries;
Develop high-level scientific programmes keeping in mind the needs of developing countries, and provide an international forum of scientific contact for scientists from all countries;
Conduct research at the highest international standards and maintain a conducive environment of scientific inquiry for the entire ICTP community.
Research
Research at ICTP is carried out by seven scientific sections:
High Energy, Cosmology and Astroparticle Physics
Condensed Matter and Statistical Physics
Mathematics
Earth System Physics
Science, Technology and Innovation
Quantitative Life Sciences
New Research Areas (which includes studies related to Energy and Sustainability and Computing Sciences)
The scientific community at ICTP includes staff research scientists, postdoctoral fellows and long- and short-term visitors engaged in independent or collaborative research. Throughout the year, the sections organize conferences, workshops, seminars and colloquiums in their respective fields. ICTP also has visitor programmes specifically for scientific visitors from developing countries, including programmes under federation and associateship schemes.
Postgraduate programmes
ICTP offers educational training through its pre-PhD programmes and degree programmes (conducted in collaboration with other institutes).
Pre-PhD programmes
Postgraduate diploma programmes in Condensed Matter Physics, High Energy Physics, Mathematics, Earth System Physics, and Quantitative Life Sciences for students from developing countries.
The Sandwich Training Educational Programme (STEP) for students from developing countries already enrolled in PhD programmes in the fields of physics and mathematics.
In collaboration with other institutes, ICTP offers masters and doctoral degrees in physics and mathematics.
Joint ICTP/SISSA PhD Programme in Physics and Mathematics
Joint PhD Programme in Earth Science and Fluid Mechanics
Joint Laurea Magistralis in Physics
Joint ICTP/Collegio Carlo Alberto Program in Economics
International Master, Physics of Complex Systems
Master of Advanced Studies in Medical Physics
Masters in High Performance Computing
In addition, ICTP collaborates with local laboratories, including Elettra Synchrotron Light Laboratory, to provide fellowships and laboratory opportunities.
Prizes and awards
ICTP has instituted awards to honour and encourage high-level research in the fields of physics and mathematics.
The Dirac Medal – For scientists who have made significant contributions to theoretical physics.
The ICTP Prize – For young scientists from and working in developing countries.
ICO/ICTP Gallieno Denardo Award – For significant contributions to the field of optics.
The Ramanujan Prize – For young mathematicians from developing countries.
The Walter Kohn Prize – Given jointly by ICTP and the Quantum ESPRESSO foundation, for work in quantum mechanical materials or molecular modelling, performed by a young scientist working in a developing country.
Partner institutes
One of ICTP's goals is to set up regional centres of excellence around the globe. The idea is to bring ICTP's unique blend of high-quality physics and mathematics education and high-level science meetings closer to scientists everywhere. On February 6, 2012, ICTP opened a partner institute (ICTP South American Institute for Fundamental Research) in São Paulo, Brazil. Its activities are modelled on those of the ICTP and include schools and workshops, as well as a visiting scientists programme.
On October 18, 2018, a partner institute (ICTP-EAIFR, the East African Institute for Fundamental Research), was inaugurated in Kigali, Rwanda. In November 2018, ICTP opened the International Centre for Theoretical Physics Asia-Pacific (ICTP-AP) in Beijing, China, in collaboration with the University of the Chinese Academy of Sciences.
Journal
In 2007 ICTP created the peer-reviewed open-access journal African Review of Physics, initially under the name African Physical Review.
See also
International School for Advanced Studies
University of Trieste
Joint Institute for Nuclear Research
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
References
External links
International Atomic Energy Agency
International research institutes for mathematics
International research institutes
Physics research institutes
Physics organizations
Research institutes established in 1964
Trieste
UNESCO
Abdus Salam
Research institutes in Italy
Italy and the United Nations
Theoretical physics institutes | International Centre for Theoretical Physics | [
"Physics"
] | 947 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
16,189,704 | https://en.wikipedia.org/wiki/Project%20Valkyrie | The Valkyrie is a theoretical spacecraft designed by Charles Pellegrino and Jim Powell (a physicist at Brookhaven National Laboratory). The Valkyrie is theoretically able to accelerate to 92% the speed of light and decelerate afterward, carrying a small human crew to another star system.
Design
The Valkyrie's high performance is attributable to its innovative design. Instead of a solid spacecraft with a rocket at the back, Valkyrie is built more like a cable car train, with the crew quarters, fuel tanks, radiation shielding, and other vital components being pulled between front and aft engines on long tethers. This greatly reduces the mass of the ship, because it no longer requires heavy structural members and radiation shielding. This is a considerable advantage because in a rocket every extra kilogram of payload (dry mass) will require a corresponding extra amount of propellant or fuel.
The Valkyrie would have a crew module trailing 10 kilometers behind the engine. A small 20-cm-thick tungsten shield would hang 100 meters behind the engine, to help protect the trailing crew module from its harmful radiation. The fuel tank might be placed between the crew module and the engine, to further protect it. At the trailing end of the ship would be a second engine, which the ship would use to decelerate. The forward engine and the tank holding its fuel supply might be jettisoned before deceleration, to reduce fuel consumption. The tether system requires that the elements of the ship must be moved "up" or "down" the tethers depending on flight direction.
Engines
Initially, the Valkyrie's engine would work by using small quantities of antimatter to initiate an extremely energetic fusion reaction. A magnetic coil captures the exhaust products of this reaction, expelling them at an exhaust velocity of 12–20% of the speed of light (35,000–60,000 km/s). As the spacecraft approaches 20% of the speed of light, more antimatter is fed into the engines until the drive switches over to pure matter–antimatter annihilation, which it uses to accelerate the remainder of the way to 0.92 c. Pellegrino estimates that the ship would require 100 tons of matter and antimatter to reach 0.1–0.2 c, with an undetermined excess of matter to ensure the antimatter is efficiently utilized. To reach a speed of 0.92 c and decelerate afterward, Valkyrie would require a mass ratio of 22 (or 2,200 tons of fuel for a 100-ton spacecraft).
At such high speeds, incident debris would be a major hazard. While accelerating, Valkyrie uses a device that combines the functions of a particle shield and a liquid droplet radiator. Waste heat is dumped into liquid droplets that are cast out in front of the ship. As the ship accelerates the droplets (now cool) effectively fall back into the ship, so the system is self-recycling. During deceleration, the ship will be protected by ultra-thin umbrella shields, augmented by a dust shield, possibly made by grinding up pieces of the discarded first stage.
Criticism
The chief feasibility issue of Valkyrie (or for any antimatter-beam drive) lies in its requirement of tons of antimatter fuel. Antimatter cannot be produced at an efficiency of more than 50% (that is to say, to produce one gram of antimatter requires twice as much energy as you would get from annihilating that gram with a gram of matter). Since half a kilogram of antimatter would yield 9×10^16 J if annihilated with an equal amount of matter, this quickly adds up to enormous energy requirements for its production. To produce the 50 tons of antimatter Valkyrie would require 1.8×10^22 J. This is the same amount of energy that the entire human race currently uses in about forty years.
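The quoted figures can be reproduced with a short back-of-the-envelope sketch (Python, assuming c ≈ 3×10^8 m/s and the 50% production-efficiency bound stated above; the variable names are illustrative only):

```python
# Rough check of the energy figures quoted above (SI units).
c = 3.0e8                        # speed of light, m/s (rounded)

# Annihilating 0.5 kg of antimatter with 0.5 kg of matter releases E = m*c^2
# for the full 1 kg of rest mass:
E_half_kg = 1.0 * c**2           # ~9e16 J, matching the figure in the text

# At a 50% production efficiency, making 1 kg of antimatter costs twice the
# energy released by annihilating it with 1 kg of matter (i.e. 2 * 2 kg * c^2):
cost_per_kg = 2 * (2.0 * c**2)   # J per kg of antimatter produced

antimatter_mass = 50_000         # kg (the 50 tons quoted above)
total_cost = antimatter_mass * cost_per_kg
print(f"{E_half_kg:.1e} J per 0.5 kg of antimatter annihilated")
print(f"{total_cost:.1e} J to produce 50 tons of antimatter")   # ~1.8e22 J
```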
This may be solved by creating a truly enormous power plant for the antimatter factory, probably in the form of a vast array of solar panels with a combined area of millions of square kilometers or many fusion reactors. Alternately the antimatter-fusion hybrid drive the Valkyrie uses to accelerate up to 0.2 c would require much less antimatter and, with an exhaust velocity of 30,000–60,000 km/s, still compares quite favorably with competing engines such as the inertial confinement pulse drive used by Project Daedalus or Project Orion. The Valkyrie's lightweight construction could also be applied to a wide variety of space vehicles.
By using tethers, there is no rigidity between ship elements and engines. Without active acceleration or thrust to pull and straighten the tethers, the slightest imbalance, excess force, or movement of the ship elements into different flight configurations poses a danger of collisions between ship elements and engines. Because long-term space flight at interstellar velocities causes erosion from collisions with particles, gas, dust, and micrometeorites, the tethers are literally lifelines. Changing course or turning the ship requires re-positioning or aligning every ship element and presumably consumes more fuel in doing so.
Because the liquid droplet radiators (LDR) are deployed on the opposite side of the propulsion system from the main body, the droplets and the collectors are exposed to the other half of the heat energy from the gamma radiation of the antimatter annihilation. If the total area of the collectors is larger than that of the radiation shield, the LDR would serve to cool itself rather than the shield for the ship's main components.
Trivia
A superficially similar interstellar spacecraft is featured in the movie Avatar.
See also
Project Prometheus
Project Longshot
References
External links
Valkyrie Edited Guide Entry (BBC.com)
Valkyrie at Atomic Rockets
Hypothetical spacecraft
Interstellar travel
Antimatter | Project Valkyrie | [
"Physics",
"Astronomy",
"Technology"
] | 1,185 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Antimatter",
"Hypothetical spacecraft",
"Interstellar travel",
"Matter"
] |
10,962,898 | https://en.wikipedia.org/wiki/Indium%28III%29%20selenide | Indium(III) selenide is a compound of indium and selenium. It has potential for use in photovoltaic devices and has been the subject of extensive research. The two most common phases, α and β, have a layered structure, while γ has a "defect wurtzite structure." In all, five polymorphs are known: α, β, γ, δ, κ. The α-β phase transition is accompanied by a change in electrical conductivity. The band gap of γ-In2Se3 is approximately 1.9 eV.
Preparation
The method of production influences the polymorph generated. For example, thin films of pure γ-In2Se3 have been produced from trimethylindium (InMe3) and hydrogen selenide via MOCVD techniques.
A conventional route entails heating the elements in a sealed tube: 2 In + 3 Se → In2Se3
See also
Gallium(III) selenide
Indium chalcogenides
Nanoparticle
General references
WebElements
Footnotes
External links
Indium Selenide Nanoparticles Used In Solar Energy Conversion.
Indium compounds
Selenides
Solar cells
Semiconductor materials | Indium(III) selenide | [
"Chemistry"
] | 241 | [
"Semiconductor materials"
] |
10,964,172 | https://en.wikipedia.org/wiki/Flush%20deck | In naval architecture, a flush deck is a ship deck that is continuous from stem to stern.
History
Flush decks have been in use since the times of the ancient Egyptians. Greco-Roman triremes often had a flush deck but may also have had fore and aft castle decks. Flush decks were also common on medieval and Renaissance galleys, though some also featured fore and aft castle decks. The medieval brigantine and the later brig and snow also featured flush decks.
Two different meanings of "flush"
"Flush deck" with "flush" in its generic meaning of "even or level; forming an unbroken plane", is sometimes applied to vessels, as in describing yachts lacking a raised pilothouse for instance. "Flush deck aircraft carrier" uses "flush deck" in this generic sense.
"Flush deck" in its more specific maritime-architecture sense denotes (for instance) the flush deck destroyers described above: the flush decks are broken by masts, guns, funnels, and other structures and impediments, and are far from being unbroken planes. "Flush deck" in this sense only signifies that the main deck runs the length of the ship and does not end before the stem (with a separate raised forecastle deck forward) or before the stern (with a separate raised or, as seen on many modern warships, lowered quarterdeck rearward).
Types
Flush deck aircraft carriers are those with no island superstructure, so that the top deck of the vessel consists of only an unbroken flight deck.
"Flush deckers" is a common nickname for a series of American destroyers built in large quantities during or shortly after World War I – the , , and classes – so called because they lacked the raised forecastle of preceding American destroyers, thus the main deck was a flush deck.
References
Shipbuilding
Naval architecture
Bangladeshi inventions
Indian inventions | Flush deck | [
"Engineering"
] | 370 | [
"Naval architecture",
"Shipbuilding",
"Marine engineering"
] |
10,965,310 | https://en.wikipedia.org/wiki/Prefoldin | Prefoldin (GimC) is a superfamily of proteins used in protein folding complexes. It is classified as a heterohexameric molecular chaperone in both archaea and eukarya, including humans. A prefoldin molecule works as a transfer protein in conjunction with a molecule of chaperonin to form a chaperone complex and correctly fold other nascent proteins. One of prefoldin's main uses in eukarya is the formation of molecules of actin for use in the eukaryotic cytoskeleton.
Purpose and uses
Prefoldin is one family of chaperone proteins found in the domains of eukarya and archaea. Prefoldin acts in combination with other molecules to promote protein folding in cells where there are many other competing pathways for folding. Chaperone proteins perform non-covalent assembly of other polypeptide-containing structures in vivo. They are implicated in the folding of most other proteins.
In archaea, prefoldins are believed to function in combination with group II chaperonins in de novo protein folding. In eukarya however, prefoldins have acquired a more specific function: they are used to establish correct tubular assembly for many tubular proteins, such as actin. Actin accounts for 5-10% of all protein found in eukaryotic cells, which therefore means that prefoldin is quite prevalent in the cells. Actin is made of two strings of beads wound round each other and is one of the three main parts of the cytoskeleton of eukaryotic cells. Prefoldin bonds specifically to cytosolic chaperonin protein. This complex of prefoldin and chaperonin then forms molecules of actin in the cytosol. The prefoldin acts as a transporter molecule that transports bound, unfolded target proteins to the chaperonin (C-CPN) molecule.
For example, the prefoldin that is used in the formation of actin also transfers α or β tubulin to a cytosolic chaperonin. The prefoldin, however, does not form a ternary complex with tubulin and chaperonin. Once the tubulins are in contact with the chaperonin, the prefoldin automatically lets go and leaves the active site, due to its high affinity for the chaperonin molecule. Once the prefoldin is in contact with the chaperonin protein, it loses its affinity for the unfolded target protein.
Prefoldin is triggered only to bind to nonnative target proteins in the cytosol so that it will only bind to unfolded proteins. Unlike many other molecular chaperones, prefoldin does not use chemical energy, in the form of adenosine triphosphate (ATP), to promote protein folding.
Discovery
Prefoldin was found by the laboratory of Nicholas J. Cowan from the Department of Biochemistry at the New York University Medical Center. It was discovered using chromatography. Unfolded labeled β-actin from bovine testes was put into solution. This solution contained an excess of cytosolic chaperonin (C-CPN), a eukaryotic chaperone protein necessary for actin folding. After gel filtration of the actin, the actin complex, consisting of actin and its bound proteins, began to form, and the molecular weight of the complex was observed. Gel electrophoresis was used to analyze the protein complex; it formed a single band that was excised and run on an SDS gel. It resolved into five bands, therefore proving that a heterooligomeric protein binds to unfolded actin.
An archaeal homolog of prefoldin that also functions as a molecular chaperone has been identified. Eukaryotic prefoldin likely evolved from archaea, as it is not present (or has been lost) from bacteria.
Structure
Prefoldin is a heterohexameric protein consisting of two α subunits and four β subunits. The β subunits contain 120 amino acid residues each, while the α subunits contain 140 amino acid residues each. Each subunit was found to have a width of 8.4 nm in the archaeon Methanococcus thermoautotrophicum. The height was calculated at 1.8–2.6 nm. The subunits are arranged by hydrophobic interactions, with two β barrels at the center and coiled-coil α helices protruding down from them as if it were a jellyfish.
The lower "tentacles" of the jellyfish shape is the interface between prefoldin and chaperonin.
References
Proteins
Cell biology | Prefoldin | [
"Chemistry",
"Biology"
] | 968 | [
"Biomolecules by chemical classification",
"Proteins",
"Cell biology",
"Molecular biology"
] |
10,967,556 | https://en.wikipedia.org/wiki/Tomotherapy | Tomotherapy is a type of radiation therapy treatment machine. In tomotherapy a thin radiation beam is modulated as it rotates around the patient, while they are moved through the bore of the machine. The name comes from the use of a strip-shaped beam, so that only one “slice” (Greek prefix “tomo-”) of the target is exposed at any one time by the radiation. The external appearance of the system and movement of the radiation source and patient can be considered analogous to a CT scanner (computed tomography), which uses lower doses of radiation for imaging. Like a conventional machine used for X-ray external beam radiotherapy (often referred to as a linear accelerator or linac, their main component), it [the tomotherapy machine] generates the radiation beam, but the external appearance of the machine, patient positioning, and treatment delivery differ. Conventional linacs do not work on a slice-by-slice basis but typically have a large area beam which can also be resized and modulated.
General principles
The treatment field's length (the width of the radiation slice) is adjustable using collimator jaws. In static-jaw delivery, the field length remains constant during a treatment. In dynamic-jaw delivery, the field length changes so that it begins and ends at its minimum setting.
Tomotherapy treatment times differ from those of conventional radiation therapy and can be as low as 6.5 minutes for a common prostate treatment, excluding extra time for imaging. Modern tomotherapy and conventional linac systems incorporate one or both of megavoltage X-ray or kilovoltage X-ray imaging systems, enabling image-guided radiation therapy (IGRT). In tomotherapy, images are acquired in a very similar manner to a CT scanner, thanks to their closely related design.
There are few head-to-head comparisons of tomotherapy and other IMRT techniques, however there is some evidence that a conventional linac using VMAT can provide faster treatment whereas tomotherapy is better able to spare surrounding healthy tissue while delivering a uniform dose.
Helical delivery
In helical tomotherapy, the linac rotates on its gantry at a constant speed while the beam is delivered; so that from the patient's perspective, the shape traced out by the linac is helical.
While helical tomotherapy can treat very long volumes without a need to abut fields in the longitudinal direction, it does display a distinct artifact due to "thread effect" when treating non-central tumors. Thread effect can be suppressed during planning through good pitch selection.
Fixed-angle delivery
Fixed-angle tomotherapy uses multiple tomotherapy beams, each delivered from a separate fixed gantry angle, in which only the couch moves during beam delivery. This is branded as TomoDirect, but has also been called topotherapy.
The technology enables fixed beam treatments by moving the patient through the machine bore while maintaining specified beam angles.
Clinical considerations
Lung cancer, head and neck tumors, breast cancer, prostate cancer, stereotactic radiosurgery (SRS) and stereotactic body radiotherapy (SBRT) are some examples of treatments commonly performed using tomotherapy.
In general, radiation therapy (or radiotherapy) has developed with a strong reliance on homogeneity of dose throughout the tumor. Tomotherapy embodies the sequential delivery of radiation to different parts of the tumor which raises two important issues. First, this method is known as "field matching" and brings with it the possibility of a less-than-perfect match between two adjacent fields with a resultant hot and/or cold spot within the tumor. The second issue is that if the patient or tumor moves during this sequential delivery, then again, a hot or cold spot will result. The first problem is reduced by use of a helical motion, as in spiral computed tomography.
Some research has suggested tomotherapy provides more conformal treatment plans and decreased acute toxicity.
Non-helical static beam techniques such as IMRT and TomoDirect are well suited to whole breast radiation therapy. These treatment modes avoid the low-dose integral splay and long treatment times associated with helical approaches by confining dose delivery to tangential angles.
This risk is accentuated in younger patients with early-stage breast cancer, where cure rates are high and life expectancy is substantial.
Static beam angle approaches aim to maximize the therapeutic ratio by ensuring that the tumor control probability (TCP) significantly outweighs the associated normal tissue complication probability (NTCP).
History
The tomotherapy technique was developed in the early 1990s at the University of Wisconsin–Madison by Professor Thomas Rockwell Mackie and Paul Reckwerdt. A small megavoltage x-ray source was mounted in a similar fashion to a CT x-ray source, and the geometry provided the opportunity to provide CT images of the body in the treatment setup position. Although original plans were to include kilovoltage CT imaging, current models use megavoltage energies. With this combination, the unit was one of the first devices capable of providing modern image-guided radiation therapy (IGRT).
The first implementation of tomotherapy was the Corvus system developed by Nomos Corporation, with the first patient treated in April 1994. This was the first commercial system for planning and delivering intensity modulated radiation therapy (IMRT). The original system, designed solely for use in the brain, incorporated a rigid skull-based fixation system to prevent patient motion between the delivery of each slice of radiation. But some users eschewed the fixation system and applied the technique to tumors in many different parts of the body.
Mobile tomotherapy
Due to their internal shielding and small footprint, TomoTherapy Hi-Art and TomoTherapy TomoHD treatment machines were the only high energy radiotherapy treatment machines used in relocatable radiotherapy treatment suites. Two different types of suites were available: TomoMobile developed by TomoTherapy Inc. which was a moveable truck; and Pioneer, developed by UK-based Oncology Systems Limited. The latter was developed to meet the requirements of UK and European transport law requirements and was a contained unit placed on a concrete pad, delivering radiotherapy treatments in less than five weeks.
See also
Radiation therapy
Radiosurgery
References
External links
Radiation therapy procedures
Medical physics | Tomotherapy | [
"Physics"
] | 1,338 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
10,969,277 | https://en.wikipedia.org/wiki/Rank%20product | The rank product is a biologically motivated rank test for the detection of differentially expressed genes in replicated microarray experiments.
It is a simple non-parametric statistical method based on ranks of fold changes. In addition to its use in expression profiling, it can be used to combine ranked lists in various application domains, including proteomics, metabolomics, statistical meta-analysis, and general feature selection.
Calculation of the rank product
Given $n$ genes and $k$ replicates, let $r_{g,i}$ be the rank of gene $g$ in the $i$-th replicate.
Compute the rank product via the geometric mean:
$$\mathit{RP}_g = \left(\prod_{i=1}^{k} r_{g,i}\right)^{1/k}.$$
Determination of significance levels
Simple permutation-based estimation is used to determine how likely a given RP value or better is observed in a random experiment.
generate p permutations of k rank lists of length n.
calculate the rank products of the n genes in the p permutations.
count how many times the rank products of the genes in the permutations are smaller or equal to the observed rank product. Set c to this value.
calculate the average expected value for the rank product by: $\bar{\mathit{RP}}_g = c/p$.
calculate the percentage of false positives as: $\mathrm{PFP}(g) = \bar{\mathit{RP}}_g / \mathrm{rank}(g)$, where $\mathrm{rank}(g)$ is the rank of gene $g$ in a list of all $n$ genes sorted by increasing $\bar{\mathit{RP}}_g$. (A minimal code sketch of this procedure is shown below.)
Exact probability distribution and accurate approximation
Permutation re-sampling requires a computationally demanding number of permutations to get reliable estimates of the p-values for the most differentially expressed genes, if n is large. Eisinga, Breitling and Heskes (2013) provide the exact probability mass distribution of the rank product statistic. Calculation of the exact p-values offers a substantial improvement over permutation approximation, most significantly for that part of the distribution rank product analysis is most interested in, i.e., the thin right tail. However, exact statistical significance of large rank products may take unacceptably long amounts of time to compute. Heskes, Eisinga and Breitling (2014) provide a method to determine accurate approximate p-values of the rank product statistic in a computationally fast manner.
See also
Ranking
Schulze method
Comparison of electoral systems
Arrow's impossibility theorem
References
Breitling, R., Armengaud, P., Amtmann, A., and Herzyk, P. (2004) Rank Products: A simple, yet powerful, new method to detect differentially regulated genes in replicated microarray experiments, FEBS Letters, 573:83–92
Gene expression
Nonparametric statistics
Microarrays | Rank product | [
"Chemistry",
"Materials_science",
"Biology"
] | 518 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Gene expression",
"Bioinformatics",
"Molecular biology techniques",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
10,971,432 | https://en.wikipedia.org/wiki/Methylglyoxal%20pathway | The methylglyoxal pathway is an offshoot of glycolysis found in some prokaryotes, which converts glucose into methylglyoxal and then into pyruvate. However unlike glycolysis the methylglyoxal pathway does not produce adenosine triphosphate, ATP. The pathway is named after the substrate methylglyoxal which has three carbons and two carbonyl groups located on the 1st carbon and one on the 2nd carbon. Methylglyoxal is, however, a reactive aldehyde that is very toxic to cells, it can inhibit growth in E. coli at milimolar concentrations. The excessive intake of glucose by a cell is the most important process for the activation of the methylglyoxal pathway.
The Methylglyoxal pathway
The methylglyoxal pathway is activated by increased cellular uptake of carbon-containing molecules such as glucose, glucose-6-phosphate, lactate, or glycerol. Methylglyoxal is formed from dihydroxyacetone phosphate (DHAP) by the enzyme methylglyoxal synthase, giving off a phosphate group.
Methylglyoxal is then converted into one of two products, either D-lactate or L-lactate. Methylglyoxal reductase and aldehyde dehydrogenase convert methylglyoxal into lactaldehyde and, eventually, L-lactate. If methylglyoxal enters the glyoxalase pathway, it is converted into lactoylglutathione and eventually D-lactate. Both D-lactate and L-lactate are then converted into pyruvate. The pyruvate that is created most often goes on to enter the Krebs cycle (Weber 711–13).
Enzymes and regulation
The potentially hazardous effects of methylglyoxal require regulation of the reactions with this substrate. Synthesis of methylglyoxal is regulated by levels of DHAP and phosphate concentrations. High concentrations of DHAP encourage methylglyoxal synthase to produce methylglyoxal, while high phosphate concentrations inhibit the enzyme, and therefore the production of more methylglyoxal. The enzyme triose phosphate isomerase affects the levels of DHAP by converting glyceraldehyde 3-phosphate (GAP) into DHAP. The usual pathway converting GAP to pyruvate starts with the enzyme glyceraldehyde 3-phosphate dehydrogenase (Weber 711–13). Low phosphate levels inhibit GAP dehydrogenase; GAP is instead converted into DHAP by triosephosphate isomerase. Again, increased levels of DHAP activate methylglyoxal synthase and methylglyoxal production (Weber 711–13).
The oscillation of methylglyoxal concentration under feast conditions
Jan Weber, Anke Kayser, and Ursula Rinas performed an experiment to test what happened to the methylglyoxal pathway when E. coli was in the presence of a constantly high concentration of glucose. The concentration of methylglyoxal increased until it reached 20 μmol. Methylglyoxal concentration then began to decrease once it reached this level. The decrease in the concentration of methylglyoxal was connected to the drop in respiratory activity. When respiration activity increased, the concentration of methylglyoxal increased again, until it reached the 20 μmol concentration (Weber 714–15).
Why does the methylglyoxal pathway exist?
This pathway does not produce any ATP and does not replace glycolysis; it runs alongside glycolysis and is only initiated when the concentration of sugar phosphates rises. One proposed purpose of the methylglyoxal pathway is to relieve the stress of an elevated sugar phosphate concentration. Also, when methylglyoxal is formed from DHAP, an inorganic phosphate is given off, which can be used to replenish a low concentration of needed inorganic phosphate. The methylglyoxal pathway is a rather dangerous tactic, both because less energy is produced and because a toxic compound, methylglyoxal, is formed (Weber 715).
References
Weber, Jan, Anke Kayser, and Ursula Rinas. "Metabolic Flux Analysis of Escherichia coli." Microbiology 151: 707–716. 6 Dec. 2004. Accessed 10 Apr. 2007 <http://mic.sgmjournals.org/cgi/reprint/151/3/707>.
Saadat, D., Harrison, D.H.T. "Methylglyoxal Synthase From Escherichia Coli." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 25 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1B93>.
"Methylglyoxal Synthase From Escherichia Coli." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 25 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1B93>.
Yun, M., C.-G. Park, J.-Y. Kim, and H.-W. Park. "Structural Analysis of Glyceraldehyde 3-Phosphate Dehydrogenase from Escherichia coli: Direct Evidence for Substrate Binding and Cofactor-Induced Conformational Changes." RCSB Protein Data Base. 24 Apr. 2007. RCSB Protein Data Base. 30 Apr. 2007 <http://www.pdb.org/pdb/explore.do?structureId=1DC4>.
Cellular respiration | Methylglyoxal pathway | [
"Chemistry",
"Biology"
] | 1,187 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
341,703 | https://en.wikipedia.org/wiki/Bertram%20Brockhouse | Bertram Neville Brockhouse, (July 15, 1918 – October 13, 2003) was a Canadian physicist. He was awarded the Nobel Prize in Physics (1994, shared with Clifford Shull) "for pioneering contributions to the development of neutron scattering techniques for studies of condensed matter", in particular "for the development of neutron spectroscopy".
Education and early life
Brockhouse was born in Lethbridge, Alberta, to a family of English descent. He was a graduate of the University of British Columbia (BA, 1947) and the University of Toronto (MA, 1948; Ph.D, 1950).
Career and research
From 1950 to 1962, Brockhouse carried out research at Atomic Energy of Canada's Chalk River Nuclear Laboratory. Here he was joined by P. K. Iyengar, who is regarded as the father of India's nuclear program.
In 1962, he became a professor at McMaster University in Canada, where he remained until his retirement in 1984.
Brockhouse died on October 13, 2003, in Hamilton, Ontario, aged 85.
Awards and honours
Brockhouse was elected a Fellow of the Royal Society (FRS) in 1965. In 1982, Brockhouse was made an Officer of the Order of Canada and was promoted to Companion in 1995.
Brockhouse shared the 1994 Nobel Prize in Physics with American Clifford Shull of MIT for developing neutron scattering techniques for studying condensed matter.
In October 2005, as part of the 75th anniversary of McMaster University's establishment in Hamilton, Ontario, a street on the university campus (University Avenue) was renamed to Brockhouse Way in honour of Brockhouse. The town of Deep River, Ontario, has also named a street in his honour.
The Nobel Prize that Bertram Brockhouse won (shared with Clifford Shull) in 1994 was awarded after the longest-ever waiting time (counting from the time when the award-winning research had been carried out).
In 1999 the Division of Condensed Matter and Materials Physics (DCMMP) and the Canadian Association of Physicists (CAP) created a medal in honour of Brockhouse. The Brockhouse Medal is awarded annually to recognize and encourage outstanding experimental or theoretical contributions to condensed matter and materials physics. An eligible candidate must have performed their research primarily with a Canadian institution.
References
External links
Bertram Brockhouse, the Triple-axis Spectrometer, and Neutron Spectroscopy , from the Office of Scientific and Technical Information, United States Department of Energy
including the Nobel Lecture, December 8, 1994 Slow Neutron Spectroscopy and the Grand Atlas of the Physical World
1918 births
2003 deaths
Scientists from Lethbridge
Spectroscopists
Canadian nuclear physicists
University of Toronto alumni
University of British Columbia Faculty of Science alumni
Academic staff of McMaster University
Nobel laureates in Physics
Canadian Nobel laureates
Fellows of the Royal Society of Canada
Fellows of the American Physical Society
Companions of the Order of Canada
Canadian fellows of the Royal Society
Oliver E. Buckley Condensed Matter Prize winners
20th-century Canadian scientists
Members of the Royal Swedish Academy of Sciences | Bertram Brockhouse | [
"Physics",
"Chemistry"
] | 623 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
341,818 | https://en.wikipedia.org/wiki/Lane%E2%80%93Emden%20equation | In astrophysics, the Lane–Emden equation is a dimensionless form of Poisson's equation for the gravitational potential of a Newtonian self-gravitating, spherically symmetric, polytropic fluid. It is named after astrophysicists Jonathan Homer Lane and Robert Emden. The equation reads
where is a dimensionless radius and is related to the density, and thus the pressure, by for central density . The index is the polytropic index that appears in the polytropic equation of state,
where and are the pressure and density, respectively, and is a constant of proportionality. The standard boundary conditions are and . Solutions thus describe the run of pressure and density with radius and are known as polytropes of index . If an isothermal fluid (polytropic index tends to infinity) is used instead of a polytropic fluid, one obtains the Emden–Chandrasekhar equation.
Applications
Physically, hydrostatic equilibrium connects the gradient of the potential, the density, and the gradient of the pressure, whereas Poisson's equation connects the potential with the density. Thus, if we have a further equation that dictates how the pressure and density vary with respect to one another, we can reach a solution. The particular choice of a polytropic gas as given above makes the mathematical statement of the problem particularly succinct and leads to the Lane–Emden equation. The equation is a useful approximation for self-gravitating spheres of plasma such as stars, but typically it is a rather limiting assumption.
Derivation
From hydrostatic equilibrium
Consider a self-gravitating, spherically symmetric fluid in hydrostatic equilibrium. Mass is conserved and thus described by the continuity equation
$$\frac{dm}{dr} = 4\pi r^2\rho,$$
where $\rho$ is a function of $r$. The equation of hydrostatic equilibrium is
$$\frac{1}{\rho}\frac{dP}{dr} = -\frac{Gm}{r^2},$$
where $m$ is also a function of $r$. Differentiating again gives
$$\frac{d}{dr}\left(\frac{1}{\rho}\frac{dP}{dr}\right) = \frac{2Gm}{r^3} - \frac{G}{r^2}\frac{dm}{dr} = -\frac{2}{r}\,\frac{1}{\rho}\frac{dP}{dr} - 4\pi G\rho,$$
where the continuity equation has been used to replace the mass gradient. Multiplying both sides by $r^2$ and collecting the derivatives of $P$ on the left, one can write
$$r^2\frac{d}{dr}\left(\frac{1}{\rho}\frac{dP}{dr}\right) + \frac{2r}{\rho}\frac{dP}{dr} = \frac{d}{dr}\left(\frac{r^2}{\rho}\frac{dP}{dr}\right) = -4\pi G r^2\rho.$$
Dividing both sides by $r^2$ yields, in some sense, a dimensional form of the desired equation. If, in addition, we substitute for the polytropic equation of state with $P = K\rho_c^{1+\frac{1}{n}}\theta^{n+1}$ and $\rho = \rho_c\theta^n$, we have
$$(n+1)K\rho_c^{\frac{1}{n}}\,\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d\theta}{dr}\right) = -4\pi G\rho_c\theta^n.$$
Gathering the constants and substituting $r = \alpha\xi$, where
$$\alpha^2 = \frac{(n+1)K\rho_c^{\frac{1}{n}-1}}{4\pi G},$$
we have the Lane–Emden equation,
$$\frac{1}{\xi^2}\frac{d}{d\xi}\left(\xi^2\frac{d\theta}{d\xi}\right) + \theta^n = 0.$$
From Poisson's equation
Equivalently, one can start with Poisson's equation,
$$\nabla^2\phi = \frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d\phi}{dr}\right) = 4\pi G\rho.$$
One can replace the gradient of the potential using the hydrostatic equilibrium, via
$$\frac{d\phi}{dr} = -\frac{1}{\rho}\frac{dP}{dr},$$
which again yields the dimensional form of the Lane–Emden equation.
Exact solutions
For a given value of the polytropic index $n$, denote the solution to the Lane–Emden equation as $\theta_n(\xi)$. In general, the Lane–Emden equation must be solved numerically to find $\theta_n$. There are exact, analytic solutions for certain values of $n$, in particular: $n = 0, 1, 5$. For $n$ between 0 and 5, the solutions are continuous and finite in extent, with the radius of the star given by $R = \alpha\xi_1$, where $\theta_n(\xi_1) = 0$.
For a given solution $\theta_n$, the density profile is given by
$$\rho = \rho_c\theta_n^n.$$
The total mass of the model star can be found by integrating the density over radius, from 0 to $R$.
The pressure can be found using the polytropic equation of state, $P = K\rho^{1+\frac{1}{n}}$, i.e.
$$P = K\rho_c^{1+\frac{1}{n}}\theta_n^{n+1}.$$
Finally, if the gas is ideal, the equation of state is $P = k_{\rm B}\rho T/\mu$, where $k_{\rm B}$ is the Boltzmann constant and $\mu$ the mean molecular weight. The temperature profile is then given by
$$T = \frac{K\mu}{k_{\rm B}}\,\rho_c^{1/n}\,\theta_n.$$
In spherically symmetric cases, the Lane–Emden equation is integrable for only three values of the polytropic index .
For n = 0
If $n = 0$, the equation becomes
$$\frac{1}{\xi^2}\frac{d}{d\xi}\left(\xi^2\frac{d\theta}{d\xi}\right) + 1 = 0.$$
Re-arranging and integrating once gives
$$\xi^2\frac{d\theta}{d\xi} = C_1 - \frac{1}{3}\xi^3.$$
Dividing both sides by $\xi^2$ and integrating again gives
$$\theta(\xi) = C_0 - \frac{C_1}{\xi} - \frac{1}{6}\xi^2.$$
The boundary conditions $\theta(0) = 1$ and $\theta'(0) = 0$ imply that the constants of integration are $C_0 = 1$ and $C_1 = 0$. Therefore,
$$\theta(\xi) = 1 - \frac{1}{6}\xi^2.$$
For n = 1
When $n = 1$, the equation can be expanded in the form
$$\frac{d^2\theta}{d\xi^2} + \frac{2}{\xi}\frac{d\theta}{d\xi} + \theta = 0.$$
One assumes a power series solution:
$$\theta(\xi) = \sum_{n=0}^{\infty} a_n\xi^n.$$
This leads to a recursive relationship for the expansion coefficients:
$$a_{n+2} = -\frac{a_n}{(n+2)(n+3)}.$$
This relation can be solved leading to the general solution:
$$\theta(\xi) = a_0\,\frac{\sin\xi}{\xi} + a_1\,\frac{\cos\xi}{\xi}.$$
The boundary condition for a physical polytrope demands that $\theta(\xi) \to 1$ as $\xi \to 0$.
This requires that $a_0 = 1$ and $a_1 = 0$, thus leading to the solution:
$$\theta(\xi) = \frac{\sin\xi}{\xi}.$$
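As a quick consistency check (not part of the original derivation), this closed form can be verified symbolically; the following sketch uses SymPy and simply substitutes sin(ξ)/ξ into the n = 1 equation as written above:

```python
import sympy as sp

xi = sp.symbols('xi', positive=True)
theta = sp.sin(xi) / xi                      # proposed n = 1 solution

# Residual of (1/xi^2) d/dxi (xi^2 dtheta/dxi) + theta^n with n = 1
residual = sp.diff(xi**2 * sp.diff(theta, xi), xi) / xi**2 + theta
print(sp.simplify(residual))                 # prints 0: sin(xi)/xi solves the equation
```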
For n = 2
This exact solution was found by accident when searching for zero values of the related TOV Equation.
We consider a series expansion around $\xi = 0$,
$$\theta(\xi) = \sum_{n=0}^{\infty} a_n\xi^n,$$
with initial values $a_0 = \theta(0) = 1$ and $a_1 = \theta'(0) = 0$.
Plugging this into the Lane-Emden equation, we can show that all odd coefficients of the series vanish, $a_1 = a_3 = a_5 = \dots = 0$.
Furthermore, we obtain a recursive relationship between the even coefficients of the series.
It was proven that this series converges at least for but numerical results showed good agreement for much larger values.
For n = 5
We start from with the Lane–Emden equation:
Rewriting for produces:
Differentiating with respect to leads to:
Reduced, we come by:
Therefore, the Lane–Emden equation has the solution
$$\theta(\xi) = \frac{1}{\sqrt{1 + \xi^2/3}}$$
when $n = 5$. This solution is finite in mass but infinite in radial extent, and therefore the complete polytrope does not represent a physical solution. Chandrasekhar believed for a long time that finding other solutions for $n = 5$ "is complicated and involves elliptic integrals".
Srivastava's solution
In 1962, Sambhunath Srivastava found an explicit solution when $n = 5$. His solution is given by
and from this solution, a family of solutions can be obtained using homology transformation. Since this solution does not satisfy the conditions at the origin (in fact, it is oscillatory with amplitudes growing indefinitely as the origin is approached), this solution can be used in composite stellar models.
Analytic solutions
In applications, the main role is played by analytic solutions that are expressible as a convergent power series expanded around some initial point. Typically the expansion point is $\xi = 0$, which is also a singular point (fixed singularity) of the equation, and there is provided some initial data at the centre of the star. One can prove that the equation has the convergent power series/analytic solution around the origin of the form
$$\theta(\xi) = 1 - \frac{\xi^2}{6} + \frac{n}{120}\xi^4 - \dots.$$
The radius of convergence of this series is limited due to existence of two singularities on the imaginary axis in the complex plane. These singularities are located symmetrically with respect to the origin. Their position change when we change equation parameters and the initial condition , and therefore, they are called movable singularities due to classification of the singularities of non-linear ordinary differential equations in the complex plane by Paul Painlevé. A similar structure of singularities appears in other non-linear equations that result from the reduction of the Laplace operator in spherical symmetry, e.g., Isothermal Sphere equation.
Analytic solutions can be extended along the real line by analytic continuation procedure resulting in the full profile of the star or molecular cloud cores. Two analytic solutions with the overlapping circles of convergence can also be matched on the overlap to the larger domain solution, which is a commonly used method of construction of profiles of required properties.
The series solution is also used in the numerical integration of the equation. It is used to shift the initial data for analytic solution slightly away from the origin since at the origin the numerical methods fail due to the singularity of the equation.
Numerical solutions
In general, solutions are found by numerical integration. Many standard methods require that the problem is formulated as a system of first-order ordinary differential equations. For example,
$$\frac{d\theta}{d\xi} = -\frac{\phi}{\xi^2}, \qquad \frac{d\phi}{d\xi} = \theta^n\xi^2.$$
Here, $\phi$ is interpreted as the dimensionless mass, defined by $m(r) = 4\pi\alpha^3\rho_c\,\phi(\xi)$. The relevant initial conditions are $\phi(0) = 0$ and $\theta(0) = 1$. The first equation represents hydrostatic equilibrium and the second represents mass conservation.
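For illustration, a minimal numerical sketch of this system follows (Python with SciPy; the tolerances, the small starting offset from the origin, and the n = 3 example are choices made here rather than part of the article):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(n, xi_max=20.0):
    """Integrate the Lane-Emden equation of index n as the first-order system
    dtheta/dxi = -phi/xi**2,  dphi/dxi = theta**n * xi**2,
    starting slightly off the origin to avoid the singularity at xi = 0."""
    xi0 = 1e-6
    # Series expansion near the centre: theta ~ 1 - xi^2/6, phi ~ xi^3/3.
    y0 = [1.0 - xi0**2 / 6.0, xi0**3 / 3.0]

    def rhs(xi, y):
        theta, phi = y
        theta = max(theta, 0.0)        # guard against a negative base for fractional n
        return [-phi / xi**2, theta**n * xi**2]

    # Stop the integration when theta reaches zero (the surface of the polytrope).
    surface = lambda xi, y: y[0]
    surface.terminal = True
    return solve_ivp(rhs, (xi0, xi_max), y0, events=surface, rtol=1e-8, atol=1e-10)

sol = lane_emden(n=3)
print("xi_1 for n = 3:", sol.t_events[0][0])   # ~6.897, the standard tabulated value
```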
Homologous variables
Homology-invariant equation
It is known that if $\theta(\xi)$ is a solution of the Lane–Emden equation, then so is $A^{2/(n-1)}\,\theta(A\xi)$ for any constant $A$. Solutions that are related in this way are called homologous; the process that transforms them is homology. If one chooses variables that are invariant to homology, then we can reduce the order of the Lane–Emden equation by one.
A variety of such variables exist. A suitable choice is
and
We can differentiate the logarithms of these variables with respect to , which gives
and
Finally, we can divide these two equations to eliminate the dependence on , which leaves
This is now a single first-order equation.
Topology of the homology-invariant equation
The homology-invariant equation can be regarded as the autonomous pair of equations
and
The behaviour of solutions to these equations can be determined by linear stability analysis. The critical points of the equation (where ) and the eigenvalues and eigenvectors of the Jacobian matrix are tabulated below.
See also
Emden–Chandrasekhar equation
Chandrasekhar's white dwarf equation
References
Further reading
External links
Astrophysics
Ordinary differential equations | Lane–Emden equation | [
"Physics",
"Astronomy"
] | 1,708 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
342,113 | https://en.wikipedia.org/wiki/Archimedes%20number | In viscous fluid dynamics, the Archimedes number (Ar), is a dimensionless number used to determine the motion of fluids due to density differences, named after the ancient Greek scientist and mathematician Archimedes.
It is the ratio of gravitational forces to viscous forces and has the form:
$$\mathrm{Ar} = \frac{g L^3 \rho_\ell(\rho - \rho_\ell)}{\mu^2} = \frac{g L^3}{\nu^2}\,\frac{\rho - \rho_\ell}{\rho_\ell},$$
where:
$g$ is the local external field (for example gravitational acceleration), m/s²,
$L$ is the characteristic length of the body, m,
$\frac{\rho - \rho_\ell}{\rho_\ell}$ is the submerged specific gravity,
$\rho_\ell$ is the density of the fluid, kg/m³,
$\rho$ is the density of the body, kg/m³,
$\nu = \mu/\rho_\ell$ is the kinematic viscosity, m²/s,
$\mu$ is the dynamic viscosity, Pa·s,
Uses
The Archimedes number is generally used in design of tubular chemical process reactors. The following are non-exhaustive examples of using the Archimedes number in reactor design.
Packed-bed fluidization design
The Archimedes number is applied often in the engineering of packed beds, which are very common in the chemical processing industry. A packed bed reactor, which is similar to the ideal plug flow reactor model, involves packing a tubular reactor with a solid catalyst, then passing incompressible or compressible fluids through the solid bed. When the solid particles are small, they may be "fluidized", so that they act as if they were a fluid. When fluidizing a packed bed, the pressure of the working fluid is increased until the pressure drop between the bottom of the bed (where fluid enters) and the top of the bed (where fluid leaves) is equal to the weight of the packed solids. At this point, the velocity of the fluid is just not enough to achieve fluidization, and extra pressure is required to overcome the friction of particles with each other and the wall of the reactor, allowing fluidization to occur. This gives a minimum fluidization velocity, , that may be estimated by:
where:
is the diameter of sphere with the same volume as the solid particle and can often be estimated as , where is the diameter of the particle.
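As an illustration (not taken from the article), the sketch below computes Ar for a hypothetical sand particle fluidized by water and estimates the minimum fluidization velocity using the widely used Wen-Yu correlation, Re_mf = sqrt(33.7² + 0.0408·Ar) − 33.7; the correlation choice and all numerical values are assumptions of this example only:

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
d_p = 500e-6      # particle diameter, m
rho_p = 2650.0    # particle density, kg/m^3 (quartz sand)
rho_f = 1000.0    # fluid density, kg/m^3 (water)
mu = 1.0e-3       # dynamic viscosity of the fluid, Pa.s (water)

# Archimedes number based on the particle diameter
Ar = g * d_p**3 * rho_f * (rho_p - rho_f) / mu**2

# Wen-Yu correlation for the Reynolds number at minimum fluidization
Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7

# Minimum fluidization velocity from the Reynolds number definition
u_mf = Re_mf * mu / (rho_f * d_p)
print(f"Ar = {Ar:.3g}, u_mf = {u_mf * 1000:.2f} mm/s")
```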
Bubble column design
Another use is in the estimation of gas holdup in a bubble column. In a bubble column, the gas holdup (fraction of a bubble column that is gas at a given time) can be estimated by:
where:
is the gas holdup fraction
is the Eötvos number
is the Froude number
is the diameter of holes in the column's spargers (holed discs that emit bubbles)
is the column diameter
Parameters to are found empirically
Spouted-bed minimum spouting velocity design
A spouted bed is used in drying and coating. It involves spraying a liquid into a bed packed with the solid to be coated. A fluidizing gas fed from the bottom of the bed causes a spout, which causes the solids to circle linearly around the liquid. Work has been undertaken to model the minimum velocity of gas required for spouting in a spouted bed, including the use of artificial neural networks. Testing with such models found that Archimedes number is a parameter that has a very large effect on the minimum spouting velocity.
See also
Viscous fluid dynamics
Convection
Convection (heat transfer)
Dimensionless quantity
Galilei number
Grashof number
Reynolds number
Froude number
Eötvös number
Sherwood number
References
Dimensionless numbers of fluid mechanics
Fluid dynamics | Archimedes number | [
"Chemistry",
"Engineering"
] | 668 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
342,127 | https://en.wikipedia.org/wiki/Anti-gravity | Anti-gravity (also known as non-gravitational field) is the phenomenon of creating a place or object that is free from the force of gravity. It does not refer to either the lack of weight under gravity experienced in free fall or orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift. Anti-gravity is a recurring concept in science fiction.
"Anti-gravity" is often used to refer to devices that look as if they reverse gravity even though they operate through other means, such as lifters, which fly in the air by moving air with electromagnetic fields.
Historical attempts at understanding gravity
The possibility of creating anti-gravity depends upon a complete understanding and description of gravity and its interactions with other physical theories, such as general relativity and quantum mechanics; however, no quantum theory of gravity has yet been found.
During the summer of 1666, Isaac Newton observed an apple falling from the tree in his garden, thus realizing the principle of universal gravitation. Albert Einstein in 1915 considered the physical interaction between matter and space, where gravity occurs as a consequence of matter causing a geometric deformation of spacetime which is otherwise flat. Einstein, both independently and with Walther Mayer, attempted to unify his theory of gravity with electromagnetism using the work of Theodor Kaluza and James Clerk Maxwell to link gravity and quantum field theory.
Theoretical quantum physicists have postulated the existence of a quantum gravity particle, the graviton. Various theoretical explanations of quantum gravity have been created, including superstring theory, loop quantum gravity, E8 theory and asymptotic safety theory amongst many others.
Probable solutions
In Newton's law of universal gravitation, gravity was an external force transmitted by unknown means. In the 20th century, Newton's model was replaced by general relativity where gravity is not a force but the result of the geometry of spacetime. Under general relativity, anti-gravity is impossible except under contrived circumstances.
Gravity shields
In 1948 businessman Roger Babson (founder of Babson College) formed the Gravity Research Foundation to study ways to reduce the effects of gravity. Their efforts were initially somewhat "crankish", but they held occasional conferences that drew such people as Clarence Birdseye, known for his frozen-food products, and helicopter pioneer Igor Sikorsky. Over time the Foundation turned its attention away from trying to control gravity, to simply better understanding it. The Foundation nearly disappeared after Babson's death in 1967. However, it continues to run an essay award, offering prizes of up to $4,000. As of 2017, it is still administered out of Wellesley, Massachusetts, by George Rideout Jr., son of the foundation's original director. Winners include California astrophysicist George F. Smoot (1993), who later won the 2006 Nobel Prize in Physics, and Gerard 't Hooft (2015) who previously won the 1999 Nobel Prize in Physics.
General relativity research in the 1950s
General relativity was introduced in the 1910s, but development of the theory was greatly slowed by a lack of suitable mathematical tools. It appeared that anti-gravity was outlawed under general relativity.
It is claimed the US Air Force also ran a study effort throughout the 1950s and into the 1960s. Former Lieutenant Colonel Ansel Talbert wrote two series of newspaper articles claiming that most of the major aviation firms had started gravity control propulsion research in the 1950s. However, there is no outside confirmation of these stories, and since they take place in the midst of the policy-by-press-release era, it is not clear how much weight these stories should be given.
It is known that there were serious efforts underway at the Glenn L. Martin Company, who formed the Research Institute for Advanced Study. Major newspapers announced the contract that had been made between theoretical physicist Burkhard Heim and the Glenn L. Martin Company. Another effort in the private sector to master understanding of gravitation was the creation of the Institute for Field Physics, University of North Carolina at Chapel Hill in 1956, by Gravity Research Foundation trustee Agnew H. Bahnson.
Military support for anti-gravity projects was terminated by the Mansfield Amendment of 1973, which restricted Department of Defense spending to only the areas of scientific research with explicit military applications. The Mansfield Amendment was passed specifically to end long-running projects that had no results.
Under general relativity, gravity is the result of following spatial geometry (change in the normal shape of space) caused by local mass-energy. This theory holds that it is the altered shape of space, deformed by massive objects, that causes gravity, which is actually a property of deformed space rather than being a true force. Although the equations cannot normally produce a "negative geometry", it is possible to do so by using "negative mass". The same equations do not, of themselves, rule out the existence of negative mass.
Both general relativity and Newtonian gravity appear to predict that negative mass would produce a repulsive gravitational field. In particular, Sir Hermann Bondi proposed in 1957 that negative gravitational mass, combined with negative inertial mass, would comply with the strong equivalence principle of general relativity theory and the Newtonian laws of conservation of linear momentum and energy. Bondi's proof yielded singularity-free solutions for the relativity equations. In July 1988, Robert L. Forward presented a paper at the AIAA/ASME/SAE/ASEE 24th Joint Propulsion Conference that proposed a Bondi negative gravitational mass propulsion system.
Bondi pointed out that a negative mass will fall toward (and not away from) "normal" matter, since although the gravitational force is repulsive, the negative mass (according to Newton's law, F=ma) responds by accelerating in the opposite of the direction of the force. Normal mass, on the other hand, will fall away from the negative matter. He noted that two identical masses, one positive and one negative, placed near each other will therefore self-accelerate in the direction of the line between them, with the negative mass chasing after the positive mass. Notice that because the negative mass acquires negative kinetic energy, the total energy of the accelerating masses remains at zero. Forward pointed out that the self-acceleration effect is due to the negative inertial mass, and could be induced even without the gravitational forces between the particles.
The Standard Model of particle physics, which describes all currently known forms of matter, does not include negative mass. Although cosmological dark matter may consist of particles outside the Standard Model whose nature is unknown, their mass is ostensibly known – since they were postulated from their gravitational effects on surrounding objects, which implies their mass is positive. The proposed cosmological dark energy, on the other hand, is more complicated, since according to general relativity the effects of both its energy density and its negative pressure contribute to its gravitational effect.
Unique force
Under general relativity any form of energy couples with spacetime to create the geometries that cause gravity. A longstanding question was whether or not these same equations applied to antimatter. The issue was considered solved in 1960 with the development of CPT symmetry, which demonstrated that antimatter follows the same laws of physics as "normal" matter, and therefore has positive energy content and also causes (and reacts to) gravity like normal matter (see gravitational interaction of antimatter).
For much of the last quarter of the 20th century, the physics community was involved in attempts to produce a unified field theory, a single physical theory that explains the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces. Scientists have made progress in unifying the three quantum forces, but gravity has remained "the problem" in every attempt. This has not stopped any number of such attempts from being made, however.
Generally these attempts tried to "quantize gravity" by positing a particle, the graviton, that carried gravity in the same way that photons (light) carry electromagnetism. Simple attempts along this direction all failed, however, leading to more complex examples that attempted to account for these problems. Two of these, supersymmetry and the relativity related supergravity, both required the existence of an extremely weak "fifth force" carried by a graviphoton, which coupled together several "loose ends" in quantum field theory, in an organized manner. As a side effect, both theories also all but required that antimatter be affected by this fifth force in a way similar to anti-gravity, dictating repulsion away from mass. Several experiments were carried out in the 1990s to measure this effect, but none yielded positive results.
In 2013 CERN looked for an antigravity effect in an experiment designed to study the energy levels within antihydrogen. The antigravity measurement was just an "interesting sideshow" and was inconclusive.
Breakthrough Propulsion Physics Program
During the close of the twentieth century NASA provided funding for the Breakthrough Propulsion Physics Program (BPP) from 1996 through 2002. This program studied a number of "far out" designs for space propulsion that were not receiving funding through normal university or commercial channels. Anti-gravity-like concepts were investigated under the name "diametric drive". The work of the BPP program continues in the independent, non-NASA affiliated Tau Zero Foundation.
Empirical claims and commercial efforts
There have been a number of attempts to build anti-gravity devices, and a small number of reports of anti-gravity-like effects in the scientific literature. None of the examples that follow are accepted as reproducible examples of anti-gravity.
Gyroscopic devices
Gyroscopes produce a force when twisted that operates "out of plane" and can appear to lift themselves against gravity. Although this force is well understood to be illusory, even under Newtonian models, it has nevertheless generated numerous claims of anti-gravity devices and any number of patented devices. None of these devices has ever been demonstrated to work under controlled conditions, and they have often become the subject of conspiracy theories as a result.
Another "rotating device" example is shown in a series of patents granted to Henry Wallace between 1968 and 1974. His devices consist of rapidly spinning disks of brass, a material made up largely of elements with a total half-integer nuclear spin. He claimed that by rapidly rotating a disk of such material, the nuclear spin became aligned, and as a result created a "gravitomagnetic" field in a fashion similar to the magnetic field created by the Barnett effect. No independent testing or public demonstration of these devices is known.
In 1989, it was reported that a weight decreases along the axis of a right-spinning gyroscope. A test of this claim a year later yielded null results. A recommendation was made to conduct further tests at a 1999 AIP conference.
Thomas Townsend Brown's gravitator
In 1921, while still in high school, Thomas Townsend Brown found that a high-voltage Coolidge tube seemed to change mass depending on its orientation on a balance scale. Through the 1920s Brown developed this into devices that combined high voltages with materials with high dielectric constants (essentially large capacitors); he called such a device a "gravitator". Brown made the claim to observers and in the media that his experiments were showing anti-gravity effects. Brown would continue his work and produced a series of high-voltage devices in the following years in attempts to sell his ideas to aircraft companies and the military. He coined the names Biefeld–Brown effect and electrogravitics in conjunction with his devices. Brown tested his asymmetrical capacitor devices in a vacuum, supposedly showing it was not a more down-to-earth electrohydrodynamic effect generated by high voltage ion flow in air.
Electrogravitics is a popular topic in ufology, anti-gravity and free-energy circles, among government conspiracy theorists, and on related websites and in books and publications, with claims that the technology became highly classified in the early 1960s and that it is used to power UFOs and the B-2 bomber. There is also research and videos on the internet purported to show lifter-style capacitor devices working in a vacuum, therefore not receiving propulsion from ion drift or ion wind being generated in air.
Follow-up studies on Brown's work and other claims have been conducted by R. L. Talley in a 1990 US Air Force study, NASA scientist Jonathan Campbell in a 2003 experiment, and Martin Tajmar in a 2004 paper. They have found that no thrust could be observed in a vacuum and that Brown's and other ion lifter devices produce thrust along their axis regardless of the direction of gravity consistent with electrohydrodynamic effects.
Gravitoelectric coupling
In 1992, the Russian researcher Eugene Podkletnov claimed to have discovered, whilst experimenting with superconductors, that a fast rotating superconductor reduces the gravitational effect. Many studies have attempted to reproduce Podkletnov's experiment, always to negative results.
Douglas Torr, of the University of Alabama in Huntsville proposed how a time-dependent magnetic field could cause the spins of the lattice ions in a superconductor to generate detectable gravitomagnetic and gravitoelectric fields in a series of papers published between 1991 and 1993. In 1999, a Miss Li appeared in Popular Mechanics, claiming to have constructed a working prototype to generate what she described as "AC Gravity." No further evidence of this prototype has been offered.
Douglas Torr and Timir Datta were involved in the development of a "gravity generator" at the University of South Carolina. According to a leaked document from the Office of Technology Transfer at the University of South Carolina and confirmed to Wired reporter Charles Platt in 1998, the device would create a "force beam" in any desired direction and the university planned to patent and license this device. No further information about this university research project or the "Gravity Generator" device was ever made public.
Göde Award
The Institute for Gravity Research of the Göde Scientific Foundation has tried to reproduce many of the different experiments which claim any "anti-gravity" effects. All attempts by this group to observe an anti-gravity effect by reproducing past experiments have been unsuccessful thus far. The foundation has offered a reward of one million euros for a reproducible anti-gravity experiment.
In fiction
The existence of anti-gravity is a common theme in science fiction. The Encyclopedia of Science Fiction lists Francis Godwin's posthumously-published 1638 novel The Man in the Moone, where a "semi-magical" stone has the power to make gravity stronger or weaker, as the earliest variation of the theme. The first story to use anti-gravity for the purpose of space travel, as well as the first to treat the subject from a scientific rather than supernatural angle, was George Tucker's 1827 novel A Voyage to the Moon.
Apergy
Apergy is a term for a fictitious form of anti-gravitational energy first used by Percy Greg in his 1880 sword and planet novel Across the Zodiac. The term was later adopted by other fiction authors such as John Jacob Astor IV in his 1894 science fiction novel A Journey in Other Worlds, and it also appeared outside of explicit fiction writing.
See also
Area 51
Aerodynamic levitation
Artificial gravity
Burkhard Heim
Casimir effect
Clinostat
Electrostatic levitation
Exotic matter
Gravitational interaction of antimatter
Gravitational shielding
Gravitational wave
Ion-propelled aircraft
Heim theory
Magnetic levitation
Nazi UFOs
Optical levitation
Reactionless drive
Tractor beam
References
Bibliography
Further reading
Cady, W. M. (15 September 1952). "Thomas Townsend Brown: Electro-Gravity Device" (File 24–185). Pasadena, CA: Office of Naval Research. Public access to the report was authorized on 1 October 1952.
External links
Responding to Mechanical Antigravity, a NASA paper debunking a wide variety of gyroscopic (and related) devices
Göde Scientific Foundation
KURED Research
General relativity
History of physics
History of science and technology in the United States
Historiography of science
Science fiction themes
Fringe physics | Anti-gravity | [
"Physics",
"Astronomy"
] | 3,322 | [
"Astronomical hypotheses",
"General relativity",
"Anti-gravity",
"Theory of relativity"
] |
342,371 | https://en.wikipedia.org/wiki/Directional%20antenna | A directional antenna or beam antenna is an antenna which radiates or receives greater radio wave power in specific directions. Directional antennas can radiate radio waves in beams, when greater concentration of radiation in a certain direction is desired, or in receiving antennas receive radio waves from one specific direction only. This can increase the power transmitted to receivers in that direction, or reduce interference from unwanted sources. This contrasts with omnidirectional antennas such as dipole antennas which radiate radio waves over a wide angle, or receive from a wide angle.
The extent to which an antenna's angular distribution of radiated power, its radiation pattern, is concentrated in one direction is measured by a parameter called antenna gain. A high-gain antenna (HGA) is a directional antenna with a focused, narrow beam width, permitting more precise targeting of the radio signals. Most commonly referred to during space missions, these antennas are also in use all over Earth, most successfully in flat, open areas where there are no mountains to disrupt radiowaves.
In contrast, a low-gain antenna (LGA) is an omnidirectional antenna, with a broad radiowave beam width, that allows the signal to propagate reasonably well even in mountainous regions and is thus more reliable regardless of terrain. Low-gain antennas are often used in spacecraft as a backup to the high-gain antenna, which transmits a much narrower beam and is therefore susceptible to loss of signal.
All practical antennas are at least somewhat directional, although usually only the direction in the plane parallel to the earth is considered, and practical antennas can easily be omnidirectional in one plane. The most common directional antenna types are
the Yagi–Uda antenna,
the log-periodic antenna, and
the corner reflector antenna.
These antenna types, or combinations of several single-frequency versions of one type or (rarely) a combination of two different types, are frequently sold commercially as residential TV antennas. Cellular repeaters often make use of external directional antennas to give a far greater signal than can be obtained on a standard cell phone. Satellite television receivers usually use parabolic antennas. For long and medium wavelength frequencies, tower arrays are used in most cases as directional antennas.
Principle of operation
When transmitting, a high-gain antenna allows more of the transmitted power to be sent in the direction of the receiver, increasing the received signal strength. When receiving, a high gain antenna captures more of the signal, again increasing signal strength. Due to reciprocity, these two effects are equal—an antenna that makes a transmitted signal 100 times stronger (compared to an isotropic radiator) will also capture 100 times as much energy as the isotropic antenna when used as a receiving antenna. As a consequence of their directivity, directional antennas also send less (and receive less) signal from directions other than the main beam. This property may avoid interference from other out-of-beam transmitters, and always reduces antenna noise. (Noise comes from every direction, but a desired signal will only come from one approximate direction, so the narrower the antenna's beam, the better the crucial signal-to-noise ratio.)
There are many ways to make a high-gain antenna; the most common are parabolic antennas, helical antennas, Yagi-Uda antennas, and phased arrays of smaller antennas of any kind. Horn antennas can also be constructed with high gain, but are less commonly seen. Still other configurations are possible—the Arecibo Observatory used a combination of a line feed with an enormous spherical reflector (as opposed to a more usual parabolic reflector), to achieve extremely high gains at specific frequencies.
Antenna gain
Antenna gain is often quoted with respect to a hypothetical antenna that radiates equally in all directions, an isotropic radiator. This gain, when measured in decibels, is called dBi. Conservation of energy dictates that high gain antennas must have narrow beams. For example, if a high gain antenna makes a 1 Watt transmitter look like a 100 Watt transmitter, then the beam can cover at most 1/100 of the sky (otherwise the total amount of energy radiated in all directions would sum to more than the transmitter power, which is not possible). In turn this implies that high-gain antennas must be physically large, since according to the diffraction limit, the narrower the beam desired, the larger the antenna must be (measured in wavelengths).
Antenna gain can also be measured in dBd, which is gain in decibels compared to the maximum intensity direction of a half wave dipole. In the case of Yagi-type aerials this more or less equates to the gain one would expect from the aerial under test minus all its directors and reflector. It is important not to confuse dBi and dBd; the two differ by 2.15 dB, with the dBi figure being higher, since a dipole has 2.15 dB of gain with respect to an isotropic antenna.
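As a small illustrative sketch of these relationships (not from the article; only the 2.15 dB dipole offset and the energy-conservation bound on beam coverage are used as inputs), in Python:

```python
DIPOLE_GAIN_DBI = 2.15  # gain of a half-wave dipole over an isotropic radiator, in dB

def dbd_to_dbi(gain_dbd: float) -> float:
    """Convert gain referenced to a dipole (dBd) to gain referenced to isotropic (dBi)."""
    return gain_dbd + DIPOLE_GAIN_DBI

def dbi_to_linear(gain_dbi: float) -> float:
    """Convert a gain in dBi to a linear power ratio."""
    return 10 ** (gain_dbi / 10)

def max_beam_fraction(linear_gain: float) -> float:
    """Energy conservation: a beam of linear gain G can cover at most 1/G of the sky."""
    return 1.0 / linear_gain

g_dbi = dbd_to_dbi(17.85)      # 17.85 dBd corresponds to 20 dBi
g_lin = dbi_to_linear(g_dbi)   # 20 dBi is a power ratio of 100
print(f"{g_dbi:.2f} dBi, linear gain {g_lin:.0f}, beam covers at most {max_beam_fraction(g_lin):.2%} of the sky")
```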
Gain is also dependent on the number of elements and the tuning of those elements. Antennas can be tuned to be resonant over a wider spread of frequencies but, all other things being equal, this will mean the gain of the aerial is lower than one tuned for a single frequency or a group of frequencies. For example, in the case of wideband TV antennas the fall off in gain is particularly large at the bottom of the TV transmitting band. In the UK this bottom third of the TV band is known as group A.
Other factors may also affect gain such as aperture (the area the antenna collects signal from, almost entirely related to the size of the antenna but for small antennas can be increased by adding a ferrite rod), and efficiency (again, affected by size, but also resistivity of the materials used and impedance matching). These factors are easy to improve without adjusting other features of the antennas or coincidentally improved by the same factors that increase directivity, and so are typically not emphasized.
Applications
High gain antennas are typically the largest component of deep space probes, and the highest gain radio antennas are physically enormous structures, such as the Arecibo Observatory. The Deep Space Network uses 35 m dishes at about 1 cm wavelengths. This combination gives an antenna gain of about 100,000,000 (or 80 dB, as normally measured), making the transmitter appear about 100 million times stronger, and a receiver about 100 million times more sensitive, provided the target is within the beam. This beam can cover at most one hundred millionth (10^−8) of the sky, so very accurate pointing is required.
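A rough back-of-the-envelope check of the 80 dB figure, assuming the textbook aperture-gain estimate G ≈ η(πD/λ)² and a guessed aperture efficiency of 0.7 (the efficiency value is an assumption, not taken from the article):

```python
import math

def dish_gain_db(diameter_m: float, wavelength_m: float, efficiency: float = 0.7) -> float:
    """Approximate parabolic-dish gain, G = efficiency * (pi * D / wavelength)^2, in dB."""
    linear_gain = efficiency * (math.pi * diameter_m / wavelength_m) ** 2
    return 10 * math.log10(linear_gain)

# A 35 m Deep Space Network dish at roughly 1 cm wavelength
print(f"{dish_gain_db(35.0, 0.01):.1f} dB")  # about 79 dB, i.e. roughly 10^8 in linear terms
```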
Use of high-gain antennas and millimeter-wave communication in a WPAN increases the probability of concurrent scheduling of non-interfering transmissions in a localized area, which results in an immense increase in network throughput. However, the optimum scheduling of concurrent transmissions is an NP-hard problem.
Gallery
See also
Amateur radio direction finding
Antenna boresight
Antenna gain
Cantenna
Cardioid
Cassegrain antenna
Cassegrain reflector
Directivity
Loop antenna
Omnidirectional antenna
Parabolic antenna
Phased array
Radio direction finder
Radio propagation model, Antenna subsection
Radiation pattern
References
External links
Radio frequency antenna types
Radio frequency propagation
Antennas (radio) | Directional antenna | [
"Physics"
] | 1,474 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
342,520 | https://en.wikipedia.org/wiki/Fuel%20efficiency | Fuel efficiency (or fuel economy) is a form of thermal efficiency, meaning the ratio of effort to result of a process that converts chemical potential energy contained in a carrier (fuel) into kinetic energy or work. Overall fuel efficiency may vary per device, which in turn may vary per application, and this spectrum of variance is often illustrated as a continuous energy profile. Non-transportation applications, such as industry, benefit from increased fuel efficiency, especially fossil fuel power plants or industries dealing with combustion, such as ammonia production during the Haber process.
In the context of transport, fuel economy is the energy efficiency of a particular vehicle, given as a ratio of distance traveled per unit of fuel consumed. It is dependent on several factors including engine efficiency, transmission design, and tire design. In most countries, using the metric system, fuel economy is stated as "fuel consumption" in liters per 100 kilometers (L/100 km) or kilometers per liter (km/L or kmpl). In a number of countries still using other systems, fuel economy is expressed in miles per gallon (mpg), for example in the US and usually also in the UK (imperial gallon); there is sometimes confusion as the imperial gallon is 20% larger than the US gallon so that mpg values are not directly comparable. Traditionally, litres per mil were used in Norway and Sweden, but both have aligned to the EU standard of L/100 km.
Fuel consumption is a more accurate measure of a vehicle's performance because it is a linear relationship while fuel economy leads to distortions in efficiency improvements. Weight-specific efficiency (efficiency per unit weight) may be stated for freight, and passenger-specific efficiency (vehicle efficiency per passenger) for passenger vehicles.
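A short Python sketch of the unit conversions described above and of why consumption (L/100 km) is the more linear measure; the gallon and mile constants are exact definitions, everything else is illustrative:

```python
# Conversions between fuel economy (mpg) and fuel consumption (L/100 km).
LITRES_PER_US_GALLON = 3.785411784
LITRES_PER_IMPERIAL_GALLON = 4.54609
KM_PER_MILE = 1.609344

def mpg_to_l_per_100km(mpg: float, litres_per_gallon: float = LITRES_PER_US_GALLON) -> float:
    """Convert miles per gallon to litres per 100 km."""
    km_per_litre = mpg * KM_PER_MILE / litres_per_gallon
    return 100.0 / km_per_litre

def litres_per_10000_km(mpg: float) -> float:
    """Fuel used over a fixed 10,000 km distance, given US mpg."""
    return mpg_to_l_per_100km(mpg) * 100

# The imperial gallon is about 20% larger, so the same car scores a higher mpg figure in the UK.
print(f"30 mpg (US)       = {mpg_to_l_per_100km(30):.1f} L/100 km")
print(f"30 mpg (imperial) = {mpg_to_l_per_100km(30, LITRES_PER_IMPERIAL_GALLON):.1f} L/100 km")

# Consumption is linear in fuel used, economy is not: equal-looking mpg gains
# save very different amounts of fuel over the same distance.
for before, after in [(10, 15), (30, 35)]:
    saved = litres_per_10000_km(before) - litres_per_10000_km(after)
    print(f"{before} -> {after} mpg saves about {saved:.0f} L per 10,000 km")
```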
Vehicle design
Fuel efficiency is dependent on many parameters of a vehicle, including its engine parameters, aerodynamic drag, weight, AC usage, fuel and rolling resistance. There have been advances in all areas of vehicle design in recent decades. Fuel efficiency of vehicles can also be improved by careful maintenance and driving habits.
Hybrid vehicles use two or more power sources for propulsion. In many designs, a small combustion engine is combined with electric motors. Kinetic energy which would otherwise be lost to heat during braking is recaptured as electrical power to improve fuel efficiency. The larger batteries in these vehicles power the car's electronics, allowing the engine to shut off and avoid prolonged idling.
Fleet efficiency
Fleet efficiency describes the average efficiency of a population of vehicles. Technological advances in efficiency may be offset by a change in buying habits with a propensity to heavier vehicles that are less fuel-efficient.
Energy efficiency terminology
Energy efficiency is similar to fuel efficiency but the input is usually in units of energy such as megajoules (MJ), kilowatt-hours (kW·h), kilocalories (kcal) or British thermal units (BTU). The inverse of "energy efficiency" is "energy intensity", or the amount of input energy required for a unit of output such as MJ/passenger-km (of passenger transport), BTU/ton-mile or kJ/t-km (of freight transport), GJ/t (for production of steel and other materials), BTU/(kW·h) (for electricity generation), or litres/100 km (of vehicle travel). Litres per 100 km is also a measure of "energy intensity" where the input is measured by the amount of fuel and the output is measured by the distance travelled. For example: Fuel economy in automobiles.
Given a heat value of a fuel, it would be trivial to convert from fuel units (such as litres of gasoline) to energy units (such as MJ) and conversely. But there are two problems with comparisons made using energy units:
There are two different heat values for any hydrogen-containing fuel which can differ by several percent (see below).
When comparing transportation energy costs, a kilowatt hour of electric energy may require an amount of fuel with heating value of 2 or 3 kilowatt hours to produce it.
Energy content of fuel
The specific energy content of a fuel is the heat energy obtained when a certain quantity is burned (such as a gallon, litre, kilogram). It is sometimes called the heat of combustion. There exist two different values of specific heat energy for the same batch of fuel. One is the high (or gross) heat of combustion and the other is the low (or net) heat of combustion. The high value is obtained when, after the combustion, the water in the exhaust is in liquid form. For the low value, the exhaust has all the water in vapor form (steam). Since water vapor gives up heat energy when it changes from vapor to liquid, the liquid water value is larger since it includes the latent heat of vaporization of water. The difference between the high and low values is significant, about 8 or 9%. This accounts for most of the apparent discrepancy in the heat value of gasoline. In the U.S. (and the table) the high heat values have traditionally been used, but in many other countries, the low heat values are commonly used.
Neither the gross heat of combustion nor the net heat of combustion gives the theoretical amount of mechanical energy (work) that can be obtained from the reaction. (This is given by the change in Gibbs free energy, and is around 45.7 MJ/kg for gasoline.) The actual amount of mechanical work obtained from fuel (the inverse of the specific fuel consumption) depends on the engine. A figure of 17.6 MJ/kg is possible with a gasoline engine, and 19.1 MJ/kg for a diesel engine. See Brake-specific fuel consumption for more information.
Transportation
Fuel efficiency of motor vehicles
Driving technique
Advanced technology
The most efficient machines for converting energy to rotary motion are electric motors, as used in electric vehicles. However, electricity is not a primary energy source so the efficiency of the electricity production has also to be taken into account. Railway trains can be powered using electricity, delivered through an additional running rail, overhead catenary system or by on-board generators used in diesel-electric locomotives as common on the US and UK rail networks. Pollution produced from centralised generation of electricity is emitted at a distant power station, rather than "on site". Pollution can be reduced by using more railway electrification and low carbon power for electricity. Some railways, such as the French SNCF and Swiss federal railways derive most, if not 100% of their power, from hydroelectric or nuclear power stations, therefore atmospheric pollution from their rail networks is very low. This was reflected in a study by AEA Technology between a Eurostar train and airline journeys between London and Paris, which showed the trains on average emitting 10 times less CO2, per passenger, than planes, helped in part by French nuclear generation.
Hydrogen fuel cells
In the future, hydrogen cars may be commercially available. Toyota is test-marketing vehicles powered by hydrogen fuel cells in southern California, where a series of hydrogen fueling stations has been established. Powered either through chemical reactions in a fuel cell that create electricity to drive very efficient electrical motors or by directly burning hydrogen in a combustion engine (near identically to a natural gas vehicle, and similarly compatible with both natural gas and gasoline); these vehicles promise to have near-zero pollution from the tailpipe (exhaust pipe). Potentially the atmospheric pollution could be minimal, provided the hydrogen is made by electrolysis using electricity from non-polluting sources such as solar, wind or hydroelectricity or nuclear. Commercial hydrogen production uses fossil fuels and produces more carbon dioxide than hydrogen.
Because there are pollutants involved in the manufacture and destruction of a car and the production, transmission and storage of electricity and hydrogen, the label "zero pollution" applies only to the car's conversion of stored energy into movement.
In 2004, a consortium of major auto-makers — BMW, General Motors, Honda, Toyota and Volkswagen/Audi — came up with the "Top Tier Detergent Gasoline" standard for gasoline brands in the US and Canada that meet their minimum standards for detergent content and do not contain metallic additives. Top Tier gasoline contains higher levels of detergent additives in order to prevent the build-up of deposits (typically, on fuel injectors and intake valves) known to reduce fuel economy and engine performance.
In microgravity
How fuel combusts affects how much energy is produced. The National Aeronautics and Space Administration (NASA) has investigated fuel consumption in microgravity.
The common distribution of a flame under normal gravity conditions depends on convection, because soot tends to rise to the top of a flame, such as in a candle, making the flame yellow. In microgravity or zero gravity, such as an environment in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become more blue and more efficient. There are several possible explanations for this difference, of which the most likely one given is the hypothesis that the temperature is evenly distributed enough that soot is not formed and complete combustion occurs (National Aeronautics and Space Administration, April 2005). Experiments by NASA in microgravity reveal that diffusion flames in microgravity allow more soot to be completely oxidised after they are produced than diffusion flames on Earth, because of a series of mechanisms that behaved differently in microgravity when compared to normal gravity conditions (LSP-1 experiment results, National Aeronautics and Space Administration, April 2005). Premixed flames in microgravity burn at a much slower rate and more efficiently than even a candle on Earth, and last much longer.
See also
References
External links
US Government website on fuel economy
UK DfT comparisons on road and rail
NASA Offers a $1.5 Million Prize for a Fast and Fuel-Efficient Aircraft
Car Fuel Consumption Official Figures
Spritmonitor.de "the most fuel efficient cars" - Database of thousands of (mostly German) car owners' actual fuel consumption figures (cf. Spritmonitor)
Searchable fuel economy data from the EPA - United States Environmental Protection Agency
Ny Times: A Road Test of Alternative Fuel Visions
Energy economics
Physical quantities
Energy efficiency
Transport economics | Fuel efficiency | [
"Physics",
"Mathematics",
"Environmental_science"
] | 2,084 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Energy economics",
"Physical properties",
"Environmental social science"
] |
342,851 | https://en.wikipedia.org/wiki/Bilayer | A bilayer is a double layer of closely packed atoms or molecules.
The properties of bilayers are often studied in condensed matter physics, particularly in the context of semiconductor devices, where two distinct materials are united to form junctions, such as p–n junctions, Schottky junctions, etc. Layered materials, such as graphene, boron nitride, or transition metal dichalcogenides, have unique electronic properties as bilayer systems and are an active area of current research.
In biology, a common example is the lipid bilayer, which describes the structure of multiple organic structures, such as the membrane of a cell.
See also
Monolayer
Non-carbon nanotube
Semiconductor
Thin film
References
Phases of matter
Thin films | Bilayer | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 152 | [
"Materials science stubs",
"Planes (geometry)",
"Phases of matter",
"Materials science",
"Nanotechnology stubs",
"Condensed matter physics",
"Nanotechnology",
"Condensed matter stubs",
"Thin films",
"Matter"
] |
343,031 | https://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtre%E2%80%93Robertson%E2%80%93Walker%20metric | The Friedmann–Lemaître–Robertson–Walker metric (FLRW; ) is a metric that describes a homogeneous, isotropic, expanding (or otherwise, contracting) universe that is path-connected, but not necessarily simply connected. The general form of the metric follows from the geometric properties of homogeneity and isotropy; Einstein's field equations are only needed to derive the scale factor of the universe as a function of time. Depending on geographical or historical preferences, the set of the four scientists – Alexander Friedmann, Georges Lemaître, Howard P. Robertson and Arthur Geoffrey Walker – are variously grouped as Friedmann, Friedmann–Robertson–Walker (FRW), Robertson–Walker (RW), or Friedmann–Lemaître (FL). This model is sometimes called the Standard Model of modern cosmology, although such a description is also associated with the further developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s.
Concept
The metric is a consequence of assuming that the mass in the universe has constant density – homogeneity – and is the same in all directions – isotropy. Assuming isotropy alone is sufficient to reduce the possible motions of mass in the universe to radial velocity variations. The Copernican principle, that our observation point in the universe is the equivalent to every other point, combined with isotropy ensures homogeneity. Direct observation of stars has shown their velocities to be dominated by radial recession, validating these assumptions for cosmological models.
To measure distances in this space, that is to define a metric, we can compare the positions of two points in space moving along with their local radial velocity of mass. Such points can be thought of as ideal galaxies. Each galaxy can be given a clock to track local time, with the clocks synchronized by imagining the radial velocities run backwards until the clocks coincide in space. The equivalence principle applied to each galaxy means distance measurements can be made using special relativity locally. So a distance can be related to the local time and the coordinates:
An isotropic, homogeneous mass distribution is highly symmetric. Rewriting the metric in spherical coordinates reduces four coordinates to three coordinates. The radial coordinate is written as a product of a comoving coordinate, , and a time dependent scale factor . The resulting metric can be written in several forms. Two common ones are:
or
where is the angle between the two locations and
(The meaning of in these equations is not the same). Other common variations use a dimensionless scale factor
where time zero is now.
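The displayed line elements above were lost in text extraction. For orientation only, one standard convention for the reduced-circumference polar form discussed later in this article is sketched below; the signature and symbol choices are assumptions and may differ from the form the article originally displayed.

```latex
% FLRW line element, reduced-circumference polar coordinates (one common convention)
\mathrm{d}s^2 = -c^2\,\mathrm{d}t^2
  + a(t)^2\left[\frac{\mathrm{d}r^2}{1-kr^2}
  + r^2\left(\mathrm{d}\theta^2 + \sin^2\theta\,\mathrm{d}\varphi^2\right)\right]
```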
FLRW models
Relativistic cosmology models based on the FLRW metric and obeying the Friedmann equations are called FRW models.
These models are the basis of the standard Big Bang cosmological model including the current ΛCDM model.
To apply the metric to cosmology and predict its time evolution via the scale factor requires Einstein's field equations together with a way of calculating the density, such as a cosmological equation of state.
This process allows an approximate analytic solution of Einstein's field equations, giving the Friedmann equations when the energy–momentum tensor is similarly assumed to be isotropic and homogeneous. The resulting equations are:
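The resulting equations did not survive extraction. As a hedged reconstruction in one widely used convention (with the cosmological constant written explicitly; the article's exact notation may differ), the Friedmann equations read:

```latex
\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
```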
Because the FLRW model assumes homogeneity, some popular accounts mistakenly assert that the Big Bang model cannot account for the observed lumpiness of the universe. In a strictly FLRW model, there are no clusters of galaxies or stars, since these are objects much denser than a typical part of the universe. Nonetheless, the FLRW model is used as a first approximation for the evolution of the real, lumpy universe because it is simple to calculate, and models that calculate the lumpiness in the universe are added onto the FLRW models as extensions. Most cosmologists agree that the observable universe is well approximated by an almost FLRW model, i.e., a model that follows the FLRW metric apart from primordial density fluctuations. The theoretical implications of the various extensions to the FLRW model appear to be well understood, and the goal is to make these consistent with observations from COBE and WMAP.
Interpretation
The pair of equations given above is equivalent to the following pair of equations
with , the spatial curvature index, serving as a constant of integration for the first equation.
The first equation can be derived also from thermodynamical considerations and is equivalent to the first law of thermodynamics, assuming the expansion of the universe is an adiabatic process (which is implicitly assumed in the derivation of the Friedmann–Lemaître–Robertson–Walker metric).
The second equation states that both the energy density and the pressure cause the expansion rate of the universe to decrease, i.e., both cause a deceleration in the expansion of the universe. This is a consequence of gravitation, with pressure playing a similar role to that of energy (or mass) density, according to the principles of general relativity. The cosmological constant, on the other hand, causes an acceleration in the expansion of the universe.
Cosmological constant
The cosmological constant term can be omitted if we make the following replacements
Therefore, the cosmological constant can be interpreted as arising from a form of energy that has negative pressure, equal in magnitude to its (positive) energy density:
which is an equation of state of vacuum with dark energy.
An attempt to generalize this to
would not have general invariance without further modification.
In fact, in order to get a term that causes an acceleration of the universe expansion, it is enough to have a scalar field that satisfies
Such a field is sometimes called quintessence.
Newtonian interpretation
This is due to McCrea and Milne, although sometimes incorrectly ascribed to Friedmann. The Friedmann equations are equivalent to this pair of equations:
The first equation says that the decrease in the mass contained in a fixed cube (whose side is momentarily a) is the amount that leaves through the sides due to the expansion of the universe plus the mass equivalent of the work done by pressure against the material being expelled. This is the conservation of mass–energy (first law of thermodynamics) contained within a part of the universe.
The second equation says that the kinetic energy (seen from the origin) of a particle of unit mass moving with the expansion plus its (negative) gravitational potential energy (relative to the mass contained in the sphere of matter closer to the origin) is equal to a constant related to the curvature of the universe. In other words, the energy (relative to the origin) of a co-moving particle in free-fall is conserved. General relativity merely adds a connection between the spatial curvature of the universe and the energy of such a particle: positive total energy implies negative curvature and negative total energy implies positive curvature.
The cosmological constant term is assumed to be treated as dark energy and thus merged into the density and pressure terms.
During the Planck epoch, one cannot neglect quantum effects. So they may cause a deviation from the Friedmann equations.
General metric
The FLRW metric assume homogeneity and isotropy of space. It also assumes that the spatial component of the metric can be time-dependent. The generic metric that meets these conditions is
where ranges over a 3-dimensional space of uniform curvature, that is, elliptical space, Euclidean space, or hyperbolic space. It is normally written as a function of three spatial coordinates, but there are several conventions for doing so, detailed below. does not depend on t – all of the time dependence is in the function a(t), known as the "scale factor".
Reduced-circumference polar coordinates
In reduced-circumference polar coordinates the spatial metric has the form
k is a constant representing the curvature of the space. There are two common unit conventions:
k may be taken to have units of length−2, in which case r has units of length and a(t) is unitless. k is then the Gaussian curvature of the space at the time when a(t) = 1. r is sometimes called the reduced circumference because it is equal to the measured circumference of a circle (at that value of r), centered at the origin, divided by 2π (like the r of Schwarzschild coordinates). Where appropriate, a(t) is often chosen to equal 1 in the present cosmological era, so that measures comoving distance.
Alternatively, k may be taken to belong to the set (for negative, zero, and positive curvature respectively). Then r is unitless and a(t) has units of length. When , a(t) is the radius of curvature of the space, and may also be written R(t).
A disadvantage of reduced circumference coordinates is that they cover only half of the 3-sphere in the case of positive curvature—circumferences beyond that point begin to decrease, leading to degeneracy. (This is not a problem if space is elliptical, i.e. a 3-sphere with opposite points identified.)
Hyperspherical coordinates
In hyperspherical or curvature-normalized coordinates the coordinate r is proportional to radial distance; this gives
where is as before and
As before, there are two common unit conventions:
k may be taken to have units of length−2, in which case r has units of length and a(t) is unitless. k is then the Gaussian curvature of the space at the time when a(t) = 1. Where appropriate, a(t) is often chosen to equal 1 in the present cosmological era, so that measures comoving distance.
Alternatively, as before, k may be taken to belong to the set (for negative, zero, and positive curvature respectively). Then r is unitless and a(t) has units of length. When , a(t) is the radius of curvature of the space, and may also be written R(t). Note that when , r is essentially a third angle along with θ and φ. The letter χ may be used instead of r.
Though it is usually defined piecewise as above, S is an analytic function of both k and r. It can also be written as a power series
or as
where sinc is the unnormalized sinc function and is one of the imaginary, zero or real square roots of k. These definitions are valid for all k.
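The piecewise definition referred to above was stripped out. A reconstruction in the dimensional-k convention (the symbol S and the use of the unnormalized sinc follow the surrounding text; the exact notation is assumed):

```latex
S_k(r) =
\begin{cases}
  \sin\!\left(\sqrt{k}\,r\right)/\sqrt{k}, & k > 0\\
  r, & k = 0\\
  \sinh\!\left(\sqrt{|k|}\,r\right)/\sqrt{|k|}, & k < 0
\end{cases}
\qquad\text{equivalently}\qquad
S_k(r) = r\,\operatorname{sinc}\!\left(\sqrt{k}\,r\right)
```

with sinc unnormalized and the square root allowed to be imaginary, zero or real, which is what makes the single expression valid for all k as the text states.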
Cartesian coordinates
When k = 0 one may write simply
This can be extended to by defining
, and
where r is one of the radial coordinates defined above, but this is rare.
Curvature
Cartesian coordinates
In flat FLRW space using Cartesian coordinates, the surviving components of the Ricci tensor are
and the Ricci scalar is
Spherical coordinates
In more general FLRW space using spherical coordinates (called "reduced-circumference polar coordinates" above), the surviving components of the Ricci tensor are
and the Ricci scalar is
Name and history
The Soviet mathematician Alexander Friedmann first derived the main results of the FLRW model in 1922 and 1924. Although the prestigious physics journal Zeitschrift für Physik published his work, it remained relatively unnoticed by his contemporaries. Friedmann was in direct communication with Albert Einstein, who, on behalf of Zeitschrift für Physik, acted as the scientific referee of Friedmann's work. Eventually Einstein acknowledged the correctness of Friedmann's calculations, but failed to appreciate the physical significance of Friedmann's predictions.
Friedmann died in 1925. In 1927, Georges Lemaître, a Belgian priest, astronomer and periodic professor of physics at the Catholic University of Leuven, arrived independently at results similar to those of Friedmann and published them in the Annales de la Société Scientifique de Bruxelles (Annals of the Scientific Society of Brussels). In the face of the observational evidence for the expansion of the universe obtained by Edwin Hubble in the late 1920s, Lemaître's results were noticed in particular by Arthur Eddington, and in 1930–31 Lemaître's paper was translated into English and published in the Monthly Notices of the Royal Astronomical Society.
Howard P. Robertson from the US and Arthur Geoffrey Walker from the UK explored the problem further during the 1930s. In 1935 Robertson and Walker rigorously proved that the FLRW metric is the only one on a spacetime that is spatially homogeneous and isotropic (as noted above, this is a geometric result and is not tied specifically to the equations of general relativity, which were always assumed by Friedmann and Lemaître).
This solution, often called the Robertson–Walker metric since they proved its generic properties, is different from the dynamical "Friedmann–Lemaître" models, which are specific solutions for a(t) that assume that the only contributions to stress–energy are cold matter ("dust"), radiation, and a cosmological constant.
Einstein's radius of the universe
Einstein's radius of the universe is the radius of curvature of space of Einstein's universe, a long-abandoned static model that was supposed to represent our universe in idealized form. Putting
in the Friedmann equation, the radius of curvature of space of this universe (Einstein's radius) is
where c is the speed of light, G is the Newtonian constant of gravitation, and ρ is the density of space of this universe. The numerical value of Einstein's radius is of the order of 10^10 light years, or 10 billion light years.
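A hedged reconstruction of the stripped formula, obtained from the static solution of the Friedmann equations given above (a standard result, not the article's original rendering):

```latex
R_{E} = \frac{c}{\sqrt{4\pi G \rho}}
```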
Current status
The current standard model of cosmology, the Lambda-CDM model, uses the FLRW metric. By combining the observation data from some experiments such as WMAP and Planck with theoretical results of Ehlers–Geren–Sachs theorem and its generalization, astrophysicists now agree that the early universe is almost homogeneous and isotropic (when averaged over a very large scale) and thus nearly a FLRW spacetime. That being said, attempts to confirm the purely kinematic interpretation of the Cosmic Microwave Background (CMB) dipole through studies of radio galaxies and quasars show disagreement in the magnitude. Taken at face value, these observations are at odds with the Universe being described by the FLRW metric. Moreover, one can argue that there is a maximum value to the Hubble constant within an FLRW cosmology tolerated by current observations, = , and depending on how local determinations converge, this may point to a breakdown of the FLRW metric in the late universe, necessitating an explanation beyond the FLRW metric.
References
Further reading
. (See Chapter 23 for a particularly clear and concise introduction to the FLRW models.)
Coordinate charts in general relativity
Exact solutions in general relativity
Physical cosmology
Metric tensors | Friedmann–Lemaître–Robertson–Walker metric | [
"Physics",
"Astronomy",
"Mathematics",
"Engineering"
] | 3,075 | [
"Exact solutions in general relativity",
"Tensors",
"Theoretical physics",
"Mathematical objects",
"Astrophysics",
"Equations",
"Metric tensors",
"Physical cosmology",
"Coordinate systems",
"Coordinate charts in general relativity",
"Astronomical sub-disciplines"
] |
344,116 | https://en.wikipedia.org/wiki/Korteweg%E2%80%93De%20Vries%20equation | In mathematics, the Korteweg–De Vries (KdV) equation is a partial differential equation (PDE) which serves as a mathematical model of waves on shallow water surfaces. It is particularly notable as the prototypical example of an integrable PDE, exhibiting typical behaviors such as a large number of explicit solutions, in particular soliton solutions, and an infinite number of conserved quantities, despite the nonlinearity which typically renders PDEs intractable. The KdV can be solved by the inverse scattering method (ISM). In fact, Clifford Gardner, John M. Greene, Martin Kruskal and Robert Miura developed the classical inverse scattering method to solve the KdV equation.
The KdV equation was first introduced by Joseph Boussinesq and rediscovered by Diederik Korteweg and Gustav de Vries in 1895, who found the simplest solution, the one-soliton solution. Understanding of the equation and behavior of solutions was greatly advanced by the computer simulations of Norman Zabusky and Kruskal in 1965 and then the development of the inverse scattering transform in 1967.
Definition
The KdV equation is a partial differential equation that models (spatially) one-dimensional nonlinear dispersive nondissipative waves described by a function adhering to:
where a third-order spatial derivative term accounts for dispersion and a nonlinear first-order term is an advection term.
For modelling shallow water waves, the unknown function is the height displacement of the water surface from its equilibrium height.
The constant in front of the last term is conventional but of no great significance: multiplying t, x, and the unknown function by constants can be used to make the coefficients of any of the three terms equal to any given non-zero constants.
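The displayed form of the equation was lost in extraction; in the convention implied by the sentence above (a factor of −6 on the nonlinear term, which is the usual normalization), the KdV equation for a field φ(x, t) is:

```latex
\partial_t \phi + \partial_x^{3}\phi - 6\,\phi\,\partial_x \phi = 0
```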
Soliton solutions
One-soliton solution
Consider solutions in which a fixed waveform, given by f(X), maintains its shape as it travels to the right at phase speed c, so that the solution has the form φ(x, t) = f(x − ct). Substituting it into the KdV equation gives the ordinary differential equation
or, integrating with respect to ,
where is a constant of integration. Interpreting the independent variable above as a virtual time variable, this means
satisfies Newton's equation of motion of a particle of unit mass in a cubic potential
.
If
then the potential function has local maximum at ; there is a solution in which starts at this point at 'virtual time'
, eventually slides down to the local minimum, then back up the other side, reaching an equal height, and then reverses direction, ending up at the local maximum again at time . In other words, approaches as . This is the characteristic shape of the solitary wave solution.
More precisely, the solution is
where sech stands for the hyperbolic secant and a is an arbitrary constant. This describes a right-moving soliton with velocity c.
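A hedged reconstruction of the stripped one-soliton formula, using the −6 convention above; the symbols c (phase speed) and a (position constant) are assumed names:

```latex
\phi(x,t) = -\frac{c}{2}\,\operatorname{sech}^{2}\!\left[\frac{\sqrt{c}}{2}\,(x - c\,t - a)\right]
```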
N-soliton solution
There is a known expression for a solution which is an -soliton solution, which at late times resolves into separate single solitons. The solution depends on a set of decreasing positive parameters and a set of non-zero parameters . The solution is given in the form
where the components of the matrix are
This is derived using the inverse scattering method.
Integrals of motion
The KdV equation has infinitely many integrals of motion, functionals on a solution which do not change with time. They can be given explicitly as
where the polynomials are defined recursively by
The first few integrals of motion are:
the mass
the momentum
the energy .
Only the odd-numbered terms result in non-trivial (meaning non-zero) integrals of motion.
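The stripped expressions for the first three integrals of motion, reconstructed in the same −6 convention (normalizations vary between sources, so treat the constant factors as assumptions):

```latex
\int \phi\,\mathrm{d}x, \qquad
\int \phi^{2}\,\mathrm{d}x, \qquad
\int \left[\,2\phi^{3} - \left(\partial_x \phi\right)^{2}\right]\mathrm{d}x
```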
Lax pairs
The KdV equation
can be reformulated as the Lax equation
with a Sturm–Liouville operator:
where is the commutator such that . The Lax pair accounts for the infinite number of first integrals of the KdV equation.
In fact, is the time-independent Schrödinger operator (disregarding constants) with potential . It can be shown that due to this Lax formulation that in fact the eigenvalues do not depend on .
Zero-curvature representation
Setting the components of the Lax connection to be
the KdV equation is equivalent to the zero-curvature equation for the Lax connection,
Least action principle
The Korteweg–De Vries equation
is the Euler–Lagrange equation of motion derived from the Lagrangian density,
with defined by
Since the Lagrangian (eq (1)) contains second derivatives, the Euler–Lagrange equation of motion for this field is
where is a derivative with respect to the component.
A sum over is implied so eq (2) really reads,
Evaluate the five terms of eq (3) by plugging in eq (1),
Remember the definition , so use that to simplify the above terms,
Finally, plug these three non-zero terms back into eq (3) to see
which is exactly the KdV equation
Long-time asymptotics
It can be shown that any sufficiently fast decaying smooth solution will eventually split into a finite superposition of solitons travelling to the right plus a decaying dispersive part travelling to the left. This was first observed by and can be rigorously proven using the nonlinear steepest descent analysis for oscillatory Riemann–Hilbert problems.
History
The history of the KdV equation started with experiments by John Scott Russell in 1834, followed by theoretical investigations by Lord Rayleigh and Joseph Boussinesq around 1870 and, finally, Korteweg and De Vries in 1895.
The KdV equation was not studied much after this until Zabusky and Kruskal (1965) discovered numerically that its solutions seemed to decompose at large times into a collection of "solitons": well separated solitary waves. Moreover, the solitons seem to be almost unaffected in shape by passing through each other (though this could cause a change in their position). They also made the connection to earlier numerical experiments by Fermi, Pasta, Ulam, and Tsingou by showing that the KdV equation was the continuum limit of the FPUT system. Development of the analytic solution by means of the inverse scattering transform was done in 1967 by Gardner, Greene, Kruskal and Miura.
The KdV equation is now seen to be closely connected to Huygens' principle.
Applications and connections
The KdV equation has several connections to physical problems. In addition to being the governing equation of the string in the Fermi–Pasta–Ulam–Tsingou problem in the continuum limit, it approximately describes the evolution of long, one-dimensional waves in many physical settings, including:
shallow-water waves with weakly non-linear restoring forces,
long internal waves in a density-stratified ocean,
ion acoustic waves in a plasma,
acoustic waves on a crystal lattice.
The KdV equation can also be solved using the inverse scattering transform such as those applied to the non-linear Schrödinger equation.
KdV equation and the Gross–Pitaevskii equation
Considering the simplified solutions of the form
we obtain the KdV equation as
or
Integrating and taking the special case in which the integration constant is zero, we have:
which is the special case of the generalized stationary Gross–Pitaevskii equation (GPE)
Therefore, for the certain class of solutions of generalized GPE ( for the true one-dimensional condensate and
while using the three dimensional equation in one dimension), two equations are one. Furthermore, taking the case with the minus sign and the real, one obtains an attractive self-interaction that should yield a bright soliton.
Variations
Many different variations of the KdV equations have been studied. Some are listed in the following table.
See also
Advection-diffusion equation
Benjamin–Bona–Mahony equation
Boussinesq approximation (water waves)
Cnoidal wave
Dispersion (water waves)
Dispersionless equation
Fifth-order Korteweg–De Vries equation
Kadomtsev–Petviashvili equation
KdV hierarchy
Modified KdV–Burgers equation
Novikov–Veselov equation
Schamel equation
Ursell number
Vector soliton
Notes
References
External links
Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Korteweg–De Vries equation at NEQwiki, the nonlinear equations encyclopedia.
Cylindrical Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Modified Korteweg–De Vries equation at EqWorld: The World of Mathematical Equations.
Modified Korteweg–De Vries equation at NEQwiki, the nonlinear equations encyclopedia.
Derivation of the Korteweg–De Vries equation for a narrow canal.
Three Solitons Solution of KdV Equation –
Three Solitons (unstable) Solution of KdV Equation –
Mathematical aspects of equations of Korteweg–De Vries type are discussed on the Dispersive PDE Wiki.
Solitons from the Korteweg–De Vries Equation by S. M. Blinder, The Wolfram Demonstrations Project.
Solitons & Nonlinear Wave Equations
Eponymous equations of physics
Partial differential equations
Exactly solvable models
Integrable systems
Solitons
Equations of fluid dynamics | Korteweg–De Vries equation | [
"Physics",
"Chemistry"
] | 1,904 | [
"Equations of fluid dynamics",
"Equations of physics",
"Integrable systems",
"Theoretical physics",
"Eponymous equations of physics",
"Fluid dynamics"
] |
344,142 | https://en.wikipedia.org/wiki/Audio%20frequency | An audio frequency or audible frequency (AF) is a periodic vibration whose frequency is audible to the average human. The SI unit of frequency is the hertz (Hz). It is the property of sound that most determines pitch.
The generally accepted standard hearing range for humans is 20 to 20,000 Hz. In air at atmospheric pressure, these represent sound waves with wavelengths of about 17 metres to 1.7 centimetres. Frequencies below 20 Hz are generally felt rather than heard, assuming the amplitude of the vibration is great enough. Sound frequencies above 20 kHz are called ultrasonic.
Sound propagates as mechanical vibration waves of pressure and displacement, in air or other substances. In general, frequency components of a sound determine its "color", its timbre. When speaking about the frequency (in singular) of a sound, it means the property that most determines its pitch. Higher pitches have higher frequency, and lower pitches are lower frequency.
The frequencies an ear can hear are limited to a specific range of frequencies. The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz), though the high frequency limit usually reduces with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency.
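A minimal illustration of that inverse proportionality, assuming a nominal speed of sound of 343 m/s in air (an assumed round value):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # dry air near 20 °C (assumed nominal value)

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength = speed of sound / frequency, assuming speed is frequency-independent."""
    return SPEED_OF_SOUND_M_PER_S / frequency_hz

for f in (20, 1_000, 20_000):
    print(f"{f:>6} Hz -> wavelength {wavelength_m(f):.4f} m")
```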
Frequencies and descriptions
See also
Absolute threshold of hearing
Hypersonic effect, controversial claim for human perception above 20,000 Hz
Loudspeaker
Musical acoustics
Piano key frequencies
Scientific pitch notation
Whistle register
References
Acoustics
Sound
Sound measurements
Physical quantities
Audio engineering | Audio frequency | [
"Physics",
"Mathematics",
"Engineering"
] | 343 | [
"Physical phenomena",
"Sound measurements",
"Physical quantities",
"Quantity",
"Classical mechanics",
"Acoustics",
"Electrical engineering",
"Audio engineering",
"Physical properties"
] |
344,173 | https://en.wikipedia.org/wiki/Rotational%20energy | Rotational energy or angular kinetic energy is kinetic energy due to the rotation of an object and is part of its total kinetic energy. Looking at rotational energy separately around an object's axis of rotation, the following dependence on the object's moment of inertia is observed:
where
The mechanical work required for or applied during rotation is the torque times the rotation angle. The instantaneous power of an angularly accelerating body is the torque times the angular velocity. For free-floating (unattached) objects, the axis of rotation is commonly around its center of mass.
Note the close relationship between the result for rotational energy and the energy held by linear (or translational) motion:
In the rotating system, the moment of inertia, I, takes the role of the mass, m, and the angular velocity, , takes the role of the linear velocity, v. The rotational energy of a rolling cylinder varies from one half of the translational energy (if it is massive) to the same as the translational energy (if it is hollow).
An example is the calculation of the rotational kinetic energy of the Earth. As the Earth has a sidereal rotation period of 23.93 hours, it has an angular velocity of about 7.29×10^−5 rad/s. The Earth has a moment of inertia, I = 8.04×10^37 kg·m². Therefore, it has a rotational kinetic energy of about 2.14×10^29 J.
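The same arithmetic as a short Python check; the moment-of-inertia figure is the commonly quoted approximate value and is an assumption here rather than a number taken from this article:

```python
import math

SIDEREAL_DAY_S = 23.93 * 3600          # sidereal rotation period, seconds
EARTH_MOMENT_OF_INERTIA = 8.04e37      # kg m^2 (commonly quoted approximate value)

omega = 2 * math.pi / SIDEREAL_DAY_S                 # angular velocity, rad/s
energy = 0.5 * EARTH_MOMENT_OF_INERTIA * omega ** 2  # E = (1/2) I omega^2

print(f"omega = {omega:.3e} rad/s")   # about 7.29e-5 rad/s
print(f"E_rot = {energy:.3e} J")      # about 2.1e29 J
```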
Part of the Earth's rotational energy can also be tapped using tidal power. Additional friction of the two global tidal waves creates energy in a physical manner, infinitesimally slowing down Earth's angular velocity ω. Due to the conservation of angular momentum, this process transfers angular momentum to the Moon's orbital motion, increasing its distance from Earth and its orbital period (see tidal locking for a more detailed explanation of this process).
See also
Flywheel
List of energy storage projects
Rigid rotor
Rotational spectroscopy
Notes
References
Resnick, R. and Halliday, D. (1966) PHYSICS, Section 12-5, John Wiley & Sons Inc.
Forms of energy
Rotation | Rotational energy | [
"Physics"
] | 409 | [
"Physical phenomena",
"Physical quantities",
"Classical mechanics",
"Rotation",
"Forms of energy",
"Energy (physics)",
"Motion (physics)"
] |
344,413 | https://en.wikipedia.org/wiki/World%20Solar%20Challenge | The World Solar Challenge (WSC), since 2013 named Bridgestone World Solar Challenge, is an international event for solar powered cars driving 3000 kilometres through the Australian outback.
With the exception of a four-year gap between the 2019 and 2023 events, owing to the cancellation of the 2021 event, the World Solar Challenge is typically held every two years. The course is over through the Australian Outback, from Darwin, Northern Territory, to Adelaide, South Australia. The event was created to foster the development of solar-powered vehicles.
The WSC attracts teams from around the world, most of which are fielded by universities or corporations, although some are fielded by high schools. It has a 32-year history spanning fifteen events, with the inaugural event taking place in 1987. Initially held once every three years, the event became biennial from the turn of the century.
Since 2001 the WSC was won seven times out of ten efforts by the Nuna team and cars of the Delft University of Technology from the Netherlands. The Tokai Challenger, built by the Tokai University of Japan, was able to win 2009 and 2011. In the most recent editions (2019 & 2023), the Belgian Innoptus Solar Team formerly known as the Agoria Solar Team from KU Leuven University won.
Starting in 2007, the WSC has multiple classes. After the German team of Bochum University of Applied Sciences competed with a four-wheeled, multi-seat car, the BoCruiser (in 2009), in 2013 a radically new "Cruiser Class" was introduced, stimulating the technological development of practically usable, and ideally road-legal, multi-seater solar vehicles. Since its inception, Solar Team Eindhoven's four- and five-seat Stella solar cars from Eindhoven University of Technology (Netherlands) won the Cruiser Class in all four events so far.
Remarkable technological progress has been achieved since the General Motors led, highly experimental, single-seat Sunraycer prototype first won the WSC with an average speed of . Once competing cars became steadily more capable to match or exceed legal maximum speeds on the Australian highway, the challenge rules were consistently made more demanding and challenging — for instance after Honda's Dream car first won with an average speed exceeding in 1996. In 2005 the Dutch Nuna team were the first to beat an average speed of .
The 2017 Cruiser class winner, the five-seat Stella Vie vehicle, was able to carry an average of 3.4 occupants at an average speed of . Like its two predecessors, the vehicle was successfully road registered by the Dutch team, further emphasizing the great progress in real-world compliance and practicality that has been achieved.
The WSC held its 30th anniversary event on 8–15 October 2017.
Objective
The objective of the challenge is to promote the innovation of solar-powered cars. It is a design competition at its core, and every team/car that successfully crosses the finish line is considered successful. Teams from universities and enterprises participate. In 2015, 43 teams from 23 countries competed in the challenge.
Challenge strategy
Efficient balancing of power resources and power consumption is the key to success during the challenge. At any moment in time, the optimal driving speed depends on the weather forecast and the remaining capacity of the batteries. The team members in the escort cars will continuously remotely retrieve data from the solar car about its condition and use these data as input for prior developed computer programs to work out the best driving strategy.
It is equally important to charge the batteries as much as possible in periods of daylight when the car is not driving. To capture as much solar energy as possible, the solar panels are generally directed such that these are perpendicular to the incident sun rays. Sometimes the whole solar array is tilted for this purpose.
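As a deliberately simplified sketch of the kind of trade-off such strategy software makes, in Python; every numerical constant (drag area, rolling resistance, array power, battery energy) is an invented placeholder, not real team data:

```python
# Toy energy balance for choosing a cruising speed over the remaining driving hours.
AIR_DENSITY = 1.2        # kg/m^3
DRAG_AREA = 0.09         # Cd * A in m^2 (placeholder)
ROLLING_FORCE = 25.0     # N of rolling resistance (placeholder)
ARRAY_POWER_W = 900.0    # expected solar input this period (placeholder forecast)
BATTERY_WH = 3000.0      # usable energy left in the battery (placeholder)

def power_needed_w(speed_ms: float) -> float:
    """Electrical power to hold a speed: aerodynamic drag plus rolling resistance."""
    aero = 0.5 * AIR_DENSITY * DRAG_AREA * speed_ms ** 3
    rolling = ROLLING_FORCE * speed_ms
    return aero + rolling

def best_cruise_kmh(hours_remaining: float) -> int:
    """Fastest whole km/h whose net battery drain is sustainable for the remaining time."""
    best = 0
    for kmh in range(40, 131):
        net_drain_w = power_needed_w(kmh / 3.6) - ARRAY_POWER_W
        if net_drain_w * hours_remaining <= BATTERY_WH:
            best = kmh
    return best

print(f"suggested cruise speed: {best_cruise_kmh(hours_remaining=5)} km/h")
```

A real strategy model would also fold in gradients, wind and cloud forecasts along the route and battery efficiency, but the basic structure — pick the fastest speed whose net drain the battery can sustain — is the same.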
Important rules
The timed portion of the challenge stops at the outskirts of Adelaide, 2998 km from Darwin. However, for the timings recorded at that point to count, competitors must reach the official finish line in the centre of the city under solar power alone.
As the challenge utilises public roads, the cars have to adhere to the normal traffic regulations.
A minimum of 2 and maximum 4 drivers have to be registered. If the weight of a driver (including clothes) is less than , ballast will be added to make up the difference.
Driving time is between 8:00 and 17:00 (from 8 a.m. to 5 p.m.). In order to select a suitable place for the overnight stop (alongside the highway) it is possible to extend the driving period for a maximum of 10 minutes, which extra driving time will be compensated by a starting time delay the next day.
At various points along the route there are checkpoints where every car has to pause for 30 minutes. Only limited maintenance tasks (no repairs) are allowed during these compulsory stops.
The capacity of the batteries is limited to a mass for each chemistry (such as Lithium Ion) equivalent to approximately 5 kWh maximum. At the start of the route, the batteries may be fully charged. Batteries may not be replaced during the competition, except in the situation of a breakdown. However, in that case, a penalty time will apply.
Except for the maximum outer dimensions, there are no further restrictions on the design and construction of the car.
The deceleration of the dual braking system must be at least 3.8 m/s2 (149.6 in/s2).
Rule evolution
By 2005, several teams were handicapped by the South Australian speed limit of , as well as the difficulties of support crews keeping up with solar vehicles. It was generally agreed that the challenge of building a solar vehicle capable of crossing Australia at vehicular speeds had been met and exceeded. A new challenge was set: to build a new generation of solar car, which, with little modification, could be the basis for a practical proposition for sustainable transport.
Entrants to the 2007 event chose between racing in the Adventure and Challenge classes. Challenge class cars were restricted to 6 square meters of Si solar collectors (a 25% reduction), and later to 3 square meters for GaAs, driver access and egress were required to be unaided, seating position upright, steering controlled with a steering wheel, and many new safety requirements were added. Competitors also had to adhere to the new speed limit across the Northern Territory portion of the Stuart Highway. The 2007 event again featured a range of supplementary classes, including the Greenfleet class, which features a range of non-solar energy-efficient vehicles exhibiting their fuel efficiency.
For the 2009 challenge class several new rules were adopted, including the use of profiled tyres. Battery weight limits depend on secondary cell chemistries so that competitors have similar energy storage capabilities. Battery mass is now 20 kg for Li-ion and Li-polymer battery (was reduced from 25 and 21 kg in the past).
In 2013, a new Cruiser Class was introduced. The event was run in four stages. Final placings were based on a combination of time taken (56.6%), number of passengers carried (5.7%), battery energy drawn from the grid between stages (18.9%), and a subjective assessment of practicality (18.9%).
In the 2015 Cruiser Class regulations, the scoring formula emphasized practicality less than before: elapsed time accounted for 70% of the score, passengers 5%, grid energy use 15%, and practicality 10%.
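To illustrate how such weightings combine into a single result, here is a minimal sketch of a weighted Cruiser Class score. The percentages are the 2013 and 2015 figures quoted above; the component values for the example car are invented, and the actual regulations normalise each component in considerably more detail than shown.

```python
# Illustrative weighted Cruiser Class score; weights are the percentages quoted
# in the text, and the example component values (each normalised to 0-1, higher
# is better) are placeholders.

WEIGHTS = {
    2013: {"time": 0.566, "passengers": 0.057, "grid_energy": 0.189, "practicality": 0.189},
    2015: {"time": 0.70,  "passengers": 0.05,  "grid_energy": 0.15,  "practicality": 0.10},
}

def cruiser_score(year, components):
    """Weighted sum of the normalised component scores."""
    weights = WEIGHTS[year]
    return sum(weights[key] * components[key] for key in weights)

example_car = {"time": 0.82, "passengers": 0.50, "grid_energy": 0.95, "practicality": 0.70}
for year in sorted(WEIGHTS):
    print(year, round(cruiser_score(year, example_car), 3))
```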
In 2017, solar array areas were reduced, and the Cruiser Class was changed to a Regularity Trial, with scoring based on energy efficiency and practicality.
History
The idea for the competition originates from Danish-born adventurer Hans Tholstrup. He was the first to circumnavigate the Australian continent in an open boat. At a later stage in his life he became involved in various competitions with fuel-saving cars and trucks. Already in the 1980s, he became aware of the necessity to explore sustainable energy as a replacement for the limited available fossil fuel. Sponsored by BP, he designed the world's first solar car, called The Quiet Achiever, and traversed the distance between Sydney, New South Wales and Perth, Western Australia in 20 days. That was the precursor of the WSC.
After the 4th event, he sold the rights to the state of South Australia and leadership of the event was assumed by Chris Selwood.
The event was held every three years until 1999 when it was switched to every two years.
1987
The first edition of the World Solar Challenge was run in 1987, when the winning entry, GM's Sunraycer, won with an average speed of . Ford Australia's "Sunchaser" came in second. The "Solar Resource", which came in 7th overall, was first in the Private Entry category.
1990
The 1990 WSC was won by the "Spirit of Biel", built by the Biel School of Engineering and Architecture in Switzerland, followed by Honda in second place.
1993
The 1993 WSC was won by the Honda Dream, and the Biel School of Engineering and Architecture took second.
1996
In the 1996 WSC, the Honda Dream and Biel School of Engineering and Architecture once again placed first and second overall, respectively.
1999
The 1999 WSC was finally won by a "home" team: the Australian Aurora team's Aurora 101 took the prize, while Queen's University was the runner-up in the most closely contested WSC so far. The SunRayce class of American teams was won by the Massachusetts Institute of Technology.
2001
The 2001 WSC was won by Nuna of the Delft University of Technology from the Netherlands, participating for the first time. Aurora took second place.
2003
In the 2003 WSC, Nuna 2, the successor to the 2001 winner, won again with an average speed of , while Aurora took second place again.
2005
In the 2005 WSC the top finishers were the same for the third consecutive event as Nuon's Nuna 3 won with a record average speed of , and Aurora was the runner-up.
2007
The 2007 WSC saw the Dutch Nuon Solar team score their fourth successive victory with Nuna 4 in the Challenge Class, averaging under the new, more restrictive rules, while the Belgian Punch Powertrain Solar Team's Umicar Infinity placed second.
The Adventure Class was added this year, run under the old rules, and was won by the Japanese Ashiya team's Tiga.
2009
The 2009 WSC was won by the "Tokai Challenger", built by the Tokai University Solar Car Team in Japan with an average speed of . The longtime reigning champion Nuon Solar Team's Nuna 5 finished in second place.
The Sunswift IV built by students at the University of New South Wales, Australia was the winner of the Silicon-based Solar Cell Class, while Japan's Osaka Sangyo University's OSU Model S won the Adventure class.
2011
In the 2011 WSC Tokai University took their second title with an updated "Tokai Challenger" averaging , and finishing just an hour before Nuna 6 of the Delft University of Technology. The challenge was marred by delays caused by wildfires.
2013
The 2013 WSC featured the introduction of the Cruiser Class, which comprised more 'practical' solar cars with 2–4 occupants. The inaugural winner was Solar Team Eindhoven's Stella from Eindhoven University of Technology in the Netherlands with an average speed of , while second place was taken by the PowerCore SunCruiser vehicle from team Hochschule Bochum in Germany, who inspired the creation of the Cruiser Class by racing more practical solar cars in previous WSC events. The Australian team, the University of New South Wales solar racing team Sunswift, was the fastest competitor to complete the route, but was awarded third place overall after points were awarded for 'practicality' and for carrying passengers.
In the Challenger Class, the Dutch team from Delft University of Technology took back the title with Nuna 7 and an average speed of , while defending champions Tokai University finished second after an exciting, close competition in which the gap stayed within 10–30 minutes, until Tokai drained their battery in the final stint due to bad weather and finished some 3 hours behind; the reverse of the situation in the previous challenge in 2011.
The Adventure Class was won by Aurora's Aurora Evolution.
2015
The 2015 WSC was held on 15–25 October with the same classes as the 2013 challenge.
In the Cruiser Class, the winner was once again Solar Team Eindhoven's Stella Lux from Eindhoven University of Technology in the Netherlands with an average speed of , while the second place team was Kogakuin University from Japan who was the first to cross the finish line, but did not receive as many points for passenger-kilometers and practicality. Bochum took 3rd place this year with the latest in their series of cruiser cars.
In the Challenger Class, the team from Delft University of Technology retained the title with Nuna 8 and an average speed of , while their Dutch counterparts, the University of Twente, who led most of the challenge, finished just 8 minutes behind them in second place, making 2015 the closest finish in WSC history. Tokai University passed the University of Michigan on the last day of the event to take home the bronze.
The Adventure Class was won by the Houston High School solar car team from Houston, Mississippi, United States.
2017
The 2017 WSC was held on 8–15 October, featuring the same classes as 2015. The Dutch NUON team won again in the Challenger class, which concluded on 12 October 2017, and in the Cruiser Class, the winner was once again Solar Team Eindhoven, from the Netherlands as well.
2019
The 2019 WSC was held from 13 to 20 October. 53 teams from 24 countries entered the competition, featuring the same three classes, Challenger (30 teams), Cruiser (23 teams) and Adventure. In the Challenger class, Agoria Solar Team (formerly Punch Powertrain) won for the first time. Tokai University Solar Car Team finished in second place.
In the Cruiser class, Solar Team Eindhoven won their fourth consecutive title. Despite multiple incidents on the road, Team Sonnenwagen Aachen managed to beat other teams and finished in 6th position.
Several teams had mishaps. Vattenfall was leading when their car Nuna X caught fire. The driver was uninjured, but the vehicle was destroyed. It was the first no-finish for that team in 20 years. Others were badly affected by strong winds.
Dutch team Twente was leading the journey at , when their car was forced off the road by winds and rolled over. The driver was taken to hospital. Within 30 minutes team Sonnenwagen Aachen was also blown off the road north of Coober Pedy; the driver was not hurt. A speed limit was then imposed by event officials and lifted when conditions improved. The day before, wind damage to solar panels had put the team from Western Sydney University out of the challenge. The driver of Agoria from Belgium escaped injury when their vehicle was "uprooted" at 100 km/h (62 mph) by severe winds, but the team still went on to win the Challenger class.
2021
In response to the COVID-19 pandemic in Australia the WSC closed entries three months earlier than normal, on 18 December 2020. They were then to "… review all current government measures relating to social distancing, density and contact tracing, international travel restrictions and isolation requirements." On 12 February 2021, the South Australian Government confirmed the cancellation of the 2021 staging of the event. While the COVID-19 pandemic was not explicitly cited as the reason, the "complexities of international border closures" affecting Australia at the time appear to be the primary reason for the event's cancellation. The same statement also noted that the next event would take place in October 2023, at least 962 days from the date of the announcement and resulting in a four-year gap between events. Registered teams were to receive a full refund of all fees.
2023
The 2023 World Solar Challenge was held from 22 to 29 October. At the beginning of the race, 31 teams were participating, with 23 in the Challenger division and 8 in the Cruiser division. The Challenger division was won by defending champions Innoptus (formerly Agoria) with an average speed of 88.2 km/h, and the Cruiser division was won by UNSW Sunswift with a score of 91.1. Uniquely, no Cruisers were able to finish the race this year.
Many of the leading teams faced trouble during the competition. Dutch team Top Dutch raced on a perovskite-tandem solar array damaged during testing in the month leading up to the race. Michigan experienced electrical issues during qualifying and had to start last. German team Sonnenwagen was blown off the road just outside Port Augusta and had to withdraw under the new regulations. Tokai had to stop for several hours on Day 4 to repair their car after sustaining damage while crossing a cattle grid. Kogakuin had persistent problems with their MPPT charge controller, and reported in an Instagram post that their panels were generating less than half the power they should have been. On the fifth day of the competition, only 4 teams (Innoptus, Twente, Brunel, and Michigan) had finished the course, and by the official end of timing, only 12 teams made it to the finish line successfully.
See also
Solar car racing
List of prototype solar-powered cars
List of solar car teams
Shell Eco-marathon
The Quiet Achiever, the world's first solar-powered racecar
Other solar vehicle challenges
American Solar Challenge, a biennial United States event held since 1990 that has previously included Canada
Formula Sun Grand Prix, an annual U.S. event held on race tracks.
The Solar Car Challenge, an annual event for High School students from the U.S. and (to a lesser extent) other parts of the world, first held in 1995
South African Solar Challenge, a biennial South African event that was first held in 2008
Victorian Model Solar Vehicle Challenge, an annual event in Australia for schoolchildren
European Solar Challenge, a biennial 24-hour race in Belgium
Atacama Solar Race, a biennial event held in Chile
Movie
Race the Sun, a movie loosely based on a participating team
References
External links
Images from Alice Springs, Australia – 2007
An overview of all the competing teams in the 2013 WSC.
Solar car races
Engineering competitions
Auto races in Australia
Scientific organisations based in Australia
Science competitions
Photovoltaics
Recurring sporting events established in 1987
Motorsport in the Northern Territory
Motorsport in South Australia
Australian outback | World Solar Challenge | [
"Technology"
] | 3,826 | [
"Science and technology awards",
"Engineering competitions",
"Science competitions"
] |
11,946,749 | https://en.wikipedia.org/wiki/Respirocyte | Respirocytes are hypothetical, microscopic, artificial red blood cells that are intended to emulate the function of their organic counterparts, so as to supplement or replace the function of much of the human body's normal respiratory system. Respirocytes were proposed by Robert A. Freitas Jr in his 1998 paper "A Mechanical Artificial Red Blood Cell: Exploratory Design in Medical Nanotechnology".
Respirocytes are an example of molecular nanotechnology, a field of technology still in the very earliest, purely hypothetical phase of development. Current technology is not sufficient to build a respirocyte due to considerations of power, atomic-scale manipulation, immune reaction or toxicity, computation and communication.
Structure of a respirocyte
Freitas proposed a spherical robot made up of 18 billion atoms arranged as a tiny pressure tank, which would be filled up with oxygen and carbon dioxide.
Uses
In Freitas' proposal, each respirocyte could store and transport 236 times more oxygen than a natural red blood cell, and could release it in a more controlled manner.
Freitas has also proposed "microbivore" robots that would attack pathogens in the manner of white blood cells.
See also
Artificial cell
Biotechnology
Blood substitute
Oxycyte
Synthetic biology
References
External links
Respirocytes at foresight.org
Synthetic biology
Blood cells
Hypothetical technology
Blood substitutes | Respirocyte | [
"Engineering",
"Biology"
] | 285 | [
"Synthetic biology",
"Molecular genetics",
"Biological engineering",
"Bioinformatics"
] |
13,493,012 | https://en.wikipedia.org/wiki/Relative%20volatility | Relative volatility is a measure comparing the vapor pressures of the components in a liquid mixture of chemicals. This quantity is widely used in designing large industrial distillation processes. In effect, it indicates the ease or difficulty of using distillation to separate the more volatile components from the less volatile components in a mixture. By convention, relative volatility is usually denoted as .
Relative volatilities are used in the design of all types of distillation processes as well as other separation or absorption processes that involve the contacting of vapor and liquid phases in a series of equilibrium stages.
Relative volatilities are not used in separation or absorption processes that involve components reacting with each other (for example, the absorption of gaseous carbon dioxide in aqueous solutions of sodium hydroxide).
Definition
For a liquid mixture of two components (called a binary mixture) at a given temperature and pressure, the relative volatility is defined as
α = (y_i / x_i) / (y_j / x_j) = K_i / K_j,
where y is the mole fraction of a component in the vapor phase, x is the mole fraction of the same component in the liquid phase, K = y/x is the vapor–liquid equilibrium ratio of a component, i denotes the more volatile component, and j denotes the less volatile component.
When their liquid concentrations are equal, more volatile components have higher vapor pressures than less volatile components. Thus, a K value (= y/x) for a more volatile component is larger than a K value for a less volatile component. That means that α ≥ 1, since the larger K value of the more volatile component is in the numerator and the smaller K value of the less volatile component is in the denominator.
α is a unitless quantity. When the volatilities of both key components are equal, α = 1 and separation of the two by distillation would be impossible under the given conditions because the compositions of the liquid and the vapor phase are the same (azeotrope). As the value of α increases above 1, separation by distillation becomes progressively easier.
A liquid mixture containing two components is called a binary mixture. When a binary mixture is distilled, complete separation of the two components is rarely achieved. Typically, the overhead fraction from the distillation column consists predominantly of the more volatile component and some small amount of the less volatile component and the bottoms fraction consists predominantly of the less volatile component and some small amount of the more volatile component.
A liquid mixture containing many components is called a multi-component mixture. When a multi-component mixture is distilled, the overhead fraction and the bottoms fraction typically contain much more than one or two components. For example, some intermediate products in an oil refinery are multi-component liquid mixtures that may contain alkane, alkene and alkyne hydrocarbons—ranging from methane, having one carbon atom, to decanes having ten carbon atoms. For distilling such a mixture, the distillation column may be designed (for example) to produce:
An overhead fraction containing predominantly the more volatile components ranging from methane (having one carbon atom) to propane (having three carbon atoms)
A bottoms fraction containing predominantly the less volatile components ranging from isobutane (having four carbon atoms) to decanes (ten carbon atoms).
Such a distillation column is typically called a depropanizer.
The designer would designate the key components governing the separation design to be propane as the so-called light key (LK) and isobutane as the so-called heavy key (HK). In that context, a lighter component means a component with a lower boiling point (or a higher vapor pressure) and a heavier component means a component with a higher boiling point (or a lower vapor pressure).
Thus, for the distillation of any multi-component mixture, the relative volatility is often defined as
α = K_LK / K_HK, the ratio of the K value of the light key to that of the heavy key.
Large-scale industrial distillation is rarely undertaken if the relative volatility is less than 1.05.
The values of α have been correlated empirically or theoretically in terms of temperature, pressure and phase compositions in the form of equations, tables or graphs such as the well-known DePriester charts.
α values are widely used in the design of large-scale distillation columns for distilling multi-component mixtures in oil refineries, petrochemical and chemical plants, natural gas processing plants and other industries.
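As a minimal numerical illustration of the definitions above, the sketch below assumes an ideal binary mixture obeying Raoult's law, so that each K value reduces to the ratio of the pure-component vapor pressure to the column pressure; the vapor pressures used are placeholders, not data for any particular chemical system.

```python
# Relative volatility from K values, assuming an ideal mixture (Raoult's law),
# so that K = Psat / P for each component.  All numbers are illustrative.

P_TOTAL_KPA = 101.325      # column pressure
PSAT_LIGHT_KPA = 180.0     # pure-component vapor pressure of the more volatile key
PSAT_HEAVY_KPA = 74.0      # pure-component vapor pressure of the less volatile key

k_light = PSAT_LIGHT_KPA / P_TOTAL_KPA
k_heavy = PSAT_HEAVY_KPA / P_TOTAL_KPA
alpha = k_light / k_heavy  # relative volatility, dimensionless and >= 1 by convention

print(f"K_light = {k_light:.3f}, K_heavy = {k_heavy:.3f}, alpha = {alpha:.2f}")

# The article notes that large-scale distillation is rarely attempted below alpha ~ 1.05.
if alpha > 1.05:
    print("ordinary distillation looks practical for this pair")
else:
    print("alpha too close to 1: consider an alternative separation process")
```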
See also
References
External links
Distillation Theory by Ivar J. Halvorsen and Sigurd Skogestad, Norwegian University of Science and Technology (scroll down to: 2.2.3 K-values and Relative Volatility)
Distillation Principals by Ming T. Tham, University of Newcastle upon Tyne (scroll down to Relative Volatility)
Engineering thermodynamics
Distillation
Chemical engineering
Petroleum engineering | Relative volatility | [
"Physics",
"Chemistry",
"Engineering"
] | 905 | [
"Separation processes",
"Chemical engineering",
"Engineering thermodynamics",
"Petroleum engineering",
"Energy engineering",
"Distillation",
"Thermodynamics",
"nan",
"Mechanical engineering"
] |
13,495,046 | https://en.wikipedia.org/wiki/Quantum%20spin%20Hall%20effect | The quantum spin Hall state is a state of matter proposed to exist in special, two-dimensional semiconductors that have a quantized spin-Hall conductance and a vanishing charge-Hall conductance. The quantum spin Hall state of matter is the cousin of the integer quantum Hall state, and that does not require the application of a large magnetic field. The quantum spin Hall state does not break charge conservation symmetry and spin- conservation symmetry (in order to have well defined Hall conductances).
Description
The first proposal for the existence of a quantum spin Hall state was developed by Charles Kane and Gene Mele who adapted an earlier model for graphene by F. Duncan M. Haldane which exhibits an integer quantum Hall effect. The Kane and Mele model is two copies of the Haldane model such that the spin up electron exhibits a chiral integer quantum Hall Effect while the spin down electron exhibits an anti-chiral integer quantum Hall effect. A relativistic version of the quantum spin Hall effect was introduced in the 1990s for the numerical simulation of chiral gauge theories; the simplest example consisting of a parity and time reversal symmetric U(1) gauge theory with bulk fermions of opposite sign mass, a massless Dirac surface mode, and bulk currents that carry chirality but not charge (the spin Hall current analogue). Overall the Kane-Mele model has a charge-Hall conductance of exactly zero but a spin-Hall conductance of exactly (in units of ). Independently, a quantum spin Hall model was proposed by Andrei Bernevig and Shoucheng Zhang in an intricate strain architecture which engineers, due to spin-orbit coupling, a magnetic field pointing upwards for spin-up electrons and a magnetic field pointing downwards for spin-down electrons. The main ingredient is the existence of spin–orbit coupling, which can be understood as a momentum-dependent magnetic field coupling to the spin of the electron.
Real experimental systems, however, are far from the idealized picture presented above in which spin-up and spin-down electrons are not coupled. A very important achievement was the realization that the quantum spin Hall state remains non-trivial even after the introduction of spin-up spin-down scattering, which destroys the quantum spin Hall effect. In a separate paper, Kane and Mele introduced a topological invariant which characterizes a state as a trivial or non-trivial band insulator (regardless of whether the state exhibits a quantum spin Hall effect). Further stability studies of the edge liquid through which conduction takes place in the quantum spin Hall state proved, both analytically and numerically, that the non-trivial state is robust to both interactions and extra spin-orbit coupling terms that mix spin-up and spin-down electrons. Such a non-trivial state (exhibiting or not exhibiting a quantum spin Hall effect) is called a topological insulator, which is an example of symmetry-protected topological order protected by charge conservation symmetry and time reversal symmetry. (Note that the quantum spin Hall state is also a symmetry-protected topological state protected by charge conservation symmetry and spin- conservation symmetry. We do not need time reversal symmetry to protect the quantum spin Hall state. The topological insulator and the quantum spin Hall state are different symmetry-protected topological states. So the topological insulator and the quantum spin Hall state are different states of matter.)
In HgTe quantum wells
Since graphene has extremely weak spin-orbit coupling, it is very unlikely to support a quantum spin Hall state at temperatures achievable with today's technologies. Two-dimensional topological insulators (also known as the quantum spin Hall insulators) with one-dimensional helical edge states were predicted in 2006 by Bernevig, Hughes and Zhang to occur in quantum wells (very thin layers) of mercury telluride sandwiched between cadmium telluride, and were observed in 2007.
Different quantum wells of varying HgTe thickness can be built. When the sheet of HgTe in between the CdTe is thin, the system behaves like an ordinary insulator and does not conduct when the Fermi level resides in the band-gap. When the sheet of HgTe is varied and made thicker (this requires the fabrication of separate quantum wells), an interesting phenomenon happens. Due to the inverted band structure of HgTe, at some critical HgTe thickness, a Lifshitz transition occurs in which the system closes the bulk band gap to become a semi-metal, and then re-opens it to become a quantum spin Hall insulator.
In the gap closing and re-opening process, two edge states are brought out from the bulk and cross the bulk-gap. As such, when the Fermi level resides in the bulk gap, the conduction is dominated by the edge channels that cross the gap. The two-terminal conductance is in the quantum spin Hall state and zero in the normal insulating state. As the conduction is dominated by the edge channels, the value of the conductance should be insensitive to how wide the sample is. A magnetic field should destroy the quantum spin Hall state by breaking time-reversal invariance and allowing spin-up spin-down electron scattering processes at the edge. All these predictions have been experimentally verified in an experiment performed in the Molenkamp labs at Universität Würzburg in Germany.
See also
Spin Hall effect
Quantum Hall effect
References
Further reading
Hall effect
Condensed matter physics
Quantum electronics
Spintronics | Quantum spin Hall effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,114 | [
"Physical phenomena",
"Quantum electronics",
"Hall effect",
"Spintronics",
"Phases of matter",
"Quantum mechanics",
"Electric and magnetic fields in matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering",
"Matter"
] |
13,496,530 | https://en.wikipedia.org/wiki/Category%20algebra | In category theory, a field of mathematics, a category algebra is an associative algebra, defined for any locally finite category and commutative ring with unity. Category algebras generalize the notions of group algebras and incidence algebras, just as categories generalize the notions of groups and partially ordered sets.
Definition
If the given category is finite (has finitely many objects and morphisms), then the following two definitions of the category algebra agree.
Group algebra-style definition
Given a group G and a commutative ring R, one can construct RG, known as the group algebra; it is an R-module equipped with a multiplication. A group is the same as a category with a single object in which all morphisms are isomorphisms (where the elements of the group correspond to the morphisms of the category), so the following construction generalizes the definition of the group algebra from groups to arbitrary categories.
Let C be a category and R be a commutative ring with unity. Define RC (or R[C]) to be the free R-module with the set of morphisms of C as its basis. In other words, RC consists of formal linear combinations (which are finite sums) of the form a_1 f_1 + a_2 f_2 + ... + a_n f_n, where the f_i are morphisms of C, and the a_i are elements of the ring R. Define a multiplication operation on RC as follows, using the composition operation in the category:
f · g = f ∘ g if the composition is defined, and f · g = 0 if their composition is not defined, extended R-bilinearly to all of RC. This defines a binary operation on RC, and moreover makes RC into an associative algebra over the ring R. This algebra is called the category algebra of C.
From a different perspective, elements of the free module RC could also be considered as functions from the morphisms of C to R which are finitely supported. Then the multiplication is described by a convolution: if f and g are two such functions (thought of as functionals on the morphisms of C), then their product is defined as:
(f ∗ g)(h) = Σ f(h_1) g(h_2), where the sum runs over all factorizations h = h_1 ∘ h_2 of a morphism h.
The latter sum is finite because the functions are finitely supported, and therefore f ∗ g again lies in RC.
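The convolution product above can be illustrated with a small computation. The sketch below takes C to be the preorder on n points in which every point is related to every other, whose category algebra is identified with the ring of n × n matrices in the Examples section below; the encoding of morphisms as pairs and the helper names are ad hoc choices for this illustration, not standard notation.

```python
# Sketch of the category algebra R[C] for a finite category C, with elements
# stored as {morphism: coefficient} dictionaries and multiplication given by
# the convolution above.  C is taken to be the preorder on n points in which
# every point is related to every other; the Examples section below identifies
# its category algebra with the ring of n x n matrices.

from itertools import product

n = 3
objects = range(n)
morphisms = [(i, j) for i in objects for j in objects]   # (i, j) encodes i -> j

def compose(u, v):
    """Return u o v (v applied first) when defined, else None."""
    (j1, k), (i, j2) = u, v
    return (i, k) if j1 == j2 else None

def convolve(F, G):
    """(F * G)(h) = sum of F(u) * G(v) over all factorisations h = u o v."""
    H = {m: 0 for m in morphisms}
    for u, v in product(morphisms, repeat=2):
        h = compose(u, v)
        if h is not None:
            H[h] += F.get(u, 0) * G.get(v, 0)
    return H

def to_matrix(F):
    """Send the morphism i -> j to the elementary matrix E_{ji} (row = codomain)."""
    return [[F.get((col, row), 0) for col in objects] for row in objects]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in objects) for c in objects] for r in objects]

# Two arbitrary elements of the category algebra (integer coefficients).
F = {(i, j): i + 2 * j + 1 for (i, j) in morphisms}
G = {(i, j): (i - j) ** 2 + 1 for (i, j) in morphisms}

assert to_matrix(convolve(F, G)) == matmul(to_matrix(F), to_matrix(G))
print("convolution product agrees with 3 x 3 matrix multiplication")
```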
Incidence algebra-style definition
The definition used for incidence algebras assumes that the category C is locally finite (see below), is dual to the above definition, and defines a different object. This isn't a useful assumption for groups, as a group that is locally finite as a category is finite.
A locally finite category is one where every morphism can be written in only finitely many ways as the composition of two non-identity morphisms (not to be confused with the "has finite Hom-sets" meaning). The category algebra (in this sense) is defined as above, but allowing all coefficients to be non-zero.
In terms of formal sums, the elements are all formal sums Σ_f a_f f over the morphisms f of C,
where there are no restrictions on the coefficients a_f (they can all be non-zero).
In terms of functions, the elements are any functions from the morphisms of C to R, and multiplication is defined as convolution. The sum in the convolution is always finite because of the local finiteness assumption.
Dual
The module dual of the category algebra (in the group algebra sense of the definition) is the space of all maps from the morphisms of C to R, denoted F(C), and has a natural coalgebra structure. Thus for a locally finite category, the dual of a category algebra (in the group algebra sense) is the category algebra (in the incidence algebra sense), and has both an algebra and coalgebra structure.
Examples
If C is a group (thought of as a groupoid with a single object), then RC is the group algebra.
If C is a monoid (thought of as a category with a single object), then RC is the monoid ring.
If C is a partially ordered set, then (using the appropriate definition), RC is the incidence algebra.
While partial orders only allow for viewing upper or lower triangular matrices as incidence algebras, the concept of category algebras also encompasses full matrix rings over R. Indeed, if C is the preorder on n points where every point has a relation to every other (a complete graph), then RC is the matrix ring M_n(R).
If C is a discrete category, then RC may be seen as the ring of functions with pointwise addition and multiplication, or equivalently the direct product of copies of R indexed over C. In the case of infinite C, one needs to distinguish the "group algebra-style" and the "incidence algebra-style", because in the former, one only allows for finitely many terms in the formal linear combination, resulting in RC being instead the direct sum of copies of R.
The path algebra of a quiver Q is the category algebra of the free category on Q.
References
Haigh, John. On the Möbius Algebra and the Grothendieck Ring of a Finite Category J. London Math. Soc (2), 21 (1980) 81–92.
Further reading
http://www.math.umn.edu/~webb/Publications/CategoryAlgebras.pdf Standard text.
Category theory | Category algebra | [
"Mathematics"
] | 1,051 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
7,312,048 | https://en.wikipedia.org/wiki/Float-out | Float-out is the process in shipbuilding that follows the keel laying and precedes the fitting-out process. It is analogous to launching a ship, a specific process that has largely been discontinued in modern shipbuilding. Both floating-out and launching are the times when the ship leaves dry land and becomes waterborne for the first time, and often take place during ceremonies celebrating and commemorating that event.
Launching
Prior to the large-scale use of drydocks (building or graving docks) for constructing ships, most vessels were constructed on a slipway, i.e. an inclined building platform sloping toward a body of water into which the ship would be launched.
Contemporary shipbuilding
The launching of ships has been largely replaced by the "floating" process. After a ship is ordered for construction, its keel is laid in a drydock. Construction of the ship continues in the dock, usually in the form of prefabricated units that are assembled.
After the empty hull has been substantially completed, sluice gates are opened and the drydock fills with water. The dock gates are then opened and the ship is pulled out by tugboat to a berth, where the remaining construction, namely fitting out, continues. This usually includes further construction of the superstructure, attaching of masts and funnels, and the installation of equipment and furnishings.
The completed ship will usually return to drydock for installation of other equipment, propulsion parts, and the painting of its hull.
The first superliner to be constructed in this manner was , but the history of "floating" ships rather than "launching" goes back more than one hundred years before that vessel's construction. designed by Isambard Kingdom Brunel was constructed in drydock and floated on 19 July 1843. She is currently in Bristol, England, United Kingdom.
Naming ceremony
Ships which are launched typically are christened and formally named at their launching ceremonies, even though they are not completed until later. Some recent passenger vessels which were constructed in drydocks were not formally christened when floated out. The naming ceremonies of and took place after completion and delivery to their owners, in the case of Freedom of the Seas after her first transatlantic crossing.
External links
‘’Birth of a Ship’’ (the construction process of container ship MV Maunawili)
Shipbuilding
Naval architecture | Float-out | [
"Engineering"
] | 467 | [
"Naval architecture",
"Shipbuilding",
"Marine engineering"
] |
7,315,935 | https://en.wikipedia.org/wiki/Unit%20Operations%20of%20Chemical%20Engineering | Unit Operations of Chemical Engineering, first published in 1956, is one of the oldest chemical engineering textbooks still in widespread use. The current Seventh Edition, published in 2004, continues its successful tradition of being used as a textbook in university undergraduate chemical engineering courses. It is widely used in colleges and universities throughout the world, and often referred just "McCabe-Smith-Harriott" or "MSH".
Subjects covered in the book
The book starts with an introductory chapter devoted to definitions and principles. It then follows with 28 additional chapters, each covering a principal chemical engineering unit operation. The 28 chapters are grouped into four major sections:
Fluid mechanics
Heat transfer
Mass transfer and equilibrium stages
Operations involving particulate solids.
A more detailed table of contents is available on the Internet.
See also
Chemical engineer
:Category:Unit operations
Distillation Design
Perry's Chemical Engineers' Handbook
Process design
Transport Phenomena
Unit operations
References
Chemical engineering books
Engineering textbooks
Unit operations
1956 books | Unit Operations of Chemical Engineering | [
"Chemistry",
"Engineering"
] | 192 | [
"Chemical process engineering",
"Chemical engineering books",
"Chemical engineering",
"Unit operations"
] |
9,487,577 | https://en.wikipedia.org/wiki/Surgical%20staple | Surgical staples are specialized staples used in surgery in place of sutures to close skin wounds or to resect and/or connect parts of an organ (e.g. bowels, stomach or lungs). The use of staples over sutures reduces the local inflammatory response, width of the wound, and time it takes to close a defect.
A more recent development, from the 1990s, uses clips instead of staples for some applications; this does not require the staple to penetrate.
History
The technique was pioneered by "father of surgical stapling", Hungarian surgeon Hümér Hültl. Hultl's prototype stapler of 1908 weighed , and required two hours to assemble and load.
The technology was refined in the 1950s in the Soviet Union, allowing for the first commercially produced re-usable stapling devices for creation of bowel and anastomoses. Mark M. Ravitch brought back a sample stapling device after attending a surgical conference in the USSR, and introduced it to entrepreneur Leon C. Hirsch, who founded the United States Surgical Corporation in 1964 to manufacture surgical staplers under its Auto Suture brand. Until the late 1970s USSC had the market essentially to itself, but in 1977 Johnson & Johnson's Ethicon brand entered the market and today both are widely used, along with competitors from the Far East. USSC was bought by Tyco Healthcare in 1998, which became Covidien on June 29, 2007.
Safety and patency of mechanical (stapled) bowel anastomoses has been widely studied. It is generally the case in such studies that sutured anastomoses are either comparable or less prone to leakage. It is possible that this is the result of recent advances in suture technology, along with increasingly risk-conscious surgical practice. Certainly modern synthetic sutures are more predictable and less prone to infection than catgut, silk and linen, which were the main suture materials used up to the 1990s.
One key feature of intestinal staplers is that the edges of the stapler act as a haemostat, compressing the edges of the wound and closing blood vessels during the stapling process. Recent studies have shown that with current suturing techniques there is no significant difference in outcome between hand sutured and mechanical anastomoses (including clips), but mechanical anastomoses are significantly quicker to perform.
In patients that are subjected to pulmonary resections where lung tissue is sealed with staplers, there is often postoperative air leakage. Alternative techniques to seal lung tissue are currently investigated.
Types and applications
The first commercial staplers were made of stainless steel with titanium staples loaded into reloadable staple cartridges.
Modern surgical staplers are either disposable and made of plastic, or reusable and made of stainless steel. Both types are generally loaded using disposable cartridges.
The staple line may be straight, curved or circular. Circular staplers are used for end-to-end anastomosis after bowel resection or, somewhat more controversially, in esophagogastric surgery. The instruments may be used in either open or laparoscopic surgery, different instruments are used for each application. Laparoscopic staplers are longer, thinner, and may be articulated to allow for access from a restricted number of trocar ports.
Some staplers incorporate a knife, to complete excision and anastomosis in a single operation.
Staplers are used to close both internal and skin wounds. Skin staples are usually applied using a disposable stapler, and removed with a specialized staple remover. Staplers are also used in vertical banded gastroplasty surgery (popularly known as "stomach stapling").
While devices for circular end-to-end anastomosis of the digestive tract are widely used, circular staplers for vascular anastomosis have, despite intensive research, never had a significant impact on the standard hand-sewn (Carrel) suture technique. Apart from the different way the stumps are coupled (everted for vessels, inverted for the digestive tract), the main reason may be that, particularly for small vessels, the dexterity and precision required just to position and actuate any device on the vascular stumps is not significantly less than that required to carry out the standard hand suture, which makes such devices of little practical benefit. An exception could be organ transplantation, where these two phases, positioning the device on the vascular stumps and actuating it, can be carried out at different times by different surgical teams in safe conditions: at the back table under cold ischemia for the donor organ, and in the recipient after removal of the native organ, when the time required does not influence preservation of the donor organ. This is aimed at making the dangerous warm ischemia phase of the donor organ as brief as possible, contained in the couple of minutes or less needed just to connect the device's ends and fire the stapler.
Although most surgical staples are made of titanium, stainless steel is more often used in some skin staples and clips. Titanium produces less reaction with the immune system and, being non-ferrous, does not interfere significantly with MRI scanners, although some imaging artifacts may result. Synthetic absorbable (bioabsorbable) staples are also now becoming available, based on polyglycolic acid, as with many synthetic absorbable sutures.
Removal of skin staples
Where skin staples are used to seal a skin wound it will be necessary to remove the staples after an appropriate healing period, usually between 5 and 10 days, depending on the location of the wound and other factors. The skin staple remover is a small manual device which consists of a shoe or plate that is sufficiently narrow and thin to insert under the skin staple. The active part is a small vertical blade that, when hand-pressure is exerted, pushes the staple down through a slot in the shoe, deforming the staple open into an 'M' shape to facilitate its removal. In an emergency, it is also possible to remove staples with a pair of artery forceps.
Skin staple removers are manufactured in many shapes and forms, some disposable and some reusable.
See also
Instruments used in general surgery
References
Surgical instruments
Fasteners
Hungarian inventions
1908 establishments in Hungary
1908 in science
1900s in medicine | Surgical staple | [
"Engineering"
] | 1,309 | [
"Construction",
"Fasteners"
] |
9,487,795 | https://en.wikipedia.org/wiki/Relativistic%20electron%20beam | Relativistic electron beams are streams of electrons moving at relativistic speeds. They are the lasing medium in free electron lasers to be used in atmospheric research conducted at entities such as the Pan-oceanic Environmental and Atmospheric Research Laboratory (PEARL) at the University of Hawaii and NASA. It has been suggested that relativistic electron beams could be used to heat and accelerate the reaction mass in electrical rocket engines that Dr. Robert W. Bussard called quiet electric-discharge engines (QEDs).
References
External links
PEARL Lab @ UHawaii
Applying REBs for the development of high-powered microwaves (HPM)
Electron beam
Quantum mechanics
"Physics",
"Chemistry"
] | 138 | [
"Electron",
"Electron beam",
"Theoretical physics",
"Quantum mechanics",
"Special relativity",
"Relativity stubs",
"Theory of relativity",
"Quantum physics stubs"
] |
9,492,439 | https://en.wikipedia.org/wiki/Differential%20graded%20category | In mathematics, especially homological algebra, a differential graded category, often shortened to dg-category or DG category, is a category whose morphism sets are endowed with the additional structure of a differential graded -module.
In detail, this means that Hom(A, B), the morphisms from any object A to another object B of the category, is a direct sum
Hom(A, B) = ⊕_n Hom_n(A, B),
and there is a differential d on this graded group, i.e., for each n there is a linear map
d : Hom_n(A, B) → Hom_{n+1}(A, B),
which has to satisfy d ∘ d = 0. This is equivalent to saying that (Hom(A, B), d) is a cochain complex. Furthermore, the composition of morphisms
Hom(B, C) ⊗ Hom(A, B) → Hom(A, C)
is required to be a map of complexes, and for all objects A of the category, one requires d(id_A) = 0.
Examples
Any additive category may be considered to be a DG-category by imposing the trivial grading (i.e. all Hom_n(A, B) vanish for n ≠ 0) and the trivial differential (d = 0).
A little bit more sophisticated is the category of complexes over an additive category. By definition, Hom_n(A, B) is the group of maps of degree n which do not need to respect the differentials of the complexes A and B, i.e.,
Hom_n(A, B) = ∏_i Hom(A_i, B_{i+n}).
The differential of such a morphism f of degree n is defined to be
d(f) = d_B ∘ f − (−1)^n f ∘ d_A,
where d_A and d_B are the differentials of A and B, respectively; a short check that this formula squares to zero is given after the examples below. This applies to the category of complexes of quasi-coherent sheaves on a scheme over a ring.
A DG-category with one object is the same as a DG-ring. A DG-ring over a field is called DG-algebra, or differential graded algebra.
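As a short check (standard, and written here in the sign conventions used in the complexes example above), the formula d(f) = d_B ∘ f − (−1)^n f ∘ d_A really does square to zero:

```latex
% For f of degree n, so that d(f) has degree n+1:
\begin{aligned}
d(d(f)) &= d_B \circ d(f) - (-1)^{n+1}\, d(f) \circ d_A \\
        &= d_B \circ \bigl(d_B \circ f - (-1)^n f \circ d_A\bigr)
           - (-1)^{n+1} \bigl(d_B \circ f - (-1)^n f \circ d_A\bigr) \circ d_A \\
        &= -(-1)^n\, d_B \circ f \circ d_A + (-1)^n\, d_B \circ f \circ d_A \;=\; 0,
\end{aligned}
```

using d_A ∘ d_A = 0 and d_B ∘ d_B = 0. The compatibility of d with composition is then the graded Leibniz rule, which is what it means for composition to be a map of complexes.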
Further properties
The category of small dg-categories can be endowed with a model category structure such that weak equivalences are those functors that induce an equivalence of derived categories.
Given a dg-category C over some ring R, there is a notion of smoothness and properness of C that reduces to the usual notions of smooth and proper morphisms in case C is the category of quasi-coherent sheaves on some scheme X over R.
Relation to triangulated categories
A DG category C is called pre-triangulated if it has a suspension functor and a class of distinguished triangles compatible with the suspension, such that its homotopy category Ho(C) is a triangulated category.
A triangulated category T is said to have a dg enhancement C if C is a pretriangulated dg category whose homotopy category is equivalent to T. Dg enhancements of an exact functor between triangulated categories are defined similarly. In general, there need not exist dg enhancements of triangulated categories or functors between them; for example, the stable homotopy category can be shown not to arise from a dg category in this way. However, various positive results do exist; for example, the derived category D(A) of a Grothendieck abelian category A admits a unique dg enhancement.
See also
Differential algebra
Graded (mathematics)
Graded category
Derivator
References
External links
dg-category in nLab
Homological algebra
Categories in category theory | Differential graded category | [
"Mathematics"
] | 619 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Categories in category theory",
"Homological algebra"
] |
9,493,560 | https://en.wikipedia.org/wiki/G%C3%B6mb%C3%B6c | A gömböc () is any member of a class of convex, three-dimensional and homogeneous bodies that are mono-monostatic, meaning that they have just one stable and one unstable point of equilibrium when resting on a flat surface. The existence of this class was conjectured by the Russian mathematician Vladimir Arnold in 1995 and proven in 2006 by the Hungarian scientists Gábor Domokos and Péter Várkonyi by constructing at first a mathematical example and subsequently a physical example.
The gömböc's shape helped to explain the body structure of some tortoises and their ability to return to an equilibrium position after being placed upside down. Copies of the first physically constructed example of a gömböc have been donated to institutions and museums, and the largest one was presented at the World Expo 2010 in Shanghai, China.
Name
If analyzed quantitatively in terms of flatness and thickness, the discovered mono-monostatic bodies are the most sphere-like, apart from the sphere itself. Because of this, they were given the name gömböc, a diminutive form of ("sphere" in Hungarian).
History
In geometry, a body with a single stable resting position is called monostatic, and the term mono-monostatic has been coined to describe a body which additionally has only one unstable point of balance (the previously known monostatic polyhedron does not qualify, as it has several unstable equilibria). A sphere weighted so that its center of mass is shifted from the geometrical center is mono-monostatic. However, it is inhomogeneous; its material density varies across its body. Another example of an inhomogeneous mono-monostatic body is the Comeback Kid, Weeble or roly-poly toy (see left figure). At equilibrium, the center of mass and the contact point are on the line perpendicular to the ground. When the toy is pushed, its center of mass rises and shifts away from that line. This produces a righting moment, which returns the toy to its equilibrium position.
The above examples of mono-monostatic objects are inhomogeneous. The question of whether it is possible to construct a three-dimensional body which is mono-monostatic but also homogeneous and convex was raised by Russian mathematician Vladimir Arnold in 1995. Being convex is essential as it is trivial to construct a mono-monostatic non-convex body: an example would be a ball with a cavity inside it. It was already well known, from a geometrical and topological generalization of the classical four-vertex theorem, that a plane curve has at least four extrema of curvature, specifically, at least two local maxima and at least two local minima, meaning that a (convex) mono-monostatic object does not exist in two dimensions. Whereas a common expectation was that a three-dimensional body should have at least four extrema, Arnold conjectured that this number could be smaller.
Mathematical solution
The problem was solved in 2006 by Gábor Domokos and Péter Várkonyi. Domokos met Arnold in 1995 at a major mathematics conference in Hamburg, where Arnold presented a plenary talk illustrating that most geometrical problems have four solutions or extremal points. In a personal discussion, however, Arnold questioned whether four is a requirement for mono-monostatic bodies and encouraged Domokos to seek examples with fewer equilibria.
The rigorous proof of the solution can be found in references of their work. The summary of the results is that the three-dimensional homogeneous convex (mono-monostatic) body, which has one stable and one unstable equilibrium point, does exist and is not unique. Their form is dissimilar to any typical representative of any other equilibrium geometrical class. They should have minimal "flatness" and, to avoid having two unstable equilibria, must also have minimal "thinness". They are the only non-degenerate objects having simultaneously minimal flatness and thinness. The shape of those bodies is susceptible to small variation, outside which it is no longer mono-monostatic. For example, the first solution of Domokos and Várkonyi closely resembled a sphere, with a shape deviation of only 10⁻⁵. It was dismissed as it was tough to test experimentally. The first physically produced example is less sensitive; yet it has a shape tolerance of 10⁻³, that is 0.1 mm for a 10 cm size.
Domokos developed a classification system for shapes based on their points of equilibrium by analyzing pebbles and noting their equilibrium points. In one experiment, Domokos and his wife tested 2000 pebbles collected on the beaches of the Greek island of Rhodes and found not a single mono-monostatic body among them, illustrating the difficulty of finding or constructing such a body.
A gömböc's unstable equilibrium position is obtained by rotating the figure 180° about a horizontal axis. Theoretically, it will rest there, but the smallest perturbation will bring it back to the stable point. All gömböcs have sphere-like properties. In particular, their flatness and thinness are minimal, and they are the only type of nondegenerate object with this property. Domokos and Várkonyi are interested in finding a polyhedral solution with a surface consisting of a minimal number of flat planes. There is a prize to anyone who finds the respective minimal numbers F, E, and V of faces, edges and vertices for such a polyhedron, which amounts to $10,000 divided by the number C = F + E + V − 2, which is called the mechanical complexity of mono-monostatic polyhedra. It has been proved that one can approximate a curvilinear mono-monostatic shape with a finite number of discrete surfaces; however, they estimate that it would take thousands of planes to achieve that. By offering this prize, they hope to stimulate finding a radically different solution from their own.
Relation to animals
The balancing properties of gömböcs are associated with the "righting response" — the ability to turn back when placed upside down — of shelled animals such as tortoises and beetles. These animals may become flipped over in a fight or predator attack, so the righting response is crucial for survival. To right themselves, relatively flat animals (such as beetles) heavily rely on momentum and thrust developed by moving their limbs and wings. However, the limbs of many dome-shaped tortoises are too short to be used for righting.
Domokos and Várkonyi spent a year measuring tortoises in the Budapest Zoo, Hungarian Museum of Natural History and various pet shops in Budapest, digitizing and analyzing their shells, and attempting to "explain" their body shapes and functions in terms of their geometry; the work was published by the biology journal Proceedings of the Royal Society. It was then immediately popularized in several science news reports, including the science journals Nature and Science. The reported model can be summarized as follows: flat shells in tortoises are advantageous for swimming and digging, but the sharp shell edges hinder rolling. Those tortoises usually have long legs and necks and actively use them to push against the ground to return to the normal position if placed upside down. On the contrary, "rounder" tortoises easily roll on their own; those have shorter limbs and use them little when recovering from lost balance (some limb movement would always be needed because of imperfect shell shape, ground conditions, etc.).
Art
On June 7, 2012, RocketJump released "Video Game High School (VGHS) - S1: Ep. 4" to YouTube. It features the Gömböc 7:40 into the video.
In the fall of 2020, the Korzo Theatre in The Hague and the Theatre Municipal in Biarritz presented the solo dance production "Gömböc" by French choreographer Antonin Comestaz.
A 2021 solo exhibition of conceptual artist Ryan Gander evolved around the theme of self-righting and featured seven large gömböc shapes gradually covered by black volcanic sand.
Media
For their discovery, Domokos and Várkonyi were decorated with the Knight's Cross of the Republic of Hungary. The New York Times Magazine selected the gömböc as one of the 70 most interesting ideas of the year 2007.
The Stamp News website shows Hungary's new stamps issued on 30 April 2010, illustrating a gömböc in different positions. The stamp booklets are arranged so that the gömböc appears to come to life when the booklet is flipped. The stamps were issued in association with the gömböc on display at the World Expo 2010 (1 May to 31 October). This was also covered by the Linn's Stamp News magazine.
See also
Flatness measures
Instability
Monostatic polytope
Self-righting watercraft
References
External links
Non-technical description of development, with short video
Expo 2010 presentation of a gömböc shape, with photos
2006 in science
2006 introductions
2006 in Hungary
Euclidean solid geometry
Science and technology in Hungary
Statics
Hungarian inventions
Volume | Gömböc | [
"Physics",
"Mathematics"
] | 1,855 | [
"Scalar physical quantities",
"Statics",
"Physical quantities",
"Euclidean solid geometry",
"Quantity",
"Classical mechanics",
"Size",
"Extensive quantities",
"Spacetime",
"Space",
"Volume",
"Wikipedia categories named after physical quantities"
] |
9,493,613 | https://en.wikipedia.org/wiki/Sieving%20coefficient | In mass transfer, the sieving coefficient is a measure of equilibration between the concentrations of two mass transfer streams. It is defined as the mean pre- and post-contact concentration of the mass receiving stream divided by the pre- and post-contact concentration of the mass donating stream.
S = Cr / Cd
where
S is the sieving coefficient
Cr is the mean concentration in the mass-receiving stream
Cd is the mean concentration in the mass-donating stream
A sieving coefficient of unity implies that the concentrations of the receiving and donating streams equilibrate, i.e. the out-flow concentrations (post-mass transfer) of the mass donating and receiving streams are equal to one another. Systems with sieving coefficients greater than one require an external energy source, as they would otherwise violate the laws of thermodynamics.
Sieving coefficients less than one represent a mass transfer process where the concentrations have not equilibrated.
Contact time between the mass streams is an important consideration in mass transfer and affects the sieving coefficient.
In kidney
In renal physiology, the glomerular sieving coefficient (GSC) can be expressed as:
sieving coefficient = clearance / ultrafiltration rate
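A minimal numerical sketch of both forms of the coefficient follows; all concentration, clearance and ultrafiltration figures are made-up illustrative values.

```python
# Sieving coefficient S = Cr / Cd, using the mean of the pre- and post-contact
# concentrations of each stream, plus the renal form GSC = clearance / UF rate.
# All numbers are illustrative placeholders.

def sieving_coefficient(receiving_pre, receiving_post, donating_pre, donating_post):
    c_receiving = (receiving_pre + receiving_post) / 2   # mean, mass-receiving stream
    c_donating = (donating_pre + donating_post) / 2      # mean, mass-donating stream
    return c_receiving / c_donating

S = sieving_coefficient(0.0, 4.2, 10.0, 5.8)
print(f"S = {S:.2f}")        # < 1: the streams have not fully equilibrated

def glomerular_sieving_coefficient(clearance_ml_min, ultrafiltration_ml_min):
    return clearance_ml_min / ultrafiltration_ml_min

print(f"GSC = {glomerular_sieving_coefficient(90.0, 120.0):.2f}")
```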
See also
Heat exchanger
Condenser pinch point
Sieve
References
Transport phenomena
Chemical engineering
Mechanical engineering | Sieving coefficient | [
"Physics",
"Chemistry",
"Engineering"
] | 258 | [
"Transport phenomena",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Chemical engineering",
"nan",
"Mechanical engineering"
] |
9,494,074 | https://en.wikipedia.org/wiki/KIT%20%28gene%29 | Proto-oncogene c-KIT is the gene encoding the receptor tyrosine kinase protein known as tyrosine-protein kinase KIT, CD117 (cluster of differentiation 117) or mast/stem cell growth factor receptor (SCFR). Multiple transcript variants encoding different isoforms have been found for this gene.
KIT was first described by the German biochemist Axel Ullrich in 1987 as the cellular homolog of the feline sarcoma viral oncogene v-kit.
Function
KIT is a cytokine receptor expressed on the surface of hematopoietic stem cells as well as other cell types. Altered forms of this receptor may be associated with some types of cancer. KIT is a receptor tyrosine kinase type III, which binds to stem cell factor, also known as "steel factor" or "c-kit ligand". When this receptor binds to stem cell factor (SCF) it forms a dimer that activates its intrinsic tyrosine kinase activity, that in turn phosphorylates and activates signal transduction molecules that propagate the signal in the cell. After activation, the receptor is ubiquitinated to mark it for transport to a lysosome and eventual destruction. Signaling through KIT plays a role in cell survival, proliferation, and differentiation. For instance, KIT signaling is required for melanocyte survival, and it is also involved in haematopoiesis and gametogenesis.
Structure
Like other members of the receptor tyrosine kinase III family, KIT consists of an extracellular domain, a transmembrane domain, a juxtamembrane domain, and an intracellular tyrosine kinase domain. The extracellular domain is composed of five immunoglobulin-like domains, and the protein kinase domain is interrupted by a hydrophilic insert sequence of about 80 amino acids. The ligand stem cell factor binds via the second and third immunoglobulin domains.
Cell surface marker
Cluster of differentiation (CD) molecules are markers on the cell surface, as recognized by specific sets of antibodies, used to identify the cell type, stage of differentiation and activity of a cell. KIT is an important cell surface marker used to identify certain types of hematopoietic (blood) progenitors in the bone marrow. To be specific, hematopoietic stem cells (HSC), multipotent progenitors (MPP), and common myeloid progenitors (CMP) express high levels of KIT. Common lymphoid progenitors (CLP) express low surface levels of KIT. KIT also identifies the earliest thymocyte progenitors in the thymus—early T lineage progenitors (ETP/DN1) and DN2 thymocytes express high levels of c-Kit. It is also a marker for mouse prostate stem cells. In addition, mast cells, melanocytes in the skin, and interstitial cells of Cajal in the digestive tract express KIT. In humans, expression of c-kit in helper-like innate lymphoid cells (ILCs) which lack the expression of CRTH2 (CD294) is used to mark the ILC3 population.
CD117/c-KIT is expressed not only by bone marrow-derived stem cells, but also by those found in other adult organs, such as the prostate, liver, and heart, suggesting that SCF/c-KIT signaling pathways may contribute to stemness in some organs. Additionally, c-KIT has been associated with numerous biological processes in other cell types. For example, c-KIT signaling has been shown to regulate oogenesis, folliculogenesis, and spermatogenesis, playing important roles in female and male fertility.
Mobilization
Hematopoietic progenitor cells are normally present in the blood at low levels. Mobilization is the process by which progenitors are made to migrate from the bone marrow into the bloodstream, thus increasing their numbers in the blood. Mobilization is used clinically as a source of hematopoietic stem cells for hematopoietic stem cell transplantation (HSCT). Signaling through KIT has been implicated in mobilization. At the current time, G-CSF is the main drug used for mobilization; it indirectly activates KIT. Plerixafor (an antagonist of CXCR4-SDF1) in combination with G-CSF, is also being used for mobilization of hematopoietic progenitor cells. Direct KIT agonists are currently being developed as mobilization agents.
Role in cancer
Activating mutations in this gene are associated with gastrointestinal stromal tumors, testicular seminoma, mast cell disease, melanoma, and acute myeloid leukemia, while inactivating mutations are associated with the genetic defect piebaldism.
c-KIT plays an important role in regulating many mechanisms leading to tumor formation and progression of carcinomas. c-KIT has been proposed as a regulator of stemness in several cancers. Its expression has been linked to cancer stemness in ovarian cancer cells, colon cancer cells, non-small cell lung cancer cells, and prostate cancer cells. c-KIT has also been linked to the epithelial-mesenchymal transition (EMT), which is important for tumor aggressiveness and metastatic potential. Ectopic expression of c-KIT and EMT have been linked in adenoid cystic carcinoma of the salivary gland, thymic carcinomas, ovarian cancer cells, and prostate cancer cells. Several lines of evidence suggest that SCF/c-KIT signaling plays an important role in the tumor microenvironment. For example, in mice high levels of c-KIT in mast cells as well as its presence in the tumor microenvironment promote angiogenesis, leading to increased tumor growth and metastasis.
Anti-KIT therapies
KIT is a proto-oncogene, meaning that overexpression or mutations of this protein can lead to cancer. Seminomas, a subtype of testicular germ cell tumors, frequently have activating mutations in exon 17 of KIT. In addition, the gene encoding KIT is frequently overexpressed and amplified in this tumor type, most commonly occurring as a single gene amplicon. Mutations of KIT have also been implicated in leukemia, a cancer of hematopoietic progenitors, melanoma, mast cell disease, and gastrointestinal stromal tumors (GISTs). The efficacy of imatinib (trade name Gleevec), a KIT inhibitor, is determined by the mutation status of KIT:
When the mutation has occurred in exon 11 (as is the case many times in GISTs), the tumors are responsive to imatinib. However, if the mutation occurs in exon 17 (as is often the case in seminomas and leukemias), the receptor is not inhibited by imatinib. In those cases other inhibitors such as dasatinib, avapritinib, or nilotinib can be used. Researchers have investigated the dynamic behavior of the wild-type and mutant D816H KIT receptor, emphasizing the extended A-loop (EAL) region (805-850) through computational analysis. This atomistic investigation of the mutant KIT receptor, focused on the EAL region, provided better insight into the sunitinib resistance mechanism of the KIT receptor and could help to discover new therapeutics for KIT-based resistant tumor cells in GIST therapy.
The preclinical agent, KTN0182A, is an anti-KIT, pyrrolobenzodiazepine (PBD)-containing antibody-drug conjugate which shows anti-tumor activity in vitro and in vivo against a range of tumor types.
Diagnostic relevance
Antibodies to KIT are widely used in immunohistochemistry to help distinguish particular types of tumour in histological tissue sections. It is used primarily in the diagnosis of GISTs, which are positive for KIT, but negative for markers such as desmin and S-100, which are positive in smooth muscle and neural tumors, which have a similar appearance. In GISTs, KIT staining is typically cytoplasmic, with stronger accentuation along the cell membranes. KIT antibodies can also be used in the diagnosis of mast cell tumours and in distinguishing seminomas from embryonal carcinomas.
Interactions
KIT has been shown to interact with:
APS,
BCR,
CD63,
CD81,
CD9,
CRK,
CRKL,
DOK1,
FES,
GRB10,
Grb2,
KITLG,
LNK,
LYN,
MATK,
MPDZ,
PIK3R1,
PTPN11,
PTPN6,
STAT1,
SOCS1,
SOCS6,
SRC, and
TEC.
See also
Cytokine receptor
List of genes mutated in pigmented cutaneous lesions
References
Further reading
External links
C-kit receptor entry in the public domain NCI Dictionary of Cancer Terms
Immunoglobulin superfamily cytokine receptors
EC 2.7.10
Tyrosine kinase receptors | KIT (gene) | [
"Chemistry"
] | 1,915 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
9,494,086 | https://en.wikipedia.org/wiki/Marine%20larval%20ecology | Marine larval ecology is the study of the factors influencing dispersing larvae, which many marine invertebrates and fishes have. Marine animals with a larva typically release many larvae into the water column, where the larvae develop before metamorphosing into adults.
Marine larvae can disperse over long distances, although determining the actual distance is challenging, because of their size and the lack of a good tracking method. Knowing dispersal distances is important for managing fisheries, effectively designing marine reserves, and controlling invasive species.
Theories on the evolution of a biphasic life history
Larval dispersal is one of the most important topics in marine ecology today. Many marine invertebrates and many fishes have a bi-phasic life cycle with a pelagic larva or pelagic eggs that can be transported over long distances, and a demersal or benthic adult. There are several theories behind why these organisms have evolved this biphasic life history:
Larvae use different food sources than adults, which decreases competition between life stages.
Pelagic larvae can disperse large distances, colonize new territory, and move away from habitats that have become overcrowded or otherwise unsuitable.
A long pelagic larval phase can help a species to break its parasite cycles.
Pelagic larvae avoid benthic predators.
Dispersing as pelagic larvae can be risky. For example, while larvae do avoid benthic predators, they are still exposed to pelagic predators in the water column.
Larval development strategies
Marine larvae develop via one of three strategies: direct, lecithotrophic, or planktotrophic. Each strategy carries risks of predation and the difficulty of finding a good settlement site.
Direct developing larvae look like the adult. They have typically very low dispersal potential, and are known as "crawl-away larvae", because they crawl away from their egg after hatching. Some species of frogs and snails hatch this way.
Lecithotrophic larvae have greater dispersal potential than direct developers. Many fish species and some benthic invertebrates have lecithotrophic larvae, which have yolk droplets or a yolk sac for nutrition during dispersal. Some lecithotrophic species can also feed in the water column, but many, such as tunicates, cannot and so must settle before depleting their yolk. Consequently, these species have short pelagic larval durations and do not disperse long distances.
Planktotrophic larvae feed while they are in the water column and can remain pelagic for a long time, so they disperse over long distances. This dispersal ability is a key adaptation of benthic marine invertebrates. Planktotrophic larvae feed on phytoplankton and small zooplankton, including other larvae. Planktotrophic development is the most common type of larval development, especially among benthic invertebrates.
Because planktotrophic larvae spend a long time in the water column and recruit successfully with low probability, early researchers developed the "lottery hypothesis", which states that animals release huge numbers of larvae to increase the chances that at least one will survive, and that larvae cannot influence their probability of success. This hypothesis views larval survival and successful recruitment as chance events, which numerous studies on larval behavior and ecology have since shown to be false. Though it has been generally disproved, the larval lottery hypothesis represents an important understanding of the difficulties faced by larvae during their time in the water column.
Predator defense
Predation is a major threat to marine larvae, which are an important food source for many organisms. Invertebrate larvae in estuaries are particularly at risk because estuaries are nursery grounds for planktivorous fishes. Larvae have evolved strategies to cope with this threat, including direct defense and avoidance.
Direct defense
Direct defense can include protective structures and chemical defenses. Most planktivorous fishes are gape-limited predators, meaning their prey is determined by the width of their open mouths, making larger larvae difficult to ingest. One study demonstrated that spines serve a protective function by removing spines from estuarine crab larvae and monitoring differences in predation rates between de-spined and intact larvae. The study also showed that predator defense is partly behavioral, as larvae can keep spines relaxed but erect them in the presence of predators.
Avoidance
Larvae can avoid predators on small and large spatial scales. Some larvae do this by sinking when approached by a predator. A more common avoidance strategy is to become active at night and remain hidden during the day to avoid visual predators. Most larvae and plankton undertake diel vertical migrations between deeper waters with less light and fewer predators during the day and shallow waters in the photic zone at night, where microalgae is abundant. Estuarine invertebrate larvae avoid predators by developing in the open ocean, where there are fewer predators. This is done using reverse tidal vertical migrations. Larvae use tidal cycles and estuarine flow regimes to aid their departure to the ocean, a process that is well-studied in many estuarine crab species.
An example of reverse tidal migration performed by crab species would begin with larvae being released on a nocturnal spring high tide to limit predation by planktivorous fishes. As the tide begins to ebb, larvae swim to the surface to be carried away from the spawning site. When the tide begins to flood, larvae swim to the bottom, where water moves more slowly due to the boundary layer. When the tide again changes back to ebb, the larvae swim to the surface waters and resume their journey to the ocean. Depending on the length of the estuary and the speed of the currents, this process can take anywhere from one tidal cycle to several days.
Dispersal and settlement
The most widely accepted theory explaining the evolution of a pelagic larval stage is the need for long-distance dispersal ability. Sessile and sedentary organisms such as barnacles, tunicates, and mussels require a mechanism to move their young into new territory, since they cannot move long distances as adults. Many species have relatively long pelagic larval durations on the order of weeks or months. During this time, larvae feed and grow, and many species metamorphose through several stages of development. For example, barnacles molt through six naupliar stages before becoming a cyprid and seeking appropriate settlement substrate.
This strategy can be risky. Some larvae have been shown to be able to delay their final metamorphosis for a few days or weeks, but most species cannot delay it at all. If these larvae metamorphose far from a suitable settlement site, they perish. Many invertebrate larvae have evolved complex behaviors and endogenous rhythms to ensure successful and timely settlement.
Many estuarine species exhibit swimming rhythms of reverse tidal vertical migration to aid in their transport away from their hatching site. Individuals can also exhibit tidal vertical migrations to reenter the estuary when they are competent to settle.
As larvae reach their final pelagic stage, they become much more tactile; clinging to anything larger than themselves. One study observed crab postlarvae and found that they would swim vigorously until they encountered a floating object, which they would cling to for the remainder of the experiment. It was hypothesized that by clinging to floating debris, crabs can be transported towards shore due to the oceanographic forces of internal waves, which carry floating debris shoreward regardless of the prevailing currents.
Once returning to shore, settlers encounter difficulties concerning their actual settlement and recruitment into the population. Space is a limiting factor for sessile invertebrates on rocky shores. Settlers must be wary of adult filter feeders, which cover substrate at settlement sites and eat particles the size of larvae. Settlers must also avoid becoming stranded out of water by waves, and must select a settlement site at the proper tidal height to prevent desiccation and avoid competition and predation. To overcome many of these difficulties, some species rely on chemical cues to assist them in selecting an appropriate settlement site. These cues are usually emitted by adult conspecifics, but some species cue on specific bacterial mats or other qualities of the substrate.
Larval sensory systems
Although a pelagic larva allows many species to increase their dispersal range and decrease the risk of inbreeding, it also comes with challenges: marine larvae risk being washed away without ever finding a suitable habitat for settlement. Therefore, they have evolved many sensory systems:
Sensory systems
Magnetic fields
Far from shore, larvae are able to use magnetic fields to orient themselves towards the coast over large spatial scales. There is additional evidence that species can recognize anomalies in the magnetic field to return to the same location multiple times throughout their life. Though the mechanisms that these species use are poorly understood, it appears that magnetic fields play an important role in larval orientation offshore, where other cues such as sound and chemicals may be difficult to detect.
Vision and non-visual light perception
Phototaxis (the ability to differentiate between light and dark areas) is important for finding a suitable habitat. Phototaxis evolved relatively quickly, and taxa that lack developed eyes, such as scyphozoans, use phototaxis to find shaded areas to settle away from predators.
Phototaxis is not the only mechanism that guides larvae by light. The larvae of the annelid Platynereis dumerilii not only show positive and negative phototaxis over a broad range of the light spectrum, but also swim downward, toward the center of gravity, when they are exposed to non-directional UV-light. This behavior is a UV-induced positive gravitaxis. This gravitaxis and the negative phototaxis induced by light coming from the water surface together form a ratio-metric depth gauge. Such a depth gauge is based on the different attenuation of light across the different wavelengths in water. In clear water, blue light (470 nm) penetrates the deepest, so the larvae need only compare the two wavelength ranges, UV/violet (< 420 nm) and the rest of the spectrum, to find their preferred depth.
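A rough numerical sketch of this ratio-metric principle follows; the attenuation coefficients K_UV and K_BLUE are illustrative assumptions for clear water, not measured values from the studies on Platynereis dumerilii.

import math

# Assumed attenuation coefficients (per metre) for clear water; UV/violet light
# is absorbed faster than blue light around 470 nm.
K_UV = 0.25
K_BLUE = 0.05

def uv_to_blue_ratio(depth_m):
    # Beer-Lambert decay of each band; the ratio depends only on depth,
    # not on the overall brightness at the surface.
    return math.exp(-(K_UV - K_BLUE) * depth_m)

for depth in (0, 5, 10, 20):
    print(depth, round(uv_to_blue_ratio(depth), 3))
# 0 1.0, 5 0.368, 10 0.135, 20 0.018: the falling ratio encodes depth.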
Species that produce more complex larvae, such as fish, can use full vision to find a suitable habitat on small spatial scales. Larvae of damselfish use vision to find and settle near adults of their species.
Sound
Marine larvae use sound and vibrations to find a good habitat where they can settle and metamorphose into juveniles. This behavior has been seen in fish as well as in the larvae of scleractinian corals. Many families of coral reef fish are particularly attracted to high-frequency sounds produced by invertebrates, which larvae use as an indicator of food availability and complex habitat where they may be protected from predators. It is thought that larvae avoid low frequency sounds because they may be associated with transient fish or predators and are therefore not a reliable indicator of safe habitat.
The spatial range at which larvae detect and use sound waves is still uncertain, though some evidence suggests that it may only be reliable at very small scales. There is concern that changes in community structure in nursery habitats, such as seagrass beds, kelp forests, and mangroves, could lead to a collapse in larval recruitment due to a decrease in sound-producing invertebrates. Other researchers argue that larvae may still successfully find a place to settle even if one cue is unreliable.
Olfaction
Many marine organisms use olfaction (chemical cues in the form of scent) to locate a safe area to metamorphose at the end of their larval stage. This has been shown in both vertebrates and invertebrates. Research has shown that larvae are able to distinguish between water from the open ocean and water from more suitable nursery habitats such as lagoons and seagrass beds. Chemical cues can be extremely useful for larvae, but may not have a constant presence, as water input can depend on currents and tidal flow.
Human impacts on sensory systems
Recent research in the field of larval sensory biology has begun focusing more on how human impacts and environmental disturbance affect settlement rates and larval interpretation of different habitat cues. Ocean acidification due to anthropogenic climate change and sedimentation have become areas of particular interest.
Ocean acidification
Although several behaviours of coral reef fish, including larvae, have been found to be detrimentally affected by projected end-of-21st-century ocean acidification in previous experiments, a 2020 replication study found that "end-of-century ocean acidification levels have negligible effects on [three] important behaviours of coral reef fishes" and with "data simulations, [showed] that the large effect sizes and small within-group variances that have been reported in several previous studies are highly improbable". In 2021, it emerged that some of the previous studies about coral reef fish behaviour changes have been accused of being fraudulent. Furthermore, effect sizes of studies assessing ocean acidification effects on fish behaviour have declined dramatically over a decade of research on this topic, with effects appearing negligible since 2015.
Ocean acidification has been shown to alter the way that pelagic larvae are able to process information and production of the cues themselves. Acidification can alter larval interpretations of sounds, particularly in fish, leading to settlement in suboptimal habitat. Though the mechanism for this process is still not fully understood, some studies indicate that this breakdown may be due to a decrease in size or density of their otoliths. Furthermore, sounds produced by invertebrates that larvae rely on as an indicator of habitat quality can also change due to acidification. For example, snapping shrimp produce different sounds that larvae may not recognize under acidified conditions due to differences in shell calcification.
Hearing is not the only sense that may be altered under future ocean chemistry conditions. Evidence also suggests that larval ability to process olfactory cues was also affected when tested under future pH conditions. Red color cues that coral larvae use to find crustose coralline algae, with which they have a commensal relationship, may also be in danger due to algal bleaching.
Sedimentation
Sediment runoff, from natural storm events or human development, can also impact larval sensory systems and survival. One study focusing on red soil found that increased turbidity due to runoff negatively influenced the ability of fish larvae to interpret visual cues. More unexpectedly, they also found that red soil can also impair olfactory capabilities.
Self-recruitment
Marine ecologists are often interested in the degree of self-recruitment in populations. Historically, larvae were considered passive particles that were carried by ocean currents to faraway locations. This led to the belief that all marine populations were demographically open, connected by long distance larval transport. Recent work has shown that many populations are self-recruiting, and that larvae and juveniles are capable of purposefully returning to their natal sites.
Researchers take a variety of approaches to estimating population connectivity and self-recruitment, and several studies have demonstrated their feasibility. Jones et al. and Swearer et al., for example, investigated the proportion of fish larvae returning to their natal reef. Both studies found higher than expected self-recruitment in these populations using mark, release, and recapture sampling. These studies were the first to provide conclusive evidence of self-recruitment in a species with the potential to disperse far from its natal site, and laid the groundwork for numerous future studies.
Conservation
Ichthyoplankton have a high mortality rate as they transition their food source from yolk sac to zooplankton. It is proposed that this mortality rate is related to food supply as well as an inability to move through the water effectively at this stage of development, leading to starvation. Turbidity of water can also impact the organisms' ability to feed even when there is a high density of prey. Reducing hydrodynamic constraints on cultivated populations could lead to higher yields for repopulation efforts and has been proposed as a means of conserving fish populations by acting at the larval level.
A network of marine reserves has been initiated for the conservation of the world's marine larval populations. These areas restrict fishing and therefore increase the number of otherwise fished species. This leads to a healthier ecosystem and affects the number of overall species within the reserve as compared to nearby fished areas; however, the full effect of an increase in larger predator fish on larval populations is not currently known. Also, the potential for utilizing the motility of fish larvae to repopulate the water surrounding the reserve is not fully understood. Marine reserves are a part of a growing conservation effort to combat overfishing; however, reserves still only comprise about 1% of the world's oceans. These reserves are also not protected from other human-derived threats, such as chemical pollutants, so they cannot be the only method of conservation without certain levels of protection for the water around them as well.
For effective conservation, it is important to understand the larval dispersal patterns of the species in danger, as well as the dispersal of invasive species and predators which could impact their populations. Understanding these patterns is an important factor when creating protocol for governing fishing and creating reserves. A single species may have multiple dispersal patterns. The spacing and size of marine reserves must reflect this variability to maximize their beneficial effect. Species with shorter dispersal patterns are more likely to be affected by local changes and require higher priority for conservation because of the separation of subpopulations.
Implications
The principles of marine larval ecology can be applied in other fields too, whether marine or not. Successful fisheries management relies heavily on understanding population connectivity and dispersal distances, which are driven by larvae. Dispersal and connectivity must also be considered when designing natural reserves. If populations are not self-recruiting, reserves may lose their species assemblages. Many invasive species can disperse over long distances, including the seeds of land plants and larvae of marine invasive species. Understanding the factors influencing their dispersal is key to controlling their spread and managing established populations.
See also
Crustacean larvae
Ichthyoplankton
References
Marine biology | Marine larval ecology | [
"Biology"
] | 3,666 | [
"Marine biology"
] |
2,227,485 | https://en.wikipedia.org/wiki/Recursive%20data%20type | In computer programming languages, a recursive data type (also known as a recursively-defined, inductively-defined or inductive data type) is a data type for values that may contain other values of the same type. Data of recursive types are usually viewed as directed graphs.
An important application of recursion in computer science is in defining dynamic data structures such as Lists and Trees. Recursive data structures can dynamically grow to an arbitrarily large size in response to runtime requirements; in contrast, a static array's size requirements must be set at compile time.
Sometimes the term "inductive data type" is used for algebraic data types which are not necessarily recursive.
Example
An example is the list type, in Haskell:
data List a = Nil | Cons a (List a)
This indicates that a list of a's is either an empty list or a cons cell containing an 'a' (the "head" of the list) and another list (the "tail").
Another example is a similar singly linked type in Java:
class List<E> {
E value;
List<E> next;
}
This indicates that a non-empty list of type E contains a data member of type E, and a reference to another List object for the rest of the list (or a null reference to indicate that this is the end of the list).
Mutually recursive data types
Data types can also be defined by mutual recursion. The most important basic example of this is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically:
f: [t[1], ..., t[k]]
t: v f
A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types.
This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest:
t: v [t[1], ..., t[k]]
A tree t consists of a pair of a value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which requires disentangling to prove results about.
In Standard ML, the tree and forest data types can be mutually recursively defined as follows, allowing empty trees:
datatype 'a tree = Empty | Node of 'a * 'a forest
and 'a forest = Nil | Cons of 'a tree * 'a forest
In Haskell, the tree and forest data types can be defined similarly:
data Tree a = Empty
| Node (a, Forest a)
data Forest a = Nil
| Cons (Tree a) (Forest a)
Theory
In type theory, a recursive type has the general form μα.T where the type variable α may appear in the type T and stands for the entire type itself.
For example, the natural numbers (see Peano arithmetic) may be defined by the Haskell datatype:
data Nat = Zero | Succ Nat
In type theory, we would say: nat = μα. 1 + α, where the two arms of the sum type represent the Zero and Succ data constructors. Zero takes no arguments (thus represented by the unit type) and Succ takes another Nat (thus another element of μα. 1 + α).
There are two forms of recursive types: the so-called isorecursive types, and equirecursive types. The two forms differ in how terms of a recursive type are introduced and eliminated.
Isorecursive types
With isorecursive types, the recursive type μα.T and its expansion (or unrolling) T[μα.T/α] (where the notation X[Y/Z] indicates that all instances of Z are replaced with Y in X) are distinct (and disjoint) types with special term constructs, usually called roll and unroll, that form an isomorphism between them. To be precise: roll : T[μα.T/α] → μα.T and unroll : μα.T → T[μα.T/α], and these two are inverse functions.
Equirecursive types
Under equirecursive rules, a recursive type and its unrolling are equal – that is, those two type expressions are understood to denote the same type. In fact, most theories of equirecursive types go further and essentially specify that any two type expressions with the same "infinite expansion" are equivalent. As a result of these rules, equirecursive types contribute significantly more complexity to a type system than isorecursive types do. Algorithmic problems such as type checking and type inference are more difficult for equirecursive types as well. Since direct comparison does not make sense on an equirecursive type, they can be converted into a canonical form in O(n log n) time, which can easily be compared.
Isorecursive types capture the form of self-referential (or mutually referential) type definitions seen in nominal object-oriented programming languages, and also arise in type-theoretic semantics of objects and classes. In functional programming languages, isorecursive types (in the guise of datatypes) are common too.
Recursive type synonyms
In TypeScript, recursion is allowed in type aliases. Thus, the following example is allowed.
type Tree = number | Tree[];
let tree: Tree = [1, [2, 3]];
However, recursion is not allowed in type synonyms in Miranda, OCaml (unless -rectypes flag is used or it's a record or variant), or Haskell; so, for example the following Haskell types are illegal:
type Bad = (Int, Bad)
type Evil = Bool -> Evil
Instead, they must be wrapped inside an algebraic data type (even if it has only one constructor):
data Good = Pair Int Good
data Fine = Fun (Bool -> Fine)
This is because type synonyms, like typedefs in C, are replaced with their definition at compile time. (Type synonyms are not "real" types; they are just "aliases" for convenience of the programmer.) But if this is attempted with a recursive type, it will loop infinitely because no matter how many times the alias is substituted, it still refers to itself, e.g. "Bad" will grow indefinitely: Bad → (Int, Bad) → (Int, (Int, Bad)) → ... .
Another way to see it is that a level of indirection (the algebraic data type) is required to allow the isorecursive type system to figure out when to roll and unroll.
See also
Recursive definition
Algebraic data type
Inductive type
Node (computer science)
References
Sources
Data types
Type theory | Recursive data type | [
"Mathematics"
] | 1,488 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
2,227,503 | https://en.wikipedia.org/wiki/Langmuir%E2%80%93Blodgett%20film | A Langmuir–Blodgett (LB) film is an emerging kind of 2D material used to fabricate heterostructures for nanotechnology, formed when Langmuir films—or Langmuir monolayers (LM)—are transferred from the liquid-gas interface to solid supports during the vertical passage of the support through the monolayers. LB films can contain one or more monolayers of an organic material, deposited from the surface of a liquid onto a solid by immersing (or emersing) the solid substrate into (or from) the liquid. A monolayer is adsorbed homogeneously with each immersion or emersion step, thus films with very accurate thickness can be formed. This thickness is accurate because the thickness of each monolayer is known and can therefore be added to find the total thickness of a Langmuir–Blodgett film.
The monolayers are assembled vertically and are usually composed either of amphiphilic molecules (see chemical polarity) with a hydrophilic head and a hydrophobic tail (example: fatty acids) or nowadays commonly of nanoparticles.
Langmuir–Blodgett films are named after Irving Langmuir and Katharine B. Blodgett, who invented this technique while working in Research and Development for General Electric Co.
Historical background
Advances to the discovery of LB and LM films began with Benjamin Franklin in 1773 when he dropped about a teaspoon of oil onto a pond. Franklin noticed that the waves were calmed almost instantly and that the calming of the waves spread for about half an acre. What Franklin did not realize was that the oil had formed a monolayer on top of the pond surface. Over a century later, Lord Rayleigh quantified what Benjamin Franklin had seen. Knowing that the oil, oleic acid, had spread evenly over the water, Rayleigh calculated that the thickness of the film was 1.6 nm by knowing the volume of oil dropped and the area of coverage.
With the help of her kitchen sink, Agnes Pockels showed that area of films can be controlled with barriers. She added that surface tension varies with contamination of water. She used different oils to deduce that surface pressure would not change until area was confined to about 0.2 nm2. This work was originally written as a letter to Lord Rayleigh who then helped Agnes Pockels become published in the journal, Nature, in 1891.
Agnes Pockels’ work set the stage for Irving Langmuir who continued to work and confirmed Pockels’ results. Using Pockels’ idea, he developed the Langmuir (or Langmuir–Blodgett) trough. His observations indicated that chain length did not impact the affected area since the organic molecules were arranged vertically.
Langmuir's breakthrough did not occur until he hired Katharine Blodgett as his assistant. Blodgett initially sought a job at General Electric (GE) with Langmuir during the Christmas break of her senior year at Bryn Mawr College, where she received a BA in Physics. Langmuir advised Blodgett that she should continue her education before working for him. She thereafter attended the University of Chicago for her MA in Chemistry. Upon completion of her Master's, Langmuir hired her as his assistant. However, breakthroughs in surface chemistry happened after she received her PhD degree in 1926 from Cambridge University.
While working for GE, Langmuir and Blodgett discovered that when a solid surface is inserted into an aqueous solution containing organic moieties, the organic molecules will deposit a monolayer homogeneously over the surface. This is the Langmuir–Blodgett film deposition process. Through this work in surface chemistry and with the help of Blodgett, Langmuir was awarded the Nobel Prize in 1932. In addition, Blodgett used Langmuir–Blodgett film to create 99% transparent anti-reflective glass by coating glass with fluorinated organic compounds, forming a simple anti-reflective coating.
Physical insight
Langmuir films are formed when amphiphilic (surfactants) molecules or nanoparticles are spread on the water at an air–water interface. Surfactants (or surface-acting agents) are molecules with hydrophobic 'tails' and hydrophilic 'heads'. When surfactant concentration is less than the minimum surface concentration of collapse and it is completely insoluble in water, the surfactant molecules arrange themselves as shown in Figure 1 below. This tendency can be explained by surface-energy considerations. Since the tails are hydrophobic, their exposure to air is favoured over that to water. Similarly, since the heads are hydrophilic, the head–water interaction is more favourable than head-air interaction. The overall effect is reduction in the surface energy (or equivalently, surface tension of water).
For very small concentrations, far from the surface density compatible with the collapse of the monolayer (which leads to polylayer structures), the surfactant molecules execute a random motion on the water–air interface. This motion can be thought to be similar to the motion of ideal-gas molecules enclosed in a container. The corresponding thermodynamic variables for the surfactant system are surface pressure (Π), surface area (A) and number of surfactant molecules (N). This system behaves similarly to a gas in a container. The density of surfactant molecules as well as the surface pressure increases upon reducing the surface area A ('compression' of the 'gas'). Further compression of the surfactant molecules on the surface shows behavior similar to phase transitions. The ‘gas’ gets compressed into ‘liquid’ and ultimately into a perfectly close-packed array of the surfactant molecules on the surface corresponding to a ‘solid’ state. The liquid state is usually separated into the liquid-expanded and liquid-condensed states. All the Langmuir film states are classified according to the compressibility of the films, defined as C = −(1/A)(∂A/∂Π), usually related to the in-plane elasticity of the monolayer.
The condensed Langmuir films (at surface pressures usually higher than 15 mN/m, typically 30 mN/m) can be subsequently transferred onto a solid substrate to create highly organized thin film coatings; this transfer is carried out in Langmuir–Blodgett troughs.
Besides LB films made from surfactants as depicted in Figure 1, similar monolayers can also be made from inorganic nanoparticles.
Pressure–area characteristics
Adding a monolayer to the surface reduces the surface tension, and the surface pressure Π is given by the following equation:
Π = γ₀ − γ
where γ₀ is equal to the surface tension of the water and γ is the surface tension due to the monolayer. But the concentration-dependence of surface tension (similar to Langmuir isotherm) is as follows:
γ = γ₀ − RTΓ
Thus,
Π = RTΓ
or, with the surface concentration Γ = n/A,
ΠA = nRT
The last equation indicates a relationship similar to ideal gas law. However, the concentration-dependence of surface tension is valid only when the solutions are dilute and concentrations are low. Hence, at very low concentrations of the surfactant, the molecules behave like ideal gas molecules.
Experimentally, the surface pressure is usually measured using the Wilhelmy plate. A pressure sensor/electrobalance arrangement detects the pressure exerted by the monolayer. Also monitored is the area on the side of the barrier on which the monolayer resides.
Figure 2. A Wilhelmy plate
A simple force balance on the plate leads to the following equation for the surface pressure:
Π = ΔF / (2(w + t)) ≈ ΔF / 2w
only when w ≫ t. Here, w and t are the dimensions (width and thickness) of the plate, and ΔF is the difference in forces. The Wilhelmy plate measurements give pressure – area isotherms that show phase transition-like behaviour of the LM films, as mentioned before (see figure below). In the gaseous phase, there is minimal pressure increase for a decrease in area. This continues until the first transition occurs and there is a proportional increase in pressure with decreasing area. Moving into the solid region is accompanied by another sharp transition to a more severe area dependent pressure. This trend continues up to a point where the molecules are relatively close packed and have very little room to move. Applying an increasing pressure at this point causes the monolayer to become unstable and collapse, forming polylayer structures towards the air phase. The surface pressure during the monolayer collapse may remain approximately constant (in a process near the equilibrium) or may decay abruptly (out of equilibrium - when the surface pressure was over-increased because lateral compression was too fast for monomolecular rearrangements).
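A small sketch of this force-balance calculation follows; the plate dimensions and the force reading are invented for illustration and are not values from any particular instrument.

def surface_pressure_mN_per_m(delta_force_N, width_m, thickness_m):
    # Pi = dF / (2 * (w + t)); with a thin plate this is close to dF / (2 * w).
    return 1e3 * delta_force_N / (2.0 * (width_m + thickness_m))

# Hypothetical plate 1 cm wide and 0.1 mm thick, with a force difference of 0.6 mN:
print(round(surface_pressure_mN_per_m(0.6e-3, 1.0e-2, 1.0e-4), 1))  # about 29.7 mN/m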
Figure 3. (i) Surface pressure – Area isotherms. (ii) Molecular configuration in the three regions marked in the -A curve; (a) gaseous phase, (b) liquid-expanded phase, and (c) condensed phase. (Adapted from Osvaldo N. Oliveira Jr., Brazilian Journal of Physics, vol. 22, no. 2, June 1992)
Applications
Many possible applications have been suggested over the years for LM and LB films. Their defining characteristics are extreme thinness and a high degree of structural order. Depending on the specific organic compounds they are composed of, these films show different optical, electrical and biological properties. Organic compounds usually respond more strongly than inorganic materials to outside factors (pressure, temperature or gas change). LM films can also be used as models for one half of a cellular membrane.
LB films consisting of nanoparticles can be used for example to create functional coatings, sophisticated sensor surfaces and to coat silicon wafers.
LB films can be used as passive layers in MIS (metal-insulator-semiconductor) structures; they have a more open structure than silicon oxide and allow gases to penetrate to the interface more effectively.
LB films also can be used as biological membranes. Lipid molecules with the fatty acid moiety of long carbon chains attached to a polar group have received extended attention because of being naturally suited to the Langmuir method of film production. This type of biological membrane can be used to investigate: the modes of drug action, the permeability of biologically active molecules, and the chain reactions of biological systems.
Also, it is possible to propose field effect devices for observing the immunological response and enzyme-substrate reactions by collecting biological molecules such as antibodies and enzymes in insulating LB films.
Anti-reflective glass can be produced with successive layers of fluorinated organic film.
A glucose biosensor can be made of a poly(3-hexylthiophene) Langmuir–Blodgett film, which entraps glucose oxidase and is transferred onto a coated indium-tin-oxide glass plate.
UV resists can be made of poly(N-alkylmethacrylamides) Langmuir–Blodgett film.
UV light and conductivity of a Langmuir–Blodgett film.
Langmuir–Blodgett films are inherently 2D-structures and can be built up layer by layer, by dipping hydrophobic or hydrophilic substrates into a liquid sub-phase.
Langmuir–Blodgett patterning is a new paradigm for large-area patterning with mesostructured features
Recently, it has been demonstrated that Langmuir–Blodgett is an effective technique even to produce ultra-thin films of emerging two-dimensional layered materials on a large scale.
See also
References
Bibliography
R. W. Corkery, Langmuir, 1997, 13 (14), 3591–3594
Osvaldo N. Oliveira Jr., Brazilian Journal of Physics, vol. 22, no. 2, June 1992
Roberts G G, Pande K P and Barlow, Phys. Technol., Vol. 12, 1981
Singhal, Rahul. Poly-3-Hexyl Thiopene Langmuir-Blodgett Films for Application to Glucose Biosensor. National Physics Laboratory: Biotechnology and Bioengineering, p 277-282, February 5, 2004. John and Wiley Sons Inc.
Guo, Yinzhong. Preparation of poly(N-alkylmethacrylamide) Langmuir–Blodgett films for the application to a novel dry-developed positive deep UV resist. Macromolecules, p1115-1118, February 23, 1999. ACS
Franklin, Benjamin, Of the stilling of Waves by means of Oil. Letter to William Brownrigg and the Reverend Mr. Farish. London, November 7, 1773.
Pockels, A., Surface Tension, Nature, 1891, 43, 437.
Blodgett, Katherine B., Use of Interface to Extinguish Reflection of Light from Glass. Physical Review, 1939, 55,
A. Ulman, An Introduction to Ultrathin Organic Films From Langmuir-Blodgett to Self-Assembly, Academic Press, Inc.: San Diego (1991).
I.R. Peterson, "Langmuir Blodgett Films ", J. Phys. D 23, 4, (1990) 379–95.
I.R. Peterson, "Langmuir Monolayers", in T.H. Richardson, Ed., Functional Organic and Polymeric Materials Wiley: NY (2000).
L.S. Miller, D.E. Hookes, P.J. Travers and A.P. Murphy, "A New Type of Langmuir-Blodgett Trough", J. Phys. E 21 (1988) 163–167.
I.R.Peterson, J.D.Earls. I.R.Girling and G.J.Russell, "Disclinations and Annealing in Fatty-Acid Monolayers", Mol. Cryst. Liq. Cryst. 147 (1987) 141–147.
Syed Arshad Hussain, D. Bhattacharjee, "Langmuir-Blodgett Films and Molecular Electronics", Modern Physics Letters B vol. 23 No. 27 (2009) 3437–3451.
Nanotechnology
Phases of matter
Thin films
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 2,913 | [
"Phases of matter",
"Materials science",
"Nanotechnology",
"Planes (geometry)",
"Thin films",
"Matter"
] |
2,227,640 | https://en.wikipedia.org/wiki/Image-forming%20optical%20system | In optics, an image-forming optical system is a system capable of being used for imaging. The diameter of the aperture of the main objective is a common criterion for comparison among optical systems, such as large telescopes.
The two traditional optical systems are mirror-systems (catoptrics) and lens-systems (dioptrics). However, in the late twentieth century, optical fiber was introduced as a technology for transmitting images over long distances. Catoptrics and dioptrics have a focal point that concentrates light onto a specific point, while optical fiber allows the transfer of an image from one plane to another without the need for an optical focus.
Isaac Newton is reported to have designed what he called a catadioptrical phantasmagoria, which can be interpreted to mean an elaborate structure of both mirrors and lenses.
Catoptrics and optical fiber have no chromatic aberration, while dioptrics need to have this error corrected. Newton believed that such correction was impossible, because he thought the path of the light depended only on its color. In 1757 John Dollond made an achromatised dioptric, which was the forerunner of the lenses used in all popular photographic equipment today.
Lower-energy X-rays are the highest-energy electromagnetic radiation that can be focused into an image, using a Wolter telescope. There are three types of Wolter telescopes. Near-infrared is typically the longest wavelength that is handled optically, such as in some large telescopes.
References
Optics
Telescopes | Image-forming optical system | [
"Physics",
"Chemistry",
"Astronomy"
] | 307 | [
"Applied and interdisciplinary physics",
"Optics",
"Telescopes",
" molecular",
"Astronomical instruments",
"Atomic",
" and optical physics"
] |
2,227,994 | https://en.wikipedia.org/wiki/Oxygen%20tank | An oxygen tank is an oxygen storage vessel, which is either held under pressure in gas cylinders, referred to in the industry as high pressure oxygen cylinders, or as liquid oxygen in a cryogenic storage tank.
Uses
Oxygen tanks are used to store gas for:
medical breathing (oxygen therapy) at medical facilities and at home (high pressure cylinder)
breathing at altitude in aviation, either in a decompression emergency, or constantly (as in unpressurized aircraft), usually in high pressure cylinders
oxygen first aid sets, in small portable high pressure cylinders
gas blending, for mixing breathing gases such as nitrox, trimix and heliox
open-circuit scuba sets - mainly used for accelerated decompression in technical diving, in high pressure cylinders
some types of diving rebreather: oxygen rebreathers and fully closed circuit rebreathers, usually in high pressure cylinders
use in climbing, "Bottled oxygen" refers to oxygen in lightweight high pressure cylinders for mountaineering
industrial processes, including the manufacture of steel and monel
oxyacetylene welding equipment, glass lampworking torches, and some gas cutting torches, usually in high pressure cylinders
use as liquid rocket propellants for rocket engines, usually as liquid oxygen at ambient pressure
athletes, specifically on American football sidelines, to expedite recovery after exertion, in high-pressure cylinders.
Breathing oxygen is delivered from the storage tank to users by use of the following methods: oxygen mask, nasal cannula, full face diving mask, diving helmet, demand valve, oxygen rebreather, built in breathing system (BIBS), oxygen tent, and hyperbaric oxygen chamber.
Contrary to popular belief most scuba divers do not carry oxygen tanks. The vast majority of divers breathe air or nitrox stored in a diving cylinder. A small minority breathe trimix, heliox or other exotic gases. Some may carry pure oxygen for accelerated decompression or as supply gas to a rebreather. Some shallow divers, particularly naval combat divers, use oxygen rebreathers, and they use a small oxygen cylinder to provide the gas.
Oxygen is rarely held at pressures higher than , due to the risks of fire triggered by high temperatures caused by adiabatic heating when the gas changes pressure when moving from one vessel to another. Medical use liquid oxygen airgas tanks are typically .
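A back-of-the-envelope sketch of the adiabatic-heating effect follows; it assumes an ideal diatomic gas undergoing reversible adiabatic compression with illustrative pressures, which overstates real transfill temperatures but shows why rapid pressurization can ignite contaminants.

GAMMA = 1.4  # heat-capacity ratio assumed for oxygen treated as an ideal diatomic gas

def adiabatic_temperature(t1_kelvin, p1, p2, gamma=GAMMA):
    # Reversible adiabatic compression of an ideal gas: T2 = T1 * (P2/P1)**((gamma - 1)/gamma)
    return t1_kelvin * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Illustrative example: gas at 293 K suddenly compressed from 1 bar to 20 bar
# reaches roughly 690 K in this ideal limit, hot enough to ignite contaminants.
print(round(adiabatic_temperature(293.0, 1.0, 20.0)))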
All equipment coming into contact with high pressure oxygen must be "oxygen clean" and "oxygen compatible", to reduce the risk of fire. "Oxygen clean" means the removal of any substance that could act as a source of ignition. "Oxygen compatible" means that internal components must not burn readily or degrade easily in a high pressure oxygen environment.
In some countries there are legal and insurance requirements and restrictions on the use, storage and transport of pure oxygen. Oxygen tanks are normally stored in well-ventilated locations, far from potential sources of fire and concentrations of people.
See also
Bottled gas
Gas cylinder
Dewar flask
References
Underwater breathing apparatus
Decompression equipment
Pressure vessels
Tank
Gas technologies | Oxygen tank | [
"Physics",
"Chemistry",
"Engineering"
] | 622 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
2,228,126 | https://en.wikipedia.org/wiki/E2F | E2F is a group of genes that encodes a family of transcription factors (TF) in higher eukaryotes. Three of them are activators: E2F1, 2 and E2F3a. Six others act as repressors: E2F3b, E2F4-8. All of them are involved in the cell cycle regulation and synthesis of DNA in mammalian cells. E2Fs as TFs bind to the TTTCCCGC (or slight variations of this sequence) consensus binding site in the target promoter sequence.
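A minimal sketch of scanning a promoter sequence for this consensus follows; the loose variant pattern and the example sequence are illustrative assumptions, not a validated E2F binding-site model.

import re

# The TTTCCCGC consensus from the text, with C/G variation allowed at two internal
# positions as an illustrative assumption; real E2F sites are usually scored with
# position weight matrices rather than a fixed regular expression.
E2F_SITE = re.compile(r"TTT[CG][CG]CGC")

def find_e2f_sites(promoter_sequence):
    # Return 0-based start positions of consensus-like sites on the given strand.
    return [match.start() for match in E2F_SITE.finditer(promoter_sequence.upper())]

example_promoter = "acgTTTCCCGCaatggacgtcgatta"  # made-up promoter fragment
print(find_e2f_sites(example_promoter))  # [3]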
E2F family
Schematic diagram of the amino acid sequences of E2F family members (N-terminus to the left, C-terminus to the right) highlighting the relative locations of functional domains within each member:
Genes
Homo sapiens E2F1 mRNA or E2F1 protein sequences from NCBI protein and nucleotide database.
Structure
X-ray crystallographic analysis has shown that the E2F family of transcription factors has a fold similar to the winged-helix DNA-binding motif.
Role in the cell cycle
E2F family members play a major role during the G1/S transition in the mammalian and plant cell cycle (see KEGG cell cycle pathway). DNA microarray analysis reveals unique sets of target promoters among E2F family members, suggesting that each protein has a unique role in the cell cycle. Among E2F transcriptional targets are cyclins, CDKs, checkpoint regulators, and DNA repair and replication proteins. Nonetheless, there is a great deal of redundancy among the family members. Mouse embryos lacking E2F1, E2F2, and one of the E2F3 isoforms can develop normally when either E2F3a or E2F3b is expressed.
The E2F family is generally split by function into two groups: transcription activators and repressors. Activators such as E2F1, E2F2, E2F3a promote and help carryout the cell cycle, while repressors inhibit the cell cycle. Yet, both sets of E2F have similar domains. E2F1-6 have DP1,2 heterodimerization domain which allows them to bind to DP1 or DP2, proteins distantly related to E2F. Binding with DP1,2 provides a second DNA binding site, increasing E2F binding stability. Most E2F have a pocket protein binding domain. Pocket proteins such as pRB and related proteins p107 and p130, can bind to E2F when hypophosphorylated. In activators, E2F binding with pRB has been shown to mask the transactivation domain responsible for transcription activation. In repressors E2F4 and E2F5, pocket protein binding (more often p107 and p130 than pRB) mediates recruitment of repression complexes to silence target genes. E2F6, E2F7, and E2F8 do not have pocket protein binding sites and their mechanism for gene silencing is unclear. Cdk4(6)/cyclin D and cdk2/cyclin E phosphorylate pRB and related pocket proteins allowing them to disassociate from E2F. Activator E2F proteins can then transcribe S phase promoting genes. In REF52 cells, overexpression of activator E2F1 is able to push quiescent cells into S phase. While repressors E2F4 and 5 do not alter cell proliferation, they mediate G1 arrest.
E2F activator levels are cyclic, with maximal expression during G1/S. In contrast, E2F repressors stay constant, especially since they are often expressed in quiescent cells. Specifically, E2F5 is only expressed in terminally differentiated cells in mice. The balance between repressor and activator E2F regulate cell cycle progression. When activator E2F family proteins are knocked out, repressors become active to inhibit E2F target genes.
E2F/pRb complexes
The Rb tumor suppressor protein (pRb) binds to the E2F1 transcription factor preventing it from interacting with the cell's transcription machinery. In the absence of pRb, E2F1 (along with its binding partner DP1) mediates the trans-activation of E2F1 target genes that facilitate the G1/S transition and S-phase. E2F targets genes that encode proteins involved in DNA replication (for example DNA polymerase, thymidine kinase, dihydrofolate reductase and cdc6), and chromosomal replication (replication origin-binding protein HsOrc1 and MCM5). When cells are not proliferating, E2F DNA binding sites contribute to transcriptional repression. In vivo footprinting experiments obtained on Cdc2 and B-myb promoters demonstrated E2F DNA binding site occupation during G0 and early G1, when E2F is in transcriptional repressive complexes with the pocket proteins.
pRb is one of the targets of the oncogenic human papilloma virus protein E7, and human adenovirus protein E1A. By binding to pRB, they stop the regulation of E2F transcription factors and drive the cell cycle to enable virus genome replication.
Activators: E2F1, E2F2, E2F3a
Activators are maximally expressed late in G1 and can be found in association with E2F regulated promoters during the G1/S transition. The activation of E2F-3a genes follows upon the growth factor stimulation and the subsequent phosphorylation of the E2F inhibitor retinoblastoma protein, pRB. The phosphorylation of pRB is initiated by cyclin D/cdk4, cdk6 complex and continued by cyclin E/cdk2. Cyclin D/cdk4,6 itself is activated by the MAPK signaling pathway.
When bound to E2F-3a, pRb can directly repress E2F-3a target genes by recruiting chromatin remodeling complexes and histone modifying activities (e.g. histone deacetylase, HDAC) to the promoter.
Inhibitors: E2F3b, E2F4, E2F5, E2F6, E2F7, E2F8
E2F3b, E2F4, E2F5 are expressed in quiescent cells and can be found associated with E2F-binding elements on E2F-target promoters during G0-phase. E2F-4 and 5 preferentially bind to p107/p130.
E2F-6 acts as a transcriptional repressor, but through a distinct, pocket protein independent manner. E2F-6 mediates repression by direct binding to polycomb-group proteins or via the formation of a large multimeric complex containing Mga and Max proteins.
The repressor genes E2F7/E2F8, located on chromosome 7, are transcription factors responsible for protein coding cell cycle regulation. Together, they are essential for the development of an intact, organized, and functional placental structure during embryonic development. While the specific molecular pathways remain unknown, researchers have used placental and fetal lineage specific cre mice to determine the functions of the synergistic E2F7 and E2F8 genes. Knockout mice depleted of E2F7 and E2F8 show abnormal trophoblastic proliferation accompanied by advanced cellular apoptosis. Phenotypically, the placenta presents with disruptions in cellular architecture, including large clusters of undifferentiated trophoblastic cells, which have failed to invade the maternal decidua. E2F7 and E2F8 proteins can function as repressors independently of DP interaction. They are unique in having a duplicated conserved E2F-like DNA-binding domain and in lacking a DP1,2-dimerization domain. They also appear to play a role in angiogenesis through the activation of vascular endothelial growth factor A. Using zebrafish, severe vascular defects of the head and somatic vessels were discovered when animals were depleted of E2F7 and E2F8. Antagonized by E2F3a, a transcriptional program has been discovered that functions through the coordination of multiple genes in the E2F family in order to ensure proper development of the placenta.
Transcriptional targets
Cell cycle: CCNA1,2, CCND1,2, CDK2, MYB, E2F1,2,3, TFDP1, CDC25A
Negative regulators: E2F7, RB1, TP107, TP21
Checkpoints: TP53, BRCA1,2, BUB1
Apoptosis: TP73, APAF1, CASP3,7,8, MAP3K5,14
Nucleotide synthesis: thymidine kinase (tk), thymidylate synthase (ts), DHFR
DNA repair: BARD1, RAD51, UNG1,2, FANCA, FANCC, FANCJ
DNA replication: PCNA, histone H2A, DNA pol and , RPA1,2,3, CDC6, MCM2,3,4,5,6,7
See also
Transcription factor DP
Type 3c (Pancreatogenic) Diabetes
References
External links
Drosophila E2F transcription factor - The Interactive Fly
Drosophila E2F transcription factor 2 - The Interactive Fly
Cell cycle
Transcription factors | E2F | [
"Chemistry",
"Biology"
] | 2,070 | [
"Gene expression",
"Signal transduction",
"Cellular processes",
"Induced stem cells",
"Cell cycle",
"Transcription factors"
] |
2,228,245 | https://en.wikipedia.org/wiki/Quadrupole%20ion%20trap | In experimental physics, a quadrupole ion trap or Paul trap is a type of ion trap that uses dynamic electric fields to trap charged particles. These devices are also called radio frequency (RF) traps; the name honors Wolfgang Paul, who invented the device and shared the Nobel Prize in Physics in 1989 for this work. It is used as a component of a mass spectrometer or a trapped ion quantum computer.
Overview
A charged particle, such as an atomic or molecular ion, feels a force from an electric field. It is not possible to create a static configuration of electric fields that traps the charged particle in all three directions (this restriction is known as Earnshaw's theorem). It is possible, however, to create an average confining force in all three directions by use of electric fields that change in time. To do so, the confining and anti-confining directions are switched at a rate faster than it takes the particle to escape the trap. The traps are also called "radio frequency" traps because the switching rate is often at a radio frequency.
The quadrupole is the simplest electric field geometry used in such traps, though more complicated geometries are possible for specialized devices. The electric fields are generated from electric potentials on metal electrodes. A pure quadrupole is created from hyperbolic electrodes, though cylindrical electrodes are often used for ease of fabrication. Microfabricated ion traps exist where the electrodes lie in a plane with the trapping region above the plane. There are two main classes of traps, depending on whether the oscillating field provides confinement in three or two dimensions. In the two-dimension case (a so-called "linear RF trap"), confinement in the third direction is provided by static electric fields.
Theory
The 3D trap itself generally consists of two hyperbolic metal electrodes with their foci facing each other and a hyperbolic ring electrode halfway between the other two electrodes. The ions are trapped in the space between these three electrodes by AC (oscillating) and DC (static) electric fields. The AC radio frequency voltage oscillates between the two hyperbolic metal end cap electrodes if ion excitation is desired; the driving AC voltage is applied to the ring electrode. The ions are first pulled up and down axially while being pushed in radially. The ions are then pulled out radially and pushed in axially (from the top and bottom). In this way the ions move in a complex motion that generally involves the cloud of ions being long and narrow and then short and wide, back and forth, oscillating between the two states. Since the mid-1980s most 3D traps (Paul traps) have used ~1 mTorr of helium. The use of damping gas and the mass-selective instability mode developed by Stafford et al. led to the first commercial 3D ion traps.
The quadrupole ion trap has two main configurations: the three-dimensional form described above and the linear form made of 4 parallel electrodes. A simplified rectilinear configuration is also used. The advantage of the linear design is its greater storage capacity (in particular of Doppler-cooled ions) and its simplicity, but this leaves a particular constraint on its modeling. The Paul trap is designed to create a saddle-shaped field to trap a charged ion, but with a quadrupole, this saddle-shaped electric field cannot be rotated about an ion in the centre. It can only 'flap' the field up and down. For this reason, the motions of a single ion in the trap are described by Mathieu equations, which can only be solved numerically by computer simulations.
The intuitive explanation and lowest order approximation is the same as strong focusing in accelerator physics. Since the field affects the acceleration, the position lags behind (to lowest order by half a period). So the particles are at defocused positions when the field is focusing and vice versa. Being farther from center, they experience a stronger field when the field is focusing than when it is defocusing.
Equations of motion
Ions in a quadrupole field experience restoring forces that drive them back toward the center of the trap. The motion of the ions in the field is described by solutions to the Mathieu equation. When written for ion motion in a trap, the equation is
$$\frac{d^2u}{d\xi^2} + \left[a_u - 2q_u\cos(2\xi)\right]u = 0,$$
where $u$ represents the x, y and z coordinates, $\xi$ is a dimensionless variable given by $\xi = \Omega t/2$, and $a_u$ and $q_u$ are dimensionless trapping parameters. The parameter $\Omega$ is the radial frequency of the potential applied to the ring electrode. By using the chain rule, it can be shown that
$$\frac{d^2u}{dt^2} = \frac{\Omega^2}{4}\frac{d^2u}{d\xi^2}.$$
Substituting this into the Mathieu equation yields
$$\frac{4}{\Omega^2}\frac{d^2u}{dt^2} + \left[a_u - 2q_u\cos(\Omega t)\right]u = 0.$$
Multiplying by m and rearranging terms shows us that
$$m\frac{d^2u}{dt^2} = -\frac{m\Omega^2}{4}\left[a_u - 2q_u\cos(\Omega t)\right]u.$$
By Newton's laws of motion, the above equation represents the force on the ion. This equation can be exactly solved using the Floquet theorem or the standard techniques of multiple scale analysis. The particle dynamics and time averaged density of charged particles in a Paul trap can also be obtained by the concept of ponderomotive force.
The forces in each dimension are not coupled, thus the force acting on an ion in, for example, the x dimension is
Here, $\phi$ is the quadrupolar potential, given by
$$\phi = \frac{\phi_0}{r_0^2}\left(\lambda x^2 + \sigma y^2 + \gamma z^2\right),$$
where $\phi_0$ is the applied electric potential, $\lambda$, $\sigma$, and $\gamma$ are weighting factors, and $r_0$ is a size parameter constant. In order to satisfy Laplace's equation, $\nabla^2\phi = 0$, it can be shown that
$$\lambda + \sigma + \gamma = 0.$$
For an ion trap, $\lambda = \sigma = 1$ and $\gamma = -2$, and for a quadrupole mass filter, $\lambda = -\sigma = 1$ and $\gamma = 0$.
Transforming Equation 6 into a cylindrical coordinate system with $x = r\cos\theta$, $y = r\sin\theta$, and $z = z$, and applying the Pythagorean trigonometric identity gives
The applied electric potential is a combination of RF and DC given by
$$\phi_0 = U + V\cos(\omega t),$$
where $\omega = 2\pi\nu$ and $\nu$ is the applied frequency in hertz.
Substituting into with gives
Substituting Equation 9 into Equation 5 leads to
Comparing terms on the right hand side of Equation 1 and Equation 10 leads to
and
Further,
and
The trapping of ions can be understood in terms of stability regions in $a$–$q$ space. The boundaries of the shaded regions in the figure are the boundaries of stability in the two directions (also known as boundaries of bands). The domain of overlap of the two regions is the trapping domain. For calculation of these boundaries and similar diagrams see Müller-Kirsten.
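As a rough illustration of these stability regions, the dimensionless Mathieu equation can be integrated numerically for a trial pair of trapping parameters, and the motion classified as stable if the amplitude stays bounded over many RF periods. The sketch below is in Python; the function names, the sample (a, q) values and the boundedness threshold are illustrative choices, not values from this article.

```python
# Minimal sketch: integrate the dimensionless Mathieu equation
#   d^2u/dxi^2 + (a - 2*q*cos(2*xi)) * u = 0
# and call the motion "stable" if |u| stays below an arbitrary bound
# over many RF periods.  Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_rhs(xi, y, a, q):
    u, dudxi = y
    return [dudxi, -(a - 2.0 * q * np.cos(2.0 * xi)) * u]

def is_stable(a, q, n_periods=50, bound=1e3):
    xi_end = np.pi * n_periods            # one period of cos(2*xi) is pi
    sol = solve_ivp(mathieu_rhs, (0.0, xi_end), [1.0, 0.0],
                    args=(a, q), max_step=0.05)
    return bool(np.max(np.abs(sol.y[0])) < bound)

if __name__ == "__main__":
    # One point inside the lowest stability region, one outside it.
    for a, q in [(0.0, 0.3), (0.0, 1.5)]:
        print(f"a={a}, q={q}: {'stable' if is_stable(a, q) else 'unstable'}")
```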
Linear ion trap
The linear ion trap uses a set of quadrupole rods to confine ions radially and a static electrical potential on-end electrodes to confine the ions axially. The linear form of the trap can be used as a selective mass filter, or as an actual trap by creating a potential well for the ions along the axis of the electrodes. Advantages of the linear trap design are increased ion storage capacity, faster scan times, and simplicity of construction (although quadrupole rod alignment is critical, adding a quality control constraint to their production. This constraint is additionally present in the machining requirements of the 3D trap).
Cylindrical ion trap
The cylindrical ion trap (CIT) emerged as a derivative of the quadrupole ion trap with simpler geometric structure in which the electrodes are arranged in a cylindrical shape rather than the traditional hyperbolic or linear configuration.
The cylindrical ion trap consists of a central cylindrical electrode (ring electrode) and two end-cap electrodes. By applying a combination of static (DC) and oscillating (RF) voltages to these electrodes, a three-dimensional quadrupole field is generated. The ions are trapped in the center of this field due to the restoring forces created by the electric fields, which confine the ions along the axis and radial directions.
Ion traps with a cylindrical rather than a hyperbolic ring electrode have been developed and microfabricated in arrays to develop miniature mass spectrometers for chemical detection in medical diagnosis and other fields. However, the reduction in ion storage volumes remains a problem in small ion traps.
Planar ion trap
Quadrupole traps can also be "unfolded" to create the same effect using a set of planar electrodes. This trap geometry can be made using standard micro-fabrication techniques, including the top metal layer in a standard CMOS microelectronics process, and is a key technology for scaling trapped ion quantum computers to useful numbers of qubits.
Combined radio frequency trap
A combined radio frequency trap is a combination of a Paul ion trap and a Penning trap. One of the main bottlenecks of a quadrupole ion trap is that it can confine only a single charged species, or multiple species with similar masses. But in certain applications like antihydrogen production it is important to confine two species of charged particles of widely varying masses. To achieve this objective, a uniform magnetic field is added in the axial direction of the quadrupole ion trap.
Digital ion trap
The digital ion trap (DIT) is a quadrupole ion trap (linear or 3D) that differs from conventional traps by the driving waveform. A DIT is driven by digital signals, typically rectangular waveforms that are generated by switching rapidly between discrete voltage levels. Major advantages of the DIT are its versatility and virtually unlimited mass range. The digital ion trap has been developed mainly as a mass analyzer.
See also
Quadrupole magnet
References
Bibliography
W. Paul, Electromagnetic Traps for Charged and Neutral Particles. Taken from Proceedings of the International School of Physics "Enrico Fermi", Course CXVIII, "Laser Manipulation of Atoms and Ions" (North Holland, New York, 1992), pp. 497–517
R.I. Thompson, T.J. Harmon, and M.G. Ball, The rotating-saddle trap: a mechanical analogy to RF-electric-quadrupole ion trapping? (Canadian Journal of Physics, 2002: 80 12) p. 1433–1448
M. Welling, H.A. Schuessler, R.I. Thompson, H. Walther Ion/Molecule Reactions, Mass Spectrometry and Optical Spectroscopy in a Linear Ion Trap (International Journal of Mass Spectrometry and Ion Processes, 1998: 172) p. 95-114.
K. Shah and H. Ramachandran, Analytic, nonlinearly exact solutions for an rf confined plasma, Phys. Plasmas 15, 062303 (2008)
Pradip K. Ghosh, Ion Traps, International Series of Monographs in Physics, Oxford University Press (1995), https://web.archive.org/web/20111102190045/http://www.oup.com/us/catalog/general/subject/Physics/AtomicMolecularOpticalphysics/?view=usa
Patents
External links
Nobel Prize in Physics 1989
Mass spectrometry
Measuring instruments
German inventions
Particle traps | Quadrupole ion trap | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,183 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Measuring instruments",
"Particle traps",
"Mass spectrometry",
"Matter"
] |
2,229,292 | https://en.wikipedia.org/wiki/Stirling%20numbers%20of%20the%20second%20kind | In mathematics, particularly in combinatorics, a Stirling number of the second kind (or Stirling partition number) is the number of ways to partition a set of n objects into k non-empty subsets and is denoted by or . Stirling numbers of the second kind occur in the field of mathematics called combinatorics and the study of partitions. They are named after James Stirling.
The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the second kind. Identities linking the two kinds appear in the article on Stirling numbers.
Definition
The Stirling numbers of the second kind, written $S(n,k)$ or $\left\{{n \atop k}\right\}$ or with other notations, count the number of ways to partition a set of $n$ labelled objects into $k$ nonempty unlabelled subsets. Equivalently, they count the number of different equivalence relations with precisely $k$ equivalence classes that can be defined on an $n$ element set. In fact, there is a bijection between the set of partitions and the set of equivalence relations on a given set. Obviously,
$$\left\{{n \atop n}\right\} = 1 \quad\text{for } n \geq 0, \qquad \left\{{n \atop 1}\right\} = 1 \quad\text{for } n \geq 1,$$
as the only way to partition an n-element set into n parts is to put each element of the set into its own part, and the only way to partition a nonempty set into one part is to put all of the elements in the same part. Unlike Stirling numbers of the first kind, they can be calculated using a one-sum formula:
$$\left\{{n \atop k}\right\} = \frac{1}{k!}\sum_{j=0}^{k}(-1)^{j}\binom{k}{j}(k-j)^{n}.$$
The Stirling numbers of the second kind may also be characterized as the numbers that arise when one expresses powers of an indeterminate x in terms of the falling factorials $(x)_k = x(x-1)\cdots(x-k+1)$:
$$x^{n} = \sum_{k=0}^{n}\left\{{n \atop k}\right\}(x)_k.$$
(In particular, (x)0 = 1 because it is an empty product.)
Stirling numbers of the second kind satisfy the relation
Notation
Various notations have been used for Stirling numbers of the second kind. The brace notation was used by Imanuel Marx and Antonio Salmeri in 1962 for variants of these numbers (Antonio Salmeri, Introduzione alla teoria dei coefficienti fattoriali, Giornale di Matematiche di Battaglini 90 (1962), pp. 44–54). This led Knuth to use it, as shown here, in the first volume of The Art of Computer Programming (Donald E. Knuth, Fundamental Algorithms, Reading, Mass.: Addison–Wesley, 1968). According to the third edition of The Art of Computer Programming, this notation was also used earlier by Jovan Karamata in 1935 (Jovan Karamata, Théorèmes sur la sommabilité exponentielle et d'autres sommabilités s'y rattachant, Mathematica (Cluj) 9 (1935), pp. 164–178). The notation S(n, k) was used by Richard Stanley in his book Enumerative Combinatorics and also, much earlier, by many other writers.
The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources.
Relation to Bell numbers
Since the Stirling number $\left\{{n \atop k}\right\}$ counts set partitions of an n-element set into k parts, the sum
$$B_n = \sum_{k=0}^{n}\left\{{n \atop k}\right\}$$
over all values of k is the total number of partitions of a set with n members. This number is known as the nth Bell number.
Analogously, the ordered Bell numbers can be computed from the Stirling numbers of the second kind via
$$a(n) = \sum_{k=0}^{n} k!\left\{{n \atop k}\right\}.$$
Table of values
Below is a triangular array of values for the Stirling numbers of the second kind (0 ≤ k ≤ n ≤ 10):

  n \ k   0   1     2      3      4      5      6     7    8   9  10
  0       1
  1       0   1
  2       0   1     1
  3       0   1     3      1
  4       0   1     7      6      1
  5       0   1    15     25     10      1
  6       0   1    31     90     65     15      1
  7       0   1    63    301    350    140     21     1
  8       0   1   127    966   1701   1050    266    28    1
  9       0   1   255   3025   7770   6951   2646   462   36   1
  10      0   1   511   9330  34105  42525  22827  5880  750  45   1

As with the binomial coefficients, this table could be extended to k > n, but the entries would all be 0.
Properties
Recurrence relation
Stirling numbers of the second kind obey the recurrence relation
$$\left\{{n+1 \atop k}\right\} = k\left\{{n \atop k}\right\} + \left\{{n \atop k-1}\right\} \qquad\text{for } k \geq 1,$$
with initial conditions
$$\left\{{0 \atop 0}\right\} = 1 \qquad\text{and}\qquad \left\{{n \atop 0}\right\} = \left\{{0 \atop n}\right\} = 0 \quad\text{for } n > 0.$$
For instance, the number 25 in column k = 3 and row n = 5 is given by 25 = 7 + (3×6), where 7 is the number above and to the left of 25, 6 is the number above 25 and 3 is the column containing the 6.
To prove this recurrence, observe that a partition of the objects into k nonempty subsets either contains the -th object as a singleton or it does not. The number of ways that the singleton is one of the subsets is given by
since we must partition the remaining objects into the available subsets. In the other case the -th object belongs to a subset containing other objects. The number of ways is given by
since we partition all objects other than the -th into k subsets, and then we are left with k choices for inserting object . Summing these two values gives the desired result.
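A short sketch of this recurrence (in Python; the function name is an illustrative choice) builds the triangle row by row, reproduces the worked example above, and — as noted in the section on Bell numbers — recovers the Bell numbers as row sums.

```python
# Build S(n, k) from the recurrence S(n+1, k) = k*S(n, k) + S(n, k-1),
# starting from S(0, 0) = 1.
def stirling2_table(n_max):
    S = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    S[0][0] = 1
    for n in range(1, n_max + 1):
        for k in range(1, n + 1):
            S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]
    return S

S = stirling2_table(10)
assert S[5][3] == 25 and S[4][2] == 7 and S[4][3] == 6   # 25 = 7 + 3*6
print([sum(row) for row in S[:6]])   # Bell numbers: [1, 1, 2, 5, 15, 52]
```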
Another recurrence relation is given by
which follows from evaluating at .
Simple identities
Some simple identities include
$$\left\{{n \atop n-1}\right\} = \binom{n}{2}.$$
This is because dividing n elements into n − 1 sets necessarily means dividing it into one set of size 2 and n − 2 sets of size 1. Therefore we need only pick those two elements;
and
$$\left\{{n \atop 2}\right\} = 2^{n-1} - 1.$$
To see this, first note that there are 2^n ordered pairs of complementary subsets A and B. In one case, A is empty, and in another B is empty, so 2^n − 2 ordered pairs of subsets remain. Finally, since we want unordered pairs rather than ordered pairs we divide this last number by 2, giving the result above.
Another explicit expansion of the recurrence-relation gives identities in the spirit of the above example.
Identities
The table in section 6.1 of Concrete Mathematics provides a plethora of generalized forms of finite sums involving the Stirling numbers. Several particular finite sums relevant to this article include
Explicit formula
The Stirling numbers of the second kind are given by the explicit formula:
$$\left\{{n \atop k}\right\} = \frac{1}{k!}\sum_{j=0}^{k}(-1)^{j}\binom{k}{j}(k-j)^{n} = \sum_{j=0}^{k}\frac{(-1)^{k-j}\,j^{n}}{(k-j)!\,j!}.$$
This can be derived by using inclusion–exclusion to count the surjections from an n-set to a k-set and using the fact that the number of such surjections is $k!\left\{{n \atop k}\right\}$.
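A brief sketch of the explicit formula just given (Python; the function name is illustrative), cross-checked against values obtained from the recurrence above:

```python
# S(n, k) = (1/k!) * sum_{j=0}^{k} (-1)^j * C(k, j) * (k - j)^n
from math import comb, factorial

def stirling2_explicit(n, k):
    total = sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1))
    return total // factorial(k)   # the sum is always divisible by k!

assert stirling2_explicit(5, 3) == 25
assert stirling2_explicit(10, 5) == 42525
```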
Additionally, this formula is a special case of the kth forward difference of the monomial $x^n$ evaluated at x = 0:
$$\left\{{n \atop k}\right\} = \frac{1}{k!}\,\Delta^{k} x^{n}\Big|_{x=0}.$$
Because the Bernoulli polynomials may be written in terms of these forward differences, one immediately obtains a relation in the Bernoulli numbers:
The evaluation of the incomplete exponential Bell polynomial Bn,k(x1, x2, ...) on the sequence of ones equals a Stirling number of the second kind:
$$B_{n,k}(1,1,\ldots,1) = \left\{{n \atop k}\right\}.$$
Another explicit formula given in the NIST Handbook of Mathematical Functions is
Parity
The parity of a Stirling number of the second kind is same as the parity of a related binomial coefficient:
where
This relation is specified by mapping n and k coordinates onto the Sierpiński triangle.
More directly, let two sets contain positions of 1's in binary representations of results of respective expressions:
One can mimic a bitwise AND operation by intersecting these two sets:
to obtain the parity of a Stirling number of the second kind in O(1) time. In pseudocode:
where is the Iverson bracket.
The parity of a central Stirling number of the second kind is odd if and only if is a fibbinary number, a number whose binary representation has no two consecutive 1s.
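The O(1) bit-manipulation method described above is not reproduced here; instead, the following sketch (Python, with illustrative names) checks the parity statements directly, computing S(n, k) mod 2 from the recurrence and testing the fibbinary criterion for the central numbers over a small range.

```python
# Direct check of the parity statement for central Stirling numbers:
# S(2n, n) is odd exactly when n is fibbinary (no two consecutive 1 bits).
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == 0 and k == 0:
        return 1
    if k <= 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def is_fibbinary(n):
    return n & (n >> 1) == 0

for n in range(1, 9):
    assert (stirling2(2 * n, n) % 2 == 1) == is_fibbinary(n)
```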
Generating functions
For a fixed integer n, the ordinary generating function for Stirling numbers of the second kind is given by
$$\sum_{k=0}^{n}\left\{{n \atop k}\right\}x^{k} = T_{n}(x),$$
where $T_n(x)$ are Touchard polynomials. If one sums the Stirling numbers against the falling factorial instead, one can show the following identities, among others:
where are Touchard polynomials. If one sums the Stirling numbers against the falling factorial instead, one can show the following identities, among others:
and
which has special case
For a fixed integer k, the Stirling numbers of the second kind have rational ordinary generating function
$$\sum_{n=k}^{\infty}\left\{{n \atop k}\right\}x^{n} = \frac{x^{k}}{(1-x)(1-2x)\cdots(1-kx)}$$
and have an exponential generating function given by
$$\sum_{n=k}^{\infty}\left\{{n \atop k}\right\}\frac{x^{n}}{n!} = \frac{(e^{x}-1)^{k}}{k!}.$$
A mixed bivariate generating function for the Stirling numbers of the second kind is
$$\sum_{k=0}^{\infty}\sum_{n=k}^{\infty}\left\{{n \atop k}\right\}\frac{x^{n}}{n!}y^{k} = e^{y(e^{x}-1)}.$$
Lower and upper bounds
If and , then
Asymptotic approximation
For fixed value of the asymptotic value of the Stirling numbers of the second kind as is given by
If (where o denotes the little o notation) then
A uniformly valid approximation also exists: for all such that , one has
where , and is the unique solution to . Relative error is bounded by about .
Unimodality
For fixed , is unimodal, that is, the sequence increases and then decreases. The maximum is attained for at most two consecutive values of k. That is, there is an integer such that
Looking at the table of values above, the first few values for are
When is large
and the maximum value of the Stirling number can be approximated with
Applications
Moments of the Poisson distribution
If X is a random variable with a Poisson distribution with expected value λ, then its n-th moment is
$$E(X^{n}) = \sum_{k=0}^{n}\left\{{n \atop k}\right\}\lambda^{k}.$$
In particular, the nth moment of the Poisson distribution with expected value 1 is precisely the number of partitions of a set of size n, i.e., it is the nth Bell number (this fact is Dobiński's formula).
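A numerical sanity check of these statements is sketched below (Python; the truncation length and the test value of λ are arbitrary illustrative choices): the moment is evaluated by truncating the series E[X^n] = e^{−λ} Σ_j λ^j j^n / j! and compared with the Stirling-number sum, which for λ = 1 reproduces the Bell numbers.

```python
# Check E[X^n] = sum_k S(n, k) * lam^k for a Poisson(lam) variable;
# with lam = 1 the moments are the Bell numbers (Dobinski's formula).
from math import comb, exp, factorial

def stirling2(n, k):
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def poisson_moment(n, lam, terms=120):
    total, weight = 0.0, exp(-lam)        # weight = exp(-lam) * lam^j / j!
    for j in range(terms):
        total += weight * j ** n
        weight *= lam / (j + 1)
    return total

lam = 2.5
for n in range(1, 8):
    closed_form = sum(stirling2(n, k) * lam ** k for k in range(n + 1))
    assert abs(poisson_moment(n, lam) - closed_form) < 1e-6 * closed_form

print([round(poisson_moment(n, 1.0)) for n in range(1, 6)])   # [1, 2, 5, 15, 52]
```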
Moments of fixed points of random permutations
Let the random variable X be the number of fixed points of a uniformly distributed random permutation of a finite set of size m. Then the nth moment of X is
$$E(X^{n}) = \sum_{k=0}^{m}\left\{{n \atop k}\right\}.$$
Note: The upper bound of summation is m, not n.
In other words, the nth moment of this probability distribution is the number of partitions of a set of size n into no more than m parts.
This is proved in the article on random permutation statistics, although the notation is a bit different.
Rhyming schemes
The Stirling numbers of the second kind can represent the total number of rhyme schemes for a poem of n lines. gives the number of possible rhyming schemes for n lines using k unique rhyming syllables. As an example, for a poem of 3 lines, there is 1 rhyme scheme using just one rhyme (aaa), 3 rhyme schemes using two rhymes (aab, aba, abb), and 1 rhyme scheme using three rhymes (abc).
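A small enumeration (a Python sketch with illustrative names) reproduces these counts by generating rhyme schemes as restricted growth strings — each line's rhyme label is either one already used or the next unused label — and grouping them by the number of distinct rhymes.

```python
# Enumerate rhyme schemes for an n-line poem and count them by the
# number of distinct rhyme sounds used.
from collections import Counter

def rhyme_schemes(n):
    schemes = [[0]]                        # the first line always gets rhyme 'a' (label 0)
    for _ in range(n - 1):
        schemes = [s + [r] for s in schemes for r in range(max(s) + 2)]
    return schemes

counts = Counter(max(s) + 1 for s in rhyme_schemes(3))
print(dict(sorted(counts.items())))        # {1: 1, 2: 3, 3: 1} -> aaa; aab, aba, abb; abc
```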
Variants
r-Stirling numbers of the second kind
The r-Stirling number of the second kind counts the number of partitions of a set of n objects into k non-empty disjoint subsets, such that the first r elements are in distinct subsets. These numbers satisfy the recurrence relation
Some combinatorial identities and a connection between these numbers and context-free grammars can be found in
Associated Stirling numbers of the second kind
An r-associated Stirling number of the second kind is the number of ways to partition a set of n objects into k subsets, with each subset containing at least r elements. It is denoted by and obeys the recurrence relation
The 2-associated numbers appear elsewhere as "Ward numbers" and as the magnitudes of the coefficients of Mahler polynomials.
Reduced Stirling numbers of the second kind
Denote the n objects to partition by the integers 1, 2, ..., n. Define the reduced Stirling numbers of the second kind, denoted , to be the number of ways to partition the integers 1, 2, ..., n into k nonempty subsets such that all elements in each subset have pairwise distance at least d. That is, for any integers i and j in a given subset, it is required that . It has been shown that these numbers satisfy
$$\left\{{n \atop k}\right\}_{d} = \left\{{n-d+1 \atop k-d+1}\right\}, \qquad n \geq k \geq d$$
(hence the name "reduced"). Observe (both by definition and by the reduction formula) that $\left\{{n \atop k}\right\}_{1} = \left\{{n \atop k}\right\}$, the familiar Stirling numbers of the second kind.
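A brute-force sketch (Python, illustrative names) can check the reduction formula quoted above for small parameters by enumerating set partitions directly and discarding any partition containing a block whose elements are closer than d.

```python
# Count partitions of {1, ..., n} into k blocks whose elements are
# pairwise at least d apart, and compare with S(n-d+1, k-d+1).
from itertools import product
from math import comb, factorial

def stirling2(n, k):
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def reduced_stirling(n, k, d):
    count = 0
    for labels in product(range(k), repeat=n):   # labels[i] = block of element i+1
        # keep only canonical labelings (restricted growth strings) so that
        # each set partition is counted exactly once
        if any(labels[i] > max(labels[:i], default=-1) + 1 for i in range(n)):
            continue
        if len(set(labels)) != k:
            continue
        if any(labels[i] == labels[j] and i - j < d
               for i in range(n) for j in range(i)):
            continue
        count += 1
    return count

for n in range(2, 6):
    for k in range(1, n + 1):
        for d in range(1, k + 1):
            assert reduced_stirling(n, k, d) == stirling2(n - d + 1, k - d + 1)
```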
See also
Stirling number
Stirling numbers of the first kind
Bell number – the number of partitions of a set with n members
Stirling polynomials
Twelvefold way
References
Calculator for Stirling Numbers of the Second Kind
Set Partitions: Stirling Numbers
Permutations
Factorial and binomial topics
Triangles of numbers
Operations on numbers
pl:Liczby Stirlinga#Liczby Stirlinga II rodzaju | Stirling numbers of the second kind | [
"Mathematics"
] | 2,350 | [
"Functions and mappings",
"Factorial and binomial topics",
"Permutations",
"Mathematical objects",
"Combinatorics",
"Arithmetic",
"Mathematical relations",
"Triangles of numbers",
"Operations on numbers"
] |
2,229,421 | https://en.wikipedia.org/wiki/Diisobutylaluminium%20hydride | Diisobutylaluminium hydride (DIBALH, DIBAL, DIBAL-H or DIBAH) is a reducing agent with the formula (i-Bu2AlH)2, where i-Bu represents isobutyl (-CH2CH(CH3)2). This organoaluminium compound is a reagent in organic synthesis.
Properties
Like most organoaluminium compounds, the compound's structure is probably more complex than that suggested by its empirical formula. A variety of techniques, not including X-ray crystallography, suggest that the compound exists as a dimer and a trimer, consisting of tetrahedral aluminium centers sharing bridging hydride ligands. Hydrides are small and, for aluminium derivatives, are highly basic, thus they bridge in preference to the alkyl groups.
DIBAL can be prepared by heating triisobutylaluminium (itself a dimer) to induce β-hydride elimination:
(i-Bu3Al)2 → (i-Bu2AlH)2 + 2 (CH3)2C=CH2
Although DIBAL can be purchased commercially as a colorless liquid, it is more commonly purchased and dispensed as a solution in an organic solvent such as toluene or hexane.
Use in organic synthesis
DIBAL reacts slowly with electron-poor compounds and more quickly with electron-rich compounds. Thus, it is an electrophilic reducing agent whereas LiAlH4 can be thought of as a nucleophilic reducing agent.
DIBAL is useful in organic synthesis for a variety of reductions, including converting carboxylic acids, their derivatives, and nitriles to aldehydes. DIBAL efficiently reduces α-β unsaturated esters to the corresponding allylic alcohol. By contrast, LiAlH4 reduces esters and acyl chlorides to primary alcohols, and nitriles to primary amines [using Fieser work-up procedure]. Similarly, DIBAL reduces lactones to hemiacetals (the equivalent of an aldehyde).
Although DIBAL reliably reduces nitriles to aldehydes, the reduction of esters to aldehydes is infamous for often producing large quantities of alcohols. Nevertheless, it is possible to avoid these unwanted byproducts through careful control of the reaction conditions using continuous flow chemistry.
DIBALH was investigated originally as a cocatalyst for the polymerization of alkenes.
Safety
DIBAL, like most alkylaluminium compounds, reacts violently with air and water, potentially leading to explosion.
References
External links
Isobutyl compounds
Metal hydrides
Organoaluminium compounds
Reducing agents | Diisobutylaluminium hydride | [
"Chemistry"
] | 572 | [
"Inorganic compounds",
"Metal hydrides",
"Redox",
"Reducing agents"
] |
2,229,780 | https://en.wikipedia.org/wiki/Zig-zag%20bridge | A zig-zag bridge is a pedestrian bridge composed of short segments, each set at an angle relative to its neighbors and usually with an alternating right and left turn required when traveling across the bridge. It is used in standard crossings for structural stability; and in traditional and contemporary Asian and Western landscape design across water gardens.
When constructed of wood, each segment is formed from planks and is supported by posts. When constructed of stone, the bridge will use short or long rectilinear slabs set upon stone footings.
Garden and ceremonial bridge
A zig-zag bridge is often seen in the Chinese garden, Japanese garden, and Zen rock garden. It may be made of stone slabs or planks as part of a pond design and is frequently seen in rustic gardens. It is also used in high art modern fountain gardens, often in public urban park and botanic garden landscapes.
The objective in employing such a bridge, constructed according to Zen philosophy and teachings, is to focus the walker's attention to the mindfulness of the current place and time moment – "being here, now". As it often has no railings, it is quite possible for an inattentive walker to simply fall off an end into the water.
The zig-zag of paths and bridges also follows a principle of Chinese Feng Shui.
Standard bridge
The post and plank version has an advantage when employed as a crossing of a muddy bottom or marsh: It is structurally stable, where a straight bridge might tend to tip due to the posts moving in the soft mud. Each segment of walkway mutually supports the next from twisting and tipping by being securely fastened to it. This is the same advantage possessed by a zig-zag split rail fence.
See also
Footbridge
Moon bridge
S bridge
Step-stone bridge
References
External links
Footbridges
Stone bridges
Garden features
Stonemasonry
Chinese gardening styles
Japanese style of gardening
Bridges | Zig-zag bridge | [
"Engineering"
] | 386 | [
"Structural engineering",
"Stonemasonry",
"Construction",
"Bridges"
] |
2,230,309 | https://en.wikipedia.org/wiki/List%20of%20computer%20algebra%20systems | The following tables provide a comparison of computer algebra systems (CAS). A CAS is a package comprising a set of algorithms for performing symbolic manipulations on algebraic objects, a language to implement them, and an environment in which to use the language. A CAS may include a user interface and graphics capability; and to be effective may require a large library of algorithms, efficient data structures and a fast kernel.
General
These computer algebra systems are sometimes combined with "front end" programs that provide a better user interface, such as the general-purpose GNU TeXmacs.
Functionality
Below is a summary of significantly developed symbolic functionality in each of the systems.
Those which do not "edit equations" may have a GUI, plotting, ASCII graphic formulae and math font printing. The ability to generate plaintext files is also a sought-after feature because it allows a work to be understood by people who do not have a computer algebra system installed.
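As a concrete illustration of the kind of symbolic functionality summarized in this section — differentiation, integration, limits, equation solving and series expansion — the short sketch below uses SymPy, an open-source Python computer algebra system; the outputs quoted in the comments are what SymPy is expected to print, not values taken from this article.

```python
# Typical computer algebra operations, shown with SymPy.
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x) * sp.exp(x)

print(sp.diff(expr, x))                    # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.integrate(expr, x))               # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
print(sp.limit(sp.sin(x) / x, x, 0))       # 1
print(sp.solve(x**2 - 2, x))               # [-sqrt(2), sqrt(2)]
print(sp.series(sp.cos(x), x, 0, 6))       # 1 - x**2/2 + x**4/24 + O(x**6)
```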
Operating system support
The software can run under their respective operating systems natively without emulation. Some systems must be compiled first using an appropriate compiler for the source language and target platform. For some platforms, only older releases of the software may be available.
Graphing calculators
Some graphing calculators have CAS features.
See also
:Category:Computer algebra systems
Comparison of numerical-analysis software
Comparison of statistical packages
List of information graphics software
List of numerical-analysis software
List of numerical libraries
List of statistical software
Mathematical software
Web-based simulation
References
External links
Comparisons of mathematical software
Mathematics-related lists | List of computer algebra systems | [
"Mathematics"
] | 323 | [
"Computer algebra systems",
"Comparisons of mathematical software",
"Mathematical software"
] |
2,230,778 | https://en.wikipedia.org/wiki/Supersonic%20fracture | Supersonic fractures are fractures where the fracture propagation velocity is higher than the speed of sound in the material. This phenomenon was first discovered by scientists from the Max Planck Institute for Metals Research in Stuttgart (Markus J. Buehler and Huajian Gao) and IBM Almaden Research Center in San Jose, California (Farid F. Abraham).
The issues of intersonic and supersonic fracture have become a frontier of dynamic fracture mechanics. The work of Burridge initiated the exploration of intersonic crack growth, in which the crack tip velocity V lies between the shear wave speed c_s and the longitudinal wave speed c_l of the material.
Supersonic fracture was a phenomenon totally unexplained by the classical theories of fracture. Molecular dynamics simulations by the group around Abraham and Gao have shown the existence of intersonic mode I and supersonic mode II cracks. This motivated a continuum mechanics analysis of supersonic mode III cracks by Yang. Recent progress in the theoretical understanding of hyperelasticity in dynamic fracture has shown that supersonic crack propagation can only be understood by introducing a new length scale, called χ, which governs the process of energy transport near a crack tip. The crack dynamics is completely dominated by material properties inside a zone surrounding the crack tip with characteristic size equal to χ. When the material inside this characteristic zone is stiffened due to hyperelastic properties, cracks propagate faster than the longitudinal wave speed. The research group of Gao has used this concept to simulate the Broberg problem of crack propagation inside a stiff strip embedded in a soft elastic matrix. These simulations confirmed the existence of an energy characteristic length. This study also had implications for dynamic crack propagation in composite materials. If the characteristic size of the composite microstructure is larger than the energy characteristic length χ, models that homogenize the materials into an effective continuum would be in significant error. The challenge arises of designing experiments and interpretative simulations to verify the energy characteristic length. Confirmation of the concept must be sought in the comparison of experiments on supersonic cracks and the predictions of the simulations and analysis. While much excitement rightly centres on the relatively new activity related to intersonic cracking, an old but interesting possibility remains to be incorporated in the modern work: for an interface between elastically dissimilar materials, crack propagation that is subsonic but exceeds the Rayleigh wave speed has been predicted for at least some combinations of the elastic properties of the two materials.
See also
Characteristic energy length scale
References
Fracture mechanics | Supersonic fracture | [
"Physics",
"Materials_science",
"Engineering"
] | 500 | [
"Structural engineering",
"Fracture mechanics",
"Classical mechanics stubs",
"Classical mechanics",
"Materials science",
"Materials degradation"
] |
2,230,989 | https://en.wikipedia.org/wiki/Space%20Shuttle%20thermal%20protection%20system | The Space Shuttle thermal protection system (TPS) is the barrier that protected the Space Shuttle Orbiter during the extreme heat of atmospheric reentry. A secondary goal was to protect from the heat and cold of space while in orbit.
Materials
The TPS covered essentially the entire orbiter surface, and consisted of seven different materials in varying locations based on amount of required heat protection:
Reinforced carbon–carbon (RCC), used in the nose cap, the chin area between the nose cap and nose landing gear doors, the arrowhead aft of the nose landing gear door, and the wing leading edges. Used where reentry temperature exceeded 1,260 °C.
High-temperature reusable surface insulation (HRSI) tiles, used on the orbiter underside. Made of coated LI-900 silica ceramics. Used where reentry temperature was below 1,260 °C.
Fibrous refractory composite insulation (FRCI) tiles, used to provide improved strength, durability, resistance to coating cracking and weight reduction. Some HRSI tiles were replaced by this type.
Flexible Insulation Blankets (FIB), a quilted, flexible blanket-like surface insulation. Used where reentry temperatures were comparatively low.
Low-temperature Reusable Surface Insulation (LRSI) tiles, formerly used on the upper fuselage, but were mostly replaced by FIB. Used in temperature ranges roughly similar to FIB.
Toughened unipiece fibrous insulation (TUFI) tiles, a stronger, tougher tile which came into use in 1996. Used in high and low temperature areas.
Felt reusable surface insulation (FRSI). White Nomex felt blankets on the upper payload bay doors, portions of the mid fuselage and aft fuselage sides, portions of the upper wing surface and a portion of the OMS/RCS pods. Used where temperatures stayed relatively low.
Each type of TPS had specific heat protection, impact resistance, and weight characteristics, which determined the locations where it was used and the amount used.
The shuttle TPS had three key characteristics that distinguished it from the TPS used on previous spacecraft:
Reusable: Previous spacecraft generally used ablative heat shields which burned off during reentry and so could not be reused. This insulation was robust and reliable, and the single-use nature was appropriate for a single-use vehicle. By contrast, the reusable shuttle required a reusable thermal protection system.
Lightweight: Previous ablative heat shields were very heavy. For example, the ablative heat shield on the Apollo Command Module comprised about 15% of the vehicle weight. The winged shuttle had much more surface area than previous spacecraft, so a lightweight TPS was crucial.
Fragile: The only known technology in the early 1970s with the required thermal and weight characteristics was also so fragile, due to the very low density, that one could easily crush a TPS tile by hand.
Purpose
(Image caption: Discovery's under-wing surfaces are protected by thousands of High-Temperature Reusable Insulation tiles.)
The orbiter's aluminum structure could not withstand significantly elevated temperatures without structural failure.
Aerodynamic heating during reentry would push surface temperatures far beyond what the bare structure could tolerate, so an effective insulator was needed.
Reentry heating
Reentry heating differs from the normal atmospheric heating associated with jet aircraft, and this governed TPS design and characteristics. The skin of high-speed jet aircraft can also become hot, but this is from frictional heating due to atmospheric friction, similar to warming one's hands by rubbing them together. The orbiter reentered the atmosphere as a blunt body by having a very high (40°) angle of attack, with its broad lower surface facing the direction of flight. Over 80% of the heating the orbiter experiences during reentry is caused by compression of the air ahead of the hypersonic vehicle, in accordance with the basic thermodynamic relation between pressure and temperature. A hot shock wave was created in front of the vehicle, which deflected most of the heat and prevented the orbiter's surface from directly contacting the peak heat. Therefore, reentry heating was largely convective heat transfer between the shock wave and the orbiter's skin through superheated plasma. The key to a reusable shield against this type of heating is very low-density material, similar to how a thermos bottle inhibits convective heat transfer.
Some high-temperature metal alloys can withstand reentry heat; they simply get hot and re-radiate the absorbed heat. This technique, called heat sink thermal protection, was planned for the X-20 Dyna-Soar winged space vehicle. However, the amount of high-temperature metal required to protect a large vehicle like the Space Shuttle Orbiter would have been very heavy and entailed a severe penalty to the vehicle's performance. Similarly, ablative TPS would be heavy, possibly disturb vehicle aerodynamics as it burned off during reentry, and require significant maintenance to reapply after each mission.
Unfortunately, TPS tile, which was originally specified never to take debris strikes during launch, in practice also needed to be closely inspected and repaired after each landing, due to damage potentially incurred during ascent, even before new on-orbit inspection policies were established following the loss of Space Shuttle Columbia. However, the average replacement rate was still low, with Discovery for example still having about 18,000 of its 24,000 tiles be the original at the end of its career.
Detailed description
The TPS was a system of different protection types, not just silica tiles. They are in two basic categories: tile TPS and non-tile TPS. The main selection criteria used the lightest weight protection capable of handling the heat in a given area. However, in some cases a heavier type was used if additional impact resistance was needed. The FIB blankets were primarily adopted for reduced maintenance, not for thermal or weight reasons.
Much of the shuttle was covered with LI-900 silica tiles, made from essentially very pure quartz sand. The insulation prevented heat transfer to the underlying orbiter aluminium skin and structure. These tiles were such poor heat conductors that one could hold one by the edges while it was still red hot.
There were about 24,300 unique tiles individually fitted on the vehicle, for which the orbiter has been called "the flying brickyard". Researchers at University of Minnesota and Pennsylvania State University are performing the atomistic simulations to obtain accurate description of interactions between atomic and molecular oxygen with silica surfaces to develop better high-temperature oxidation-protection systems for leading edges on hypersonic vehicles.
The tiles were not mechanically fastened to the vehicle, but glued. Since the brittle tiles could not flex with the underlying vehicle skin, they were glued to Nomex felt Strain Isolation Pads (SIPs) with room temperature vulcanizing (RTV) silicone adhesive, which were in turn glued to the orbiter skin. These isolated the tiles from the orbiter's structural deflections and expansions. Gluing on the 24,300 tiles required nearly two man-years of work for every flight, partly due to the fact that the glue dried quickly and new batches needed to be produced after every couple of tiles. An ad-hoc remedy that involved technicians spitting in the glue to slow down the drying process was common practice until 1988, when a tile-hazard study revealed that spit weakened the adhesive's bonding strength.
Tile types
High-temperature reusable surface insulation (HRSI)
The black HRSI tiles provided protection against temperatures up to 1,260 °C. There were 20,548 HRSI tiles which covered the landing gear doors, external tank umbilical connection doors, and the rest of the orbiter's under surfaces. They were also used in areas on the upper forward fuselage, parts of the orbital maneuvering system pods, vertical stabilizer leading edge, elevon trailing edges, and upper body flap surface. They varied in thickness depending upon the heat load encountered during reentry. Except for closeout areas, these tiles were normally square. The HRSI tile was composed of high-purity silica fibers. Ninety percent of the volume of the tile was empty space, giving it a very low density and making it light enough for spaceflight. The uncoated tiles were bright white in appearance and looked more like a solid ceramic than the foam-like material that they were.
The black coating on the tiles was Reaction Cured Glass (RCG) of which tetraboron silicide and borosilicate glass were some of several ingredients. RCG was applied to all but one side of the tile to protect the porous silica and to increase the heat sink properties. The coating was absent from a small margin of the sides adjacent to the uncoated (bottom) side. To waterproof the tile, dimethylethoxysilane was injected into the tiles by syringe. Densifying the tile with tetraethyl orthosilicate (TEOS) also helped to protect the silica and added additional waterproofing.
An uncoated HRSI tile held in the hand feels like a very light foam, less dense than styrofoam, and the delicate, friable material must be handled with extreme care to prevent damage. The coating feels like a thin, hard shell and encapsulates the white insulating ceramic to resolve its friability, except on the uncoated side. Even a coated tile feels very light, lighter than a same-sized block of styrofoam. As expected for silica, they are odorless and inert.
HRSI was primarily designed to withstand the transition from the extremely low temperatures of the void of space to the high temperatures of re-entry (caused by interaction, mostly compression at the hypersonic shock, between the gases of the upper atmosphere and the hull of the Space Shuttle).
Fibrous Refractory Composite Insulation Tiles (FRCI)
The black FRCI tiles provided improved durability, resistance to coating cracking and weight reduction. Some HRSI tiles were replaced by this type.
Toughened unipiece fibrous insulation (TUFI)
A stronger, tougher tile which came into use in 1996. TUFI tiles came in high temperature black versions for use in the orbiter's underside, and lower temperature white versions for use on the upper body. While more impact resistant than other tiles, white versions conducted more heat which limited their use to the orbiter's upper body flap and main engine area. Black versions had sufficient heat insulation for the orbiter underside but had greater weight. These factors restricted their use to specific areas.
Low-temperature reusable surface insulation (LRSI)
White in color, these covered the upper wing near the leading edge. They were also used in selected areas of the forward, mid, and aft fuselage, vertical tail, and the OMS/RCS pods. These tiles protected areas where reentry temperatures were comparatively low. The LRSI tiles were manufactured in the same manner as the HRSI tiles, except that the tiles were square and had a white RCG coating made of silica compounds with shiny aluminium oxide. The white color was by design and helped to manage heat on orbit when the orbiter was exposed to direct sunlight.
These tiles were reusable for up to 100 missions with refurbishment (100 missions was also the design lifetime of each orbiter). They were carefully inspected in the Orbiter Processing Facility after each mission, and damaged or worn tiles were immediately replaced before the next mission. Fabric sheets known as gap fillers were also inserted between tiles where necessary. These allowed for a snug fit between tiles, preventing excess plasma from penetrating between them, yet allowing for thermal expansion and flexing of the underlying vehicle skin.
Prior to the introduction of FIB blankets, LRSI tiles occupied all of the areas now covered by the blankets, including the upper fuselage and the whole surface of the OMS pods. This TPS configuration was only used on Columbia and Challenger.
Non-tile TPS
Flexible Insulation Blankets/Advanced Flexible Reusable Insulation (FIB/AFRSI)
Developed after the initial delivery of Columbia and first used on the OMS pods of Challenger. This white low-density fibrous silica batting material had a quilt-like appearance, and replaced the vast majority of the LRSI tiles. They required much less maintenance than LRSI tiles yet had about the same thermal properties. After their limited use on Challenger, they were used much more extensively beginning with Discovery and replaced many of the LRSI tiles on Columbia after the loss of Challenger.
Reinforced carbon-carbon (RCC)
This light gray material withstood the highest reentry temperatures encountered by the orbiter and protected the wing leading edges and nose cap. Each of the orbiter's wings had 22 RCC panels. T-seals between each panel allowed for thermal expansion and lateral movement between these panels and the wing.
RCC was a laminated composite material made from carbon fibres impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate was pyrolized to convert the resin to pure carbon. This was then impregnated with furfural alcohol in a vacuum chamber, then cured and pyrolized again to convert the furfural alcohol to carbon. This process was repeated three times until the desired carbon-carbon properties were achieved.
To provide oxidation resistance for reuse capability, the outer layers of the RCC were coated with silicon carbide. The silicon-carbide coating protected the carbon-carbon from oxidation. The RCC was highly resistant to fatigue loading that was experienced during ascent and entry. It was stronger than the tiles and was also used around the socket of the forward attach point of the orbiter to the External Tank to accommodate the shock loads of the explosive bolt detonation. RCC was the only TPS material that also served as structural support for part of the orbiter's aerodynamic shape: the wing leading edges and the nose cap. All other TPS components (tiles and blankets) were mounted onto structural materials that supported them, mainly the aluminium frame and skin of the orbiter.
Nomex Felt Reusable Surface Insulation (FRSI)
This white, flexible fabric offered protection at up to . FRSI covered the orbiter's upper wing surfaces, upper payload bay doors, portions of the OMS/RCS pods, and aft fuselage.
Gap fillers
Gap fillers were placed at doors and moving surfaces to minimize heating by preventing the formation of vortices. Doors and moving surfaces created open gaps in the heat protection system that had to be protected from heat. Some of these gaps were safe, but there were some areas on the heat shield where surface pressure gradients caused a crossflow of boundary layer air in those gaps.
The filler materials were made of either white AB312 fibers or black AB312 cloth covers (which contain alumina fibers). These materials were used around the leading edge of the nose cap, windshields, side hatch, wing, trailing edge of elevons, vertical stabilizer, the rudder/speed brake, body flap, and heat shield of the shuttle's main engines.
On STS-114, some of this material was dislodged and determined to pose a potential safety risk. It was possible that the gap filler could cause turbulent airflow further down the fuselage, which would result in much higher heating, potentially damaging the orbiter. The cloth was removed during a spacewalk during the mission.
Weight considerations
While reinforced carbon–carbon had the best heat protection characteristics, it was also much heavier than the silica tiles and FIBs, so it was limited to relatively small areas. In general the goal was to use the lightest weight insulation consistent with the required thermal protection. Density of each TPS type:
Total area and weight of each TPS type (used on Orbiter 102, pre-1996):
Early TPS problems
Slow tile application
Tiles often fell off and caused much of the delay in the launch of STS-1, the first shuttle mission, which was originally scheduled for 1979 but did not occur until April 1981. NASA was unused to lengthy delays in its programs, and was under great pressure from the government and military to launch soon. In March 1979 it moved the incomplete Columbia, with 7,800 of the 31,000 tiles missing, from the Rockwell International plant in Palmdale, California to Kennedy Space Center in Florida. Beyond creating the appearance of progress in the program, NASA hoped that the tiling could be finished while the rest of the orbiter was prepared. This was a mistake; some of the Rockwell tilers disliked Florida and soon returned to California, and the Orbiter Processing Facility was not designed for manufacturing and was too small for its 400 workers.
Each tile used cement that required 16 hours to cure. After the tile was affixed to the cement, a jack held it in place for another 16 hours. In March 1979 it took each worker 40 hours to install one tile; by using young, efficient college students during the summer the pace sped up to 1.8 tiles per worker per week. Thousands of tiles failed stress tests and had to be replaced. By fall NASA realized that the speed of tiling would determine the launch date. The tiles were so problematic that officials would have switched to any other thermal protection method, but none other existed.
Because it had to be ferried without all of its tiles, the gaps were filled with material to maintain the Shuttle's aerodynamics while in transit.
Concern over "zipper effect"
The tile TPS was an area of concern during shuttle development, mainly concerning adhesion reliability. Some engineers thought a failure mode could exist whereby one tile could detach, and resulting aerodynamic pressure would create a "zipper effect" stripping off other tiles. Whether during ascent or reentry, the result would be disastrous.
Concern over debris strikes
Another problem was ice or other debris impacting the tiles during ascent. This had never been fully and thoroughly solved, as the debris had never been eliminated, and the tiles remained susceptible to damage from it. NASA's final strategy for mitigating this problem was to aggressively inspect for, assess, and address any damage that may occur, while on orbit and before reentry, in addition to on the ground between flights.
Early tile repair plans
These concerns were sufficiently great that NASA did significant work developing an emergency-use tile repair kit which the STS-1 crew could use before deorbiting. By December 1979, prototypes and early procedures were completed, most of which involved equipping the astronauts with a special in-space repair kit and a jet pack called the Manned Maneuvering Unit, or MMU, developed by Martin Marietta.
Another element was a maneuverable work platform which would secure an MMU-propelled spacewalking astronaut to the fragile tiles beneath the orbiter. The concept used electrically controlled adhesive cups which would lock the work platform into position on the featureless tile surface. About one year before the 1981 STS-1 launch, NASA decided the repair capability was not worth the additional risk and training, so discontinued development. There were unresolved problems with the repair tools and techniques; also further tests indicated the tiles were unlikely to come off. The first shuttle mission did suffer several tile losses, but they were in non-critical areas, and no "zipper effect" occurred.
Columbia accident and aftermath
On February 1, 2003, the Space Shuttle Columbia was destroyed on reentry due to a failure of the TPS. The investigation team found and reported that the probable cause of the accident was that during launch, a piece of foam debris punctured an RCC panel on the left wing's leading edge and allowed hot gases from the reentry to enter the wing and disintegrate the wing from within, leading to eventual loss of control and breakup of the shuttle.
The Space Shuttle's thermal protection system received a number of controls and modifications after the disaster. They were applied to the three remaining shuttles, Discovery, Atlantis and Endeavour in preparation for subsequent mission launches into space.
On 2005's STS-114 mission, in which Discovery made the first flight to follow the Columbia accident, NASA took a number of steps to verify that the TPS was undamaged. The Orbiter Boom Sensor System, a new extension to the Remote Manipulator System, was used to perform laser imaging of the TPS to inspect for damage. Prior to docking with the International Space Station, Discovery performed a Rendezvous Pitch Maneuver, simply a 360° backflip rotation, allowing all areas of the vehicle to be photographed from ISS. Two gap fillers were protruding from the orbiter's underside more than the nominally allowed distance, and the agency cautiously decided it would be best to attempt to remove the fillers or cut them flush rather than risk the increased heating they would cause. Even though each one protruded only a small distance, it was believed that leaving them in place could cause heating increases of 25% upon reentry.
Because the orbiter did not have any handholds on its underside (as they would cause much more trouble with reentry heating than the protruding gap fillers of concern), astronaut Stephen K. Robinson worked from the ISS's robotic arm, Canadarm2. Because the TPS tiles were quite fragile, there had been concern that anyone working under the vehicle could cause more damage to the vehicle than was already there, but NASA officials felt that leaving the gap fillers alone was a greater risk. In the event, Robinson was able to pull the gap fillers free by hand, and caused no damage to the TPS on Discovery.
Tile donations
With the impending Space Shuttle retirement, NASA was donating TPS tiles to schools, universities, and museums for the cost of shipping—US$23.40 each. About 7,000 tiles were available on a first-come, first-served basis, but limited to one each per institution.
See also
Space Shuttle program
Space Shuttle Columbia disaster
Columbia Accident Investigation Board
References
"When the Space Shuttle finally flies", article written by Rick Gore. National Geographic (pp. 316–347. Vol. 159, No. 3. March 1981). http://www.datamanos2.com/columbia/natgeomar81.htmlSpace Shuttle Operator's Manual, by Kerry Mark Joels and Greg Kennedy (Ballantine Books, 1982).The Voyages of Columbia: The First True Spaceship, by Richard S. Lewis (Columbia University Press, 1984).A Space Shuttle Chronology, by John F. Guilmartin and John Mauer (NASA Johnson Space Center, 1988).Space Shuttle: The Quest Continues, by George Forres (Ian Allan, 1989).Information Summaries: Countdown! NASA Launch Vehicles and Facilities, (NASA PMS 018-B (KSC), October 1991).Space Shuttle: The History of Developing the National Space Transportation System, by Dennis Jenkins (Walsworth Publishing Company, 1996).U.S. Human Spaceflight: A Record of Achievement, 1961–1998. NASA – Monographs in Aerospace History No. 9, July 1998.Space Shuttle Thermal Protection System'' by Gary Milgrom. February, 2013. Free iTunes ebook download. https://itunes.apple.com/us/book/space-shuttle-thermal-protection/id591095660?mt=11
Notes
External links
https://web.archive.org/web/20060909094330/http://www-pao.ksc.nasa.gov/kscpao/nasafact/tps.htm
https://web.archive.org/web/20110707103505/http://ww3.albint.com/about/research/Pages/protectionSystems.aspx
http://science.ksc.nasa.gov/shuttle/technology/sts-newsref/sts_sys.html
https://web.archive.org/web/20160307090308/http://science.ksc.nasa.gov/shuttle/nexgen/Nexgen_Downloads/Shuttle_Gordon_TPS-PUBLIC_Appendix.pdf
Space Shuttle program
Thermal protection
Atmospheric entry | Space Shuttle thermal protection system | [
"Engineering"
] | 5,026 | [
"Atmospheric entry",
"Aerospace engineering"
] |
2,231,059 | https://en.wikipedia.org/wiki/Superheavy%20element | Superheavy elements, also known as transactinide elements, transactinides, or super-heavy elements, or superheavies for short, are the chemical elements with atomic number greater than 104. The superheavy elements are those beyond the actinides in the periodic table; the last actinide is lawrencium (atomic number 103). By definition, superheavy elements are also transuranium elements, i.e., having atomic numbers greater than that of uranium (92). Depending on the definition of group 3 adopted by authors, lawrencium may also be included to complete the 6d series.
Glenn T. Seaborg first proposed the actinide concept, which led to the acceptance of the actinide series. He also proposed a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153 (though more recent work suggests the end of the superactinide series to occur at element 157 instead). The transactinide seaborgium was named in his honor.
Superheavies are radioactive and have only been obtained synthetically in laboratories. No macroscopic sample of any of these elements has ever been produced. Superheavies are all named after physicists and chemists or important locations involved in the synthesis of the elements.
IUPAC defines an element to exist if its lifetime is longer than 10^-14 seconds, which is the time it takes for the atom to form an electron cloud.
The known superheavies form part of the 6d and 7p series in the periodic table. Except for rutherfordium and dubnium (and lawrencium if it is included), even the longest-lived known isotopes of superheavies have half-lives of minutes or less. The element naming controversy involved elements 102–109. Some of these elements thus used systematic names for many years after their discovery was confirmed. (Usually the systematic names are replaced with permanent names proposed by the discoverers relatively soon after a discovery has been confirmed.)
Introduction
Synthesis of superheavy nuclei
A superheavy atomic nucleus is created in a nuclear reaction that combines two other nuclei of unequal size into one; roughly, the more unequal the two nuclei in terms of mass, the greater the possibility that the two react. The material made of the heavier nuclei is made into a target, which is then bombarded by the beam of lighter nuclei. Two nuclei can only fuse into one if they approach each other closely enough; normally, nuclei (all positively charged) repel each other due to electrostatic repulsion. The strong interaction can overcome this repulsion but only within a very short distance from a nucleus; beam nuclei are thus greatly accelerated in order to make such repulsion insignificant compared to the velocity of the beam nucleus. The energy applied to the beam nuclei to accelerate them can cause them to reach speeds as high as one-tenth of the speed of light. However, if too much energy is applied, the beam nucleus can fall apart.
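To give a feel for the electrostatic repulsion the beam must overcome, the Coulomb barrier of a fusion reaction can be estimated from the charges and radii of the two nuclei. The sketch below is purely illustrative and not from the article: the radius constant r0 ≈ 1.2 fm is a common textbook value, and the calcium-48 on berkelium-249 example (a reaction used to synthesize tennessine) is chosen only for illustration.

```python
# Rough Coulomb-barrier estimate for a fusion reaction between two nuclei.
# Illustrative sketch only: r0 and the example reaction are assumed values.

E2_MEV_FM = 1.44      # e^2 / (4*pi*eps0) expressed in MeV*fm
R0_FM = 1.2           # nuclear radius constant in fm (typical textbook value)

def coulomb_barrier_mev(z1, a1, z2, a2):
    """Barrier height ~ Z1*Z2*e^2 / (R1 + R2) when the nuclear surfaces touch."""
    r1 = R0_FM * a1 ** (1 / 3)
    r2 = R0_FM * a2 ** (1 / 3)
    return E2_MEV_FM * z1 * z2 / (r1 + r2)

# Example: a calcium-48 beam on a berkelium-249 target (tennessine synthesis).
barrier = coulomb_barrier_mev(20, 48, 97, 249)
print(f"Approximate Coulomb barrier: {barrier:.0f} MeV")   # roughly 235 MeV
```

The result, a few hundred MeV, is of the same order as the beam energies mentioned above, which is why beam nuclei must be accelerated to a sizeable fraction of the speed of light.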
Coming close enough alone is not enough for two nuclei to fuse: when two nuclei approach each other, they usually remain together for about 10^-20 seconds and then part ways (not necessarily in the same composition as before the reaction) rather than form a single nucleus. This happens because during the attempted formation of a single nucleus, electrostatic repulsion tears apart the nucleus that is being formed. Each pair of a target and a beam is characterized by its cross section—the probability that fusion will occur if two nuclei approach one another expressed in terms of the transverse area that the incident particle must hit in order for the fusion to occur. This fusion may occur as a result of the quantum effect in which nuclei can tunnel through electrostatic repulsion. If the two nuclei can stay close past that phase, multiple nuclear interactions result in redistribution of energy and an energy equilibrium.
The resulting merger is an excited state—termed a compound nucleus—and thus it is very unstable. To reach a more stable state, the temporary merger may fission without formation of a more stable nucleus. Alternatively, the compound nucleus may eject a few neutrons, which would carry away the excitation energy; if the latter is not sufficient for a neutron expulsion, the merger would produce a gamma ray. This happens in about 10^-16 seconds after the initial nuclear collision and results in creation of a more stable nucleus. The definition by the IUPAC/IUPAP Joint Working Party (JWP) states that a chemical element can only be recognized as discovered if a nucleus of it has not decayed within 10^-14 seconds. This value was chosen as an estimate of how long it takes a nucleus to acquire electrons and thus display its chemical properties.
Decay and detection
The beam passes through the target and reaches the next chamber, the separator; if a new nucleus is produced, it is carried with this beam. In the separator, the newly produced nucleus is separated from other nuclides (that of the original beam and any other reaction products) and transferred to a surface-barrier detector, which stops the nucleus. The exact location of the upcoming impact on the detector is marked; also marked are its energy and the time of the arrival. The transfer takes about 10^-6 seconds; in order to be detected, the nucleus must survive this long. The nucleus is recorded again once its decay is registered, and the location, the energy, and the time of the decay are measured.
Stability of a nucleus is provided by the strong interaction. However, its range is very short; as nuclei become larger, its influence on the outermost nucleons (protons and neutrons) weakens. At the same time, the nucleus is torn apart by electrostatic repulsion between protons, and its range is not limited. Total binding energy provided by the strong interaction increases linearly with the number of nucleons, whereas electrostatic repulsion increases with the square of the atomic number, i.e. the latter grows faster and becomes increasingly important for heavy and superheavy nuclei. Superheavy nuclei are thus theoretically predicted and have so far been observed to predominantly decay via decay modes that are caused by such repulsion: alpha decay and spontaneous fission. Almost all alpha emitters have over 210 nucleons, and the lightest nuclide primarily undergoing spontaneous fission has 238. In both decay modes, nuclei are inhibited from decaying by corresponding energy barriers for each mode, but they can be tunneled through.
Alpha particles are commonly produced in radioactive decays because the mass of an alpha particle per nucleon is small enough to leave some energy for the alpha particle to be used as kinetic energy to leave the nucleus. Spontaneous fission is caused by electrostatic repulsion tearing the nucleus apart and produces various nuclei in different instances of identical nuclei fissioning. As the atomic number increases, spontaneous fission rapidly becomes more important: spontaneous fission partial half-lives decrease by 23 orders of magnitude from uranium (element 92) to nobelium (element 102), and by 30 orders of magnitude from thorium (element 90) to fermium (element 100). The earlier liquid drop model thus suggested that spontaneous fission would occur nearly instantly due to disappearance of the fission barrier for nuclei with about 280 nucleons. The later nuclear shell model suggested that nuclei with about 300 nucleons would form an island of stability in which nuclei will be more resistant to spontaneous fission and will primarily undergo alpha decay with longer half-lives. Subsequent discoveries suggested that the predicted island might be further than originally anticipated; they also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects. Experiments on lighter superheavy nuclei, as well as those closer to the expected island, have shown greater than previously anticipated stability against spontaneous fission, showing the importance of shell effects on nuclei.
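The competition described above, with cohesion growing roughly in proportion to the number of nucleons while Coulomb repulsion grows roughly with the square of the atomic number, is often summarized by the ratio Z²/A. The following sketch is only illustrative: the nuclide selection is arbitrary, and the commonly quoted liquid-drop threshold near Z²/A ≈ 50 is an approximate figure, not a value taken from this article.

```python
# Compute the ratio Z^2/A, a rough liquid-drop measure of how strongly
# Coulomb repulsion competes with nuclear cohesion. Nuclide list is illustrative.

NUCLIDES = {           # name: (Z, A) for a representative isotope
    "U-238":  (92, 238),
    "Cf-252": (98, 252),
    "Rf-267": (104, 267),
    "Og-294": (118, 294),
}

for name, (z, a) in NUCLIDES.items():
    print(f"{name}: Z^2/A = {z * z / a:.1f}")
# The ratio rises steadily with atomic number; in the simple liquid-drop
# picture the fission barrier vanishes when it reaches roughly 50 (approximate).
```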
Alpha decays are registered by the emitted alpha particles, and the decay products are easy to determine before the actual decay; if such a decay or a series of consecutive decays produces a known nucleus, the original product of a reaction can be easily determined. (That all decays within a decay chain were indeed related to each other is established by the location of these decays, which must be in the same place.) The known nucleus can be recognized by the specific characteristics of decay it undergoes such as decay energy (or more specifically, the kinetic energy of the emitted particle). Spontaneous fission, however, produces various nuclei as products, so the original nuclide cannot be determined from its daughters.
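The bookkeeping implied here, grouping recorded events into candidate decay chains by detector position and arrival time, can be sketched as a small program. This is a hypothetical illustration rather than the analysis software used at any laboratory; the event fields and the one-second correlation window are assumptions.

```python
# Hypothetical sketch of grouping detector events into candidate decay chains
# by position (detector pixel) and arrival time. Not real analysis software.

from dataclasses import dataclass

@dataclass
class Event:
    pixel: int        # position of the signal on the surface-barrier detector
    time_s: float     # arrival time of the signal
    energy_mev: float

def group_into_chains(events, max_gap_s=1.0):
    """Group events sharing a pixel into chains when successive events
    are separated by less than max_gap_s (assumed correlation window)."""
    chains = []
    by_pixel = {}
    for ev in sorted(events, key=lambda e: e.time_s):
        prev = by_pixel.get(ev.pixel)
        if prev is not None and ev.time_s - prev[-1].time_s <= max_gap_s:
            prev.append(ev)          # continue an existing candidate chain
        else:
            chain = [ev]             # start a new candidate chain
            by_pixel[ev.pixel] = chain
            chains.append(chain)
    return chains

events = [Event(42, 0.000, 10.1),    # implantation of the new nucleus
          Event(42, 0.250, 9.6),     # first alpha decay
          Event(42, 0.700, 9.1)]     # second alpha decay
for chain in group_into_chains(events):
    print([f"{e.energy_mev} MeV" for e in chain])
```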
The information available to physicists aiming to synthesize a superheavy element is thus the information collected at the detectors: location, energy, and time of arrival of a particle to the detector, and those of its decay. The physicists analyze this data and seek to conclude that it was indeed caused by a new element and could not have been caused by a different nuclide than the one claimed. Often, provided data is insufficient for a conclusion that a new element was definitely created and there is no other explanation for the observed effects; errors in interpreting data have been made.
History
Early predictions
The heaviest element known at the end of the 19th century was uranium, with an atomic mass of about 240 (now known to be 238) amu. Accordingly, it was placed in the last row of the periodic table; this fueled speculation about the possible existence of elements heavier than uranium and why A = 240 seemed to be the limit. Following the discovery of the noble gases, beginning with argon in 1895, the possibility of heavier members of the group was considered. Danish chemist Julius Thomsen proposed in 1895 the existence of a sixth noble gas with Z = 86, A = 212 and a seventh with Z = 118, A = 292, the last closing a 32-element period containing thorium and uranium. In 1913, Swedish physicist Johannes Rydberg extended Thomsen's extrapolation of the periodic table to include even heavier elements with atomic numbers up to 460, but he did not believe that these superheavy elements existed or occurred in nature.
In 1914, German physicist Richard Swinne proposed that elements heavier than uranium, such as those around Z = 108, could be found in cosmic rays. He suggested that these elements may not necessarily have decreasing half-lives with increasing atomic number, leading to speculation about the possibility of some longer-lived elements at Z = 98–102 and Z = 108–110 (though separated by short-lived elements). Swinne published these predictions in 1926, believing that such elements might exist in Earth's core, iron meteorites, or the ice caps of Greenland where they had been locked up from their supposed cosmic origin.
Discoveries
Work performed from 1961 to 2013 at four labs – Lawrence Berkeley National Laboratory in the US, the Joint Institute for Nuclear Research in the USSR (later Russia), the GSI Helmholtz Centre for Heavy Ion Research in Germany, and Riken in Japan – identified and confirmed the elements lawrencium to oganesson according to the criteria of the IUPAC–IUPAP Transfermium Working Groups and subsequent Joint Working Parties. These discoveries complete the seventh row of the periodic table. The next two elements, ununennium (Z = 119) and unbinilium (Z = 120), have not yet been synthesized. They would begin an eighth period.
List of elements
103 Lawrencium, Lr, for Ernest Lawrence; sometimes but not always included
104 Rutherfordium, Rf, for Ernest Rutherford
105 Dubnium, Db, for the town of Dubna, near Moscow
106 Seaborgium, Sg, for Glenn T. Seaborg
107 Bohrium, Bh, for Niels Bohr
108 Hassium, Hs, for Hassia (Hesse), location of Darmstadt
109 Meitnerium, Mt, for Lise Meitner
110 Darmstadtium, Ds, for Darmstadt
111 Roentgenium, Rg, for Wilhelm Röntgen
112 Copernicium, Cn, for Nicolaus Copernicus
113 Nihonium, Nh, for Nihon (Japan), location of the Riken institute
114 Flerovium, Fl, for Russian physicist Georgy Flyorov
115 Moscovium, Mc, for Moscow
116 Livermorium, Lv, for Lawrence Livermore National Laboratory
117 Tennessine, Ts, for Tennessee, location of Oak Ridge National Laboratory
118 Oganesson, Og, for Russian physicist Yuri Oganessian
Characteristics
Due to their short half-lives (for example, the most stable known isotope of seaborgium has a half-life of 14 minutes, and half-lives decrease with increasing atomic number) and the low yield of the nuclear reactions that produce them, new methods have had to be created to determine their gas-phase and solution chemistry based on very small samples of a few atoms each. Relativistic effects become very important in this region of the periodic table, causing the filled 7s orbitals, empty 7p orbitals, and filling 6d orbitals to all contract inward toward the atomic nucleus. This causes a relativistic stabilization of the 7s electrons and makes the 7p orbitals accessible in low excitation states.
Elements 103 to 112, lawrencium to copernicium, form the 6d series of transition elements. Experimental evidence shows that elements 103–108 behave as expected for their position in the periodic table, as heavier homologs of lutetium through osmium. They are expected to have ionic radii between those of their 5d transition metal homologs and their actinide pseudohomologs: for example, Rf is calculated to have ionic radius 76 pm, between the values for Hf (71 pm) and Th (94 pm). Their ions should also be less polarizable than those of their 5d homologs. Relativistic effects are expected to reach a maximum at the end of this series, at roentgenium (element 111) and copernicium (element 112). Nevertheless, many important properties of the transactinides are still not yet known experimentally, though theoretical calculations have been performed.
Elements 113 to 118, nihonium to oganesson, should form a 7p series, completing the seventh period in the periodic table. Their chemistry will be greatly influenced by the very strong relativistic stabilization of the 7s electrons and a strong spin–orbit coupling effect "tearing" the 7p subshell apart into two sections, one more stabilized (7p1/2, holding two electrons) and one more destabilized (7p3/2, holding four electrons). Lower oxidation states should be stabilized here, continuing group trends, as both the 7s and 7p1/2 electrons exhibit the inert-pair effect. These elements are expected to largely continue to follow group trends, though with relativistic effects playing an increasingly larger role. In particular, the large 7p splitting results in an effective shell closure at flerovium (element 114) and hence a much higher than expected chemical activity for oganesson (element 118).
Element 118 is the last element that has been synthesized. The next two elements, 119 and 120, should form an 8s series and be an alkali and alkaline earth metal respectively. The 8s electrons are expected to be relativistically stabilized, so that the trend toward higher reactivity down these groups will reverse and the elements will behave more like their period 5 homologs, rubidium and strontium. The 7p orbital is still relativistically destabilized, potentially giving these elements larger ionic radii and perhaps even being able to participate chemically. In this region, the 8p electrons are also relativistically stabilized, resulting in a ground-state 8s^2 8p^1 valence electron configuration for element 121. Large changes are expected to occur in the subshell structure in going from element 120 to element 121: for example, the radius of the 5g orbitals should drop drastically, from 25 Bohr units in element 120 in the excited [Og] 5g 8s configuration to 0.8 Bohr units in element 121 in the excited [Og] 5g 7d 8s configuration, in a phenomenon called "radial collapse". Element 122 should add either a further 7d or a further 8p electron to element 121's electron configuration. Elements 121 and 122 should be similar to actinium and thorium respectively.
At element 121, the superactinide series is expected to begin, when the 8s electrons and the filling 8p, 7d, 6f, and 5g subshells determine the chemistry of these elements. Complete and accurate calculations are not available for elements beyond 123 because of the extreme complexity of the situation: the 5g, 6f, and 7d orbitals should have about the same energy level, and in the region of element 160 the 9s, 8p, and 9p orbitals should also be about equal in energy. This will cause the electron shells to mix so that the block concept no longer applies very well, and will also result in novel chemical properties that will make positioning these elements in a periodic table very difficult.
Beyond superheavy elements
It has been suggested that elements beyond Z = 126 be called beyond superheavy elements. Other sources refer to elements around Z = 164 as hyperheavy elements.
See also
Bose–Einstein condensate (also known as Superatom)
Island of stability
Notes
References
Bibliography
The NUBASE2016 table of nuclear and decay properties, Table I, pp. 030001-1 – 030001-138.
Nuclear physics
Sets of chemical elements
Synthetic elements | Superheavy element | [
"Physics",
"Chemistry"
] | 3,635 | [
"Synthetic materials",
"Synthetic elements",
"Radioactivity",
"Nuclear physics"
] |
2,231,692 | https://en.wikipedia.org/wiki/Bohm%20diffusion | The diffusion of plasma across a magnetic field was conjectured to follow the Bohm diffusion scaling as indicated from the early plasma experiments of very lossy machines. This predicted that the rate of diffusion was linear with temperature and inversely linear with the strength of the confining magnetic field.
The rate predicted by Bohm diffusion is much higher than the rate predicted by classical diffusion, which develops from a random walk within the plasma. The classical model scaled inversely with the square of the magnetic field. If the classical model is correct, small increases in the field lead to much longer confinement times. If the Bohm model is correct, magnetically confined fusion would not be practical.
Early fusion energy machines appeared to behave according to Bohm's model, and by the 1960s there was a significant stagnation within the field. The introduction of the tokamak in 1968 was the first evidence that the Bohm model did not hold for all machines. Bohm predicts rates that are too fast for these machines, and classical too slow; studying these machines has led to the neoclassical diffusion concept.
Description
Bohm diffusion is characterized by a diffusion coefficient equal to

D = kBT/(16eB),

where B is the magnetic field strength, T is the electron gas temperature, e is the elementary charge, and kB is the Boltzmann constant.
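As a numerical illustration of this formula, the Bohm coefficient can be evaluated directly. The sketch below assumes SI units and example values (an electron temperature of 100 eV and a field of 1 T); the numbers are illustrative and not taken from the article.

```python
# Evaluate the Bohm diffusion coefficient D = k_B*T / (16*e*B)
# for illustrative plasma parameters (assumed, not from the article).

E_CHARGE = 1.602176634e-19   # elementary charge, C

def bohm_diffusion(temperature_ev, b_tesla):
    """Bohm diffusion coefficient in m^2/s, temperature given in eV."""
    temperature_j = temperature_ev * E_CHARGE     # k_B*T expressed in joules
    return temperature_j / (16.0 * E_CHARGE * b_tesla)

print(f"D_Bohm = {bohm_diffusion(100.0, 1.0):.2f} m^2/s")   # about 6.25 m^2/s
```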
History
It was first observed in 1949 by David Bohm, E. H. S. Burhop, and Harrie Massey while studying magnetic arcs for use in isotope separation. It has since been observed that many other plasmas follow this law. Fortunately, there are exceptions where the diffusion rate is lower; otherwise there would be no hope of achieving practical fusion energy. In Bohm's original work he notes that the fraction 1/16 is not exact; in particular, "the exact value of [the diffusion coefficient] is uncertain within a factor of 2 or 3." Lyman Spitzer considered this fraction as a factor related to plasma instability.
Approximate derivation
Generally diffusion can be modeled as a random walk of steps of length δ and time τ. If the diffusion is collisional, then δ is the mean free path and τ is the inverse of the collision frequency. The diffusion coefficient D can be expressed variously as

D ≈ δ²/τ ≈ v²τ,

where v = δ/τ is the velocity between collisions.
In a magnetized plasma, the collision frequency is usually small compared to the gyrofrequency, so that the step size is the gyroradius ρ and the step time is the collision time, τ, which is related to the collision frequency ν through τ = 1/ν, leading to D ≈ ρ²ν (classical diffusion).
On the other hand, if the collision frequency is larger than the gyrofrequency, then the particles can be considered to move freely with the thermal velocity vth between collisions, and the diffusion coefficient takes the form D ≈ vth²/ν. In this regime, the diffusion is maximum when the collision frequency is equal to the gyrofrequency, in which case D ≈ vth²/ωc. Substituting vth = (kBT/m)^1/2 and ωc = eB/m (the cyclotron frequency), we arrive at

D ≈ kBT/(eB),
which is the Bohm scaling. Considering the approximate nature of this derivation, the missing 1/16 in front is no cause for concern.
Bohm diffusion is typically greater than classical diffusion. The fact that classical diffusion and Bohm diffusion scale as different powers of the magnetic field is often used to distinguish between the two.
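The practical consequence of these different field scalings can be illustrated by comparing rough confinement times τ ≈ a²/D for a device of minor radius a, using the classical estimate D ≈ ρ²ν from the derivation above. All numerical values in this sketch are assumptions chosen only for illustration.

```python
import math

# Compare rough confinement times tau ~ a^2 / D for Bohm vs classical
# diffusion. All parameter values are illustrative assumptions.

E = 1.602176634e-19       # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg

def classical_diffusion(temperature_ev, b_tesla, collision_freq):
    """Classical estimate D ~ rho^2 * nu, with rho the electron gyroradius."""
    v_th = math.sqrt(temperature_ev * E / M_E)   # thermal speed, m/s
    rho = M_E * v_th / (E * b_tesla)             # electron gyroradius, m
    return rho ** 2 * collision_freq

def bohm_diffusion(temperature_ev, b_tesla):
    return temperature_ev / (16.0 * b_tesla)     # m^2/s, temperature in eV

a = 0.5                     # minor radius of the device, m (assumed)
T_ev, nu = 100.0, 1.0e5     # temperature and collision frequency (assumed)

for B in (1.0, 2.0, 4.0):
    tau_cl = a ** 2 / classical_diffusion(T_ev, B, nu)
    tau_bohm = a ** 2 / bohm_diffusion(T_ev, B)
    print(f"B = {B:.0f} T: tau_classical ~ {tau_cl:.2e} s, "
          f"tau_Bohm ~ {tau_bohm:.2e} s")
```

Doubling the field multiplies the classical confinement time by four but the Bohm confinement time only by two, which is the scaling difference used experimentally to distinguish the two regimes.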
Further research
In light of the calculation above, it is tempting to think of Bohm diffusion as classical diffusion with an anomalous collision rate that maximizes the transport, but the physical picture is different. Anomalous diffusion is the result of turbulence. Regions of higher or lower electric potential result in eddies because the plasma moves around them with the E-cross-B drift velocity equal to E/B. These eddies play a similar role to the gyro-orbits in classical diffusion, except that the physics of the turbulence can be such that the decorrelation time is approximately equal to the turn-over time, resulting in Bohm scaling. Another way of looking at it is that the turbulent electric field is approximately equal to the potential perturbation divided by the scale length , and the potential perturbation can be expected to be a sizeable fraction of the kBT/e. The turbulent diffusion constant is then independent of the scale length and is approximately equal to the Bohm value.
The theoretical understanding of plasma diffusion, especially Bohm diffusion, remained elusive until the 1970s, when Taylor and McNamara put forward a 2D guiding center plasma model. The concepts of the negative temperature state and of the convective cells contributed much to the understanding of the diffusion. The underlying physics may be explained as follows. The process can be a transport driven by the thermal fluctuations, corresponding to the lowest possible random electric fields. The low-frequency spectrum will cause the E×B drift. Due to the long range nature of the Coulomb interaction, the wave coherence time is long enough to allow virtually free streaming of particles across the field lines. Thus, the transport would be the only mechanism to limit the run of its own course and to result in a self-correction by quenching the coherent transport through the diffusive damping. To quantify these statements, we may write down the diffusive damping time as

τ ≈ 1/(k⊥²D),

where k⊥ is the wave number perpendicular to the magnetic field. Therefore, the step size is δ ≈ (E/B)τ, and the diffusion coefficient is

D ≈ δ²/τ = (E/B)²τ ≈ E/(k⊥B).
It clearly yields for the diffusion a scaling law of B−1 for the two dimensional plasma. The thermal fluctuation is typically a small portion of the particle thermal energy. It is reduced by the plasma parameter
and is given by
where n0 is the plasma density, λD is the Debye length, and T is the plasma temperature. Taking and substituting the electric field by the thermal energy, we would have
The 2D plasma model becomes invalid when the parallel decoherence is significant.
An effective diffusion mechanism combining effects from the ExB drift and the cyclotron resonance was proposed, predicting a scaling law of B−3/2.
In 2015, a new, exact explanation for Bohm's original experiment was reported, in which the cross-field diffusion measured in Bohm's experiment and in Simon's experiment was explained by the combination of the ion gyro-center shift and the short-circuit effect. The ion gyro-center shift occurs when an ion collides with a neutral to exchange momentum; a typical example is the ion-neutral charge exchange reaction. One-directional shifts of gyro-centers take place when the ions are in drift motion perpendicular to the magnetic field, such as the diamagnetic drift. The electron gyro-center shift is relatively small, since the electron gyro-radius is much smaller than the ion's, so it can be disregarded. Once ions move across the magnetic field by the gyro-center shift, this movement generates a spontaneous electric imbalance between the inside and outside of the plasma. However, this electric imbalance is immediately compensated by electron flow through the parallel path and the conducting end wall, when the plasma is contained in a cylindrical structure as in Bohm's and Simon's experiments. Simon recognized this electron flow and named it the 'short circuit' effect in 1955. With the help of the short-circuit effect, the ion flow induced by the diamagnetic drift now becomes the whole plasma flux, which is proportional to the density gradient since the diamagnetic drift includes the pressure gradient. The diamagnetic drift can be described as v ≈ (kBT/eB)(∇n/n) (here n is the density) for approximately constant temperature over the diffusion region. Since the particle flux is then proportional to ∇n, the remaining factor kBT/eB is the diffusion coefficient, so the diffusion is naturally proportional to T/B. The remaining front coefficient of this diffusion is a function of the ratio between the charge exchange reaction rate and the gyro frequency. A careful analysis shows that this front coefficient for Bohm's experiment was in the range of 1/13 to 1/40. The gyro-center shift analysis also reported a turbulence-induced diffusion coefficient, which is responsible for the anomalous diffusion in many fusion devices. This means that two different diffusion mechanisms (the arc-discharge diffusion as in Bohm's experiment, and the turbulence-induced diffusion as in the tokamak) have been referred to by the same name, "Bohm diffusion".
See also
Classical diffusion
Plasma diffusion
References
Diffusion
Plasma phenomena | Bohm diffusion | [
"Physics",
"Chemistry"
] | 1,683 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Plasma physics",
"Plasma phenomena"
] |
17,446,349 | https://en.wikipedia.org/wiki/Eberlein%E2%80%93%C5%A0mulian%20theorem | In the mathematical field of functional analysis, the Eberlein–Šmulian theorem (named after William Frederick Eberlein and Witold Lwowitsch Schmulian) is a result that relates three different kinds of weak compactness in a Banach space.
Statement
Eberlein–Šmulian theorem: If X is a Banach space and A is a subset of X, then the following statements are equivalent:
each sequence of elements of A has a subsequence that is weakly convergent in X
each sequence of elements of A has a weak cluster point in X
the weak closure of A is weakly compact.
A set A (in any topological space) can be compact in three different ways:
Sequential compactness: Every sequence from A has a convergent subsequence whose limit is in A.
Limit point compactness: Every infinite subset of A has a limit point in A.
Compactness (or Heine-Borel compactness): Every open cover of A admits a finite subcover.
The Eberlein–Šmulian theorem states that the three are equivalent for the weak topology of a Banach space.
While this equivalence is true in general for a metric space, the weak topology is not metrizable in infinite dimensional vector spaces, and so the Eberlein–Šmulian theorem is needed.
Applications
The Eberlein–Šmulian theorem is important in the theory of PDEs, and particularly in Sobolev spaces. Many Sobolev spaces are reflexive Banach spaces and therefore bounded subsets are weakly precompact by Alaoglu's theorem. Thus the theorem implies that bounded subsets are weakly sequentially precompact, and therefore from every bounded sequence of elements of that space it is possible to extract a subsequence which is weakly converging in the space. Since many PDEs only have solutions in the weak sense, this theorem is an important step in deciding which spaces of weak solutions to use in solving a PDE.
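As a concrete instance of this argument, stated here as an illustrative sketch rather than a quotation from the references, the extraction of a weakly convergent subsequence reads as follows:

```latex
% Illustrative sketch (not quoted from the cited literature): extracting a
% weakly convergent subsequence in a reflexive space, e.g. a Sobolev space
% $W^{1,p}(\Omega)$ with $1 < p < \infty$.
Let $X$ be a reflexive Banach space and let $(u_n)_{n \in \mathbb{N}} \subset X$
be a bounded sequence. The closed ball containing the sequence is weakly
compact (Alaoglu's theorem together with reflexivity), and by the
Eberlein--\v{S}mulian theorem it is weakly sequentially compact; hence there
exist a subsequence $(u_{n_k})$ and a limit $u \in X$ such that
\[
  u_{n_k} \rightharpoonup u \quad \text{weakly in } X .
\]
```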
See also
Banach–Alaoglu theorem
Bishop–Phelps theorem
Mazur's lemma
James' theorem
Goldstine theorem
References
Bibliography
Banach spaces
Compactness theorems
Theorems in functional analysis | Eberlein–Šmulian theorem | [
"Mathematics"
] | 459 | [
"Compactness theorems",
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical analysis stubs",
"Theorems in topology",
"Theorems in functional analysis"
] |
17,446,569 | https://en.wikipedia.org/wiki/Godement%20resolution | The Godement resolution of a sheaf is a construction in homological algebra that allows one to view global, cohomological information about the sheaf in terms of local information coming from its stalks. It is useful for computing sheaf cohomology. It was discovered by Roger Godement.
Godement construction
Given a topological space X (more generally, a topos X with enough points), and a sheaf F on X, the Godement construction for F gives a sheaf Gode(F) constructed as follows. For each point x ∈ X, let F_x denote the stalk of F at x. Given an open set U ⊆ X, define

Gode(F)(U) = ∏_{x ∈ U} F_x.

An open subset V ⊆ U clearly induces a restriction map Gode(F)(U) → Gode(F)(V), so Gode(F) is a presheaf. One checks the sheaf axiom easily. One also proves easily that Gode(F) is flabby, meaning each restriction map is surjective. The assignment F ↦ Gode(F) can be turned into a functor because a map between two sheaves induces maps between their stalks. Finally, there is a canonical map of sheaves F → Gode(F) that sends each section to the 'product' of its germs. This canonical map is a natural transformation between the identity functor and Gode.
Another way to view Gode is as follows. Let X_dis be the set X with the discrete topology. Let π : X_dis → X be the continuous map induced by the identity. It induces adjoint direct and inverse image functors π_* and π^*. Then Gode = π_* π^*, and the unit of this adjunction is the natural transformation described above.
Because of this adjunction, there is an associated monad on the category of sheaves on X. Using this monad there is a way to turn a sheaf F into a coaugmented cosimplicial sheaf. This coaugmented cosimplicial sheaf gives rise to an augmented cochain complex that is defined to be the Godement resolution of F.
In more down-to-earth terms, let Gode^0(F) = Gode(F), and let ε : F → Gode^0(F) denote the canonical map. For each p > 0, let K^p denote the cokernel of the previous map (of ε when p = 1, and of the map Gode^{p-2}(F) → Gode^{p-1}(F) when p > 1), let Gode^p(F) denote Gode(K^p), and let the map Gode^{p-1}(F) → Gode^p(F) be the composite of the projection onto K^p with the canonical map K^p → Gode(K^p). The resulting resolution is a flabby resolution of F, and its cohomology is the sheaf cohomology of F.
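In symbols, the resolution just described and its use for computing sheaf cohomology can be summarized as follows; this is a sketch in the Gode^p notation used above, not a formula quoted from elsewhere:

```latex
% Sketch of the Godement resolution of F and the resulting computation of
% sheaf cohomology, in the notation Gode^p(F) introduced above.
\[
  0 \longrightarrow F \longrightarrow \operatorname{Gode}^{0}(F)
    \longrightarrow \operatorname{Gode}^{1}(F)
    \longrightarrow \operatorname{Gode}^{2}(F) \longrightarrow \cdots
\]
% Each term is flabby, hence acyclic for the global-sections functor, so
\[
  H^{i}(X, F) \;\cong\;
  H^{i}\bigl(\Gamma(X, \operatorname{Gode}^{\bullet}(F))\bigr),
  \qquad i \ge 0 .
\]
```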
References
External links
Sheaf theory
Algebraic topology
Homological algebra | Godement resolution | [
"Mathematics"
] | 440 | [
"Mathematical structures",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Sheaf theory",
"Homological algebra"
] |
5,602,902 | https://en.wikipedia.org/wiki/Catenin%20beta-1 | Catenin beta-1, also known as β-catenin (beta-catenin), is a protein that in humans is encoded by the CTNNB1 gene.
β-Catenin is a dual function protein, involved in regulation and coordination of cell–cell adhesion and gene transcription. In humans, the CTNNB1 protein is encoded by the CTNNB1 gene. In Drosophila, the homologous protein is called armadillo. β-catenin is a subunit of the cadherin protein complex and acts as an intracellular signal transducer in the Wnt signaling pathway. It is a member of the catenin protein family and homologous to γ-catenin, also known as plakoglobin. β-Catenin is widely expressed in many tissues. In cardiac muscle, β-catenin localizes to adherens junctions in intercalated disc structures, which are critical for electrical and mechanical coupling between adjacent cardiomyocytes.
Mutations and overexpression of β-catenin are associated with many cancers, including hepatocellular carcinoma, colorectal carcinoma, lung cancer, malignant breast tumors, ovarian and endometrial cancer. Alterations in the localization and expression levels of β-catenin have been associated with various forms of heart disease, including dilated cardiomyopathy. β-Catenin is regulated and destroyed by the beta-catenin destruction complex, and in particular by the adenomatous polyposis coli (APC) protein, encoded by the tumour-suppressing APC gene. Therefore, genetic mutation of the APC gene is also strongly linked to cancers, and in particular colorectal cancer resulting from familial adenomatous polyposis (FAP).
Discovery
β-Catenin was initially discovered in the early 1990s as a component of a mammalian cell adhesion complex: a protein responsible for cytoplasmatic anchoring of cadherins. But very soon, it was realized that the Drosophila protein armadillo – implicated in mediating the morphogenic effects of Wingless/Wnt – is homologous to the mammalian β-catenin, not just in structure but also in function. Thus, β-catenin became one of the first examples of moonlighting: a protein performing more than one radically different cellular function.
Structure
Protein structure
The core of β-catenin consists of several very characteristic repeats, each approximately 40 amino acids long. Termed armadillo repeats, all these elements fold together into a single, rigid protein domain with an elongated shape – called armadillo (ARM) domain. An average armadillo repeat is composed of three alpha helices. The first repeat of β-catenin (near the N-terminus) is slightly different from the others – as it has an elongated helix with a kink, formed by the fusion of helices 1 and 2. Due to the complex shape of individual repeats, the whole ARM domain is not a straight rod: it possesses a slight curvature, so that an outer (convex) and an inner (concave) surface is formed. This inner surface serves as a ligand-binding site for the various interaction partners of the ARM domains.
The segments N-terminal and far C-terminal to the ARM domain do not adopt any structure in solution by themselves. Yet these intrinsically disordered regions play a crucial role in β-catenin function. The N-terminal disordered region contains a conserved short linear motif responsible for binding of TrCP1 (also known as β-TrCP) E3 ubiquitin ligase – but only when it is phosphorylated. Degradation of β-catenin is thus mediated by this N-terminal segment. The C-terminal region, on the other hand, is a strong transactivator when recruited onto DNA. This segment is not fully disordered: part of the C-terminal extension forms a stable helix that packs against the ARM domain, but may also engage separate binding partners. This small structural element (HelixC) caps the C-terminal end of the ARM domain, shielding its hydrophobic residues. HelixC is not necessary for β-catenin to function in cell–cell adhesion. On the other hand, it is required for Wnt signaling: possibly to recruit various coactivators, such as 14-3-3zeta. Yet its exact partners among the general transcription complexes are still incompletely understood, and they likely involve tissue-specific players. Notably, the C-terminal segment of β-catenin can mimic the effects of the entire Wnt pathway if artificially fused to the DNA binding domain of LEF1 transcription factor.
Plakoglobin (also called γ-catenin) has a strikingly similar architecture to that of β-catenin. Not only their ARM domains resemble each other in both architecture and ligand binding capacity, but the N-terminal β-TrCP-binding motif is also conserved in plakoglobin, implying common ancestry and shared regulation with β-catenin. However, plakoglobin is a very weak transactivator when bound to DNA – this is probably caused by the divergence of their C-terminal sequences (plakoglobin appears to lack the transactivator motifs, and thus inhibits the Wnt pathway target genes instead of activating them).
Partners binding to the armadillo domain
As sketched above, the ARM domain of β-catenin acts as a platform to which specific linear motifs may bind. Located in structurally diverse partners, the β-catenin binding motifs are typically disordered on their own, and typically adopt a rigid structure upon ARM domain engagement – as seen for short linear motifs. However, β-catenin interacting motifs also have a number of peculiar characteristics. First, they might reach or even surpass 30 amino acids in length, and contact the ARM domain over an unusually large surface area. Another unusual feature of these motifs is their frequently high degree of phosphorylation. Such Ser/Thr phosphorylation events greatly enhance the binding of many β-catenin associating motifs to the ARM domain.
The structure of β-catenin in complex with the catenin binding domain of the transcriptional transactivation partner TCF provided the initial structural roadmap of how many binding partners of β-catenin may form interactions. This structure demonstrated how the otherwise disordered N-terminus of TCF adapted what appeared to be a rigid conformation, with the binding motif spanning many beta-catenin repeats. Relatively strong charged interaction "hot spots" were defined (predicted, and later verified, to be conserved for the β-catenin/E-cadherin interaction), as well as hydrophobic regions deemed important in the overall mode of binding and as potential therapeutic small molecule inhibitor targets against certain cancer forms. Furthermore, following studies demonstrated another peculiar characteristic, plasticity in the binding of the TCF N-terminus to beta-catenin.
Similarly, we find the familiar E-cadherin, whose cytoplasmatic tail contacts the ARM domain in the same canonical fashion. The scaffold protein axin (two closely related paralogs, axin 1 and axin 2) contains a similar interaction motif on its long, disordered middle segment. Although one molecule of axin only contains a single β-catenin recruitment motif, its partner the adenomatous polyposis coli (APC) protein contains 11 such motifs in tandem arrangement per protomer, thus capable to interact with several β-catenin molecules at once. Since the surface of the ARM domain can typically accommodate only one peptide motif at any given time, all these proteins compete for the same cellular pool of β-catenin molecules. This competition is the key to understand how the Wnt signaling pathway works.
However, this "main" binding site on the ARM domain β-catenin is by no means the only one. The first helices of the ARM domain form an additional, special protein-protein interaction pocket: This can accommodate a helix-forming linear motif found in the coactivator BCL9 (or the closely related BCL9L) – an important protein involved in Wnt signaling. Although the precise details are much less clear, it appears that the same site is used by alpha-catenin when β-catenin is localized to the adherens junctions. Because this pocket is distinct from the ARM domain's "main" binding site, there is no competition between alpha-catenin and E-cadherin or between TCF1 and BCL9, respectively. On the other hand, BCL9 and BCL9L must compete with α-catenin to access β-catenin molecules.
Function
Regulation of degradation through phosphorylation
The cellular level of β-catenin is mostly controlled by its ubiquitination and proteosomal degradation. The E3 ubiquitin ligase TrCP1 (also known as β-TrCP) can recognize β-catenin as its substrate through a short linear motif on the disordered N-terminus. However, this motif (Asp-Ser-Gly-Ile-His-Ser) of β-catenin needs to be phosphorylated on the two serines in order to be capable to bind β-TrCP. Phosphorylation of the motif is performed by Glycogen Synthase Kinase 3 alpha and beta (GSK3α and GSK3β). GSK3s are constitutively active enzymes implicated in several important regulatory processes. There is one requirement, though: substrates of GSK3 need to be pre-phosphorylated four amino acids downstream (C-terminally) of the actual target site. Thus it also requires a "priming kinase" for its activities. In the case of β-catenin, the most important priming kinase is Casein Kinase I (CKI). Once a serine-threonine rich substrate has been "primed", GSK3 can "walk" across it from C-terminal to N-terminal direction, phosphorylating every 4th serine or threonine residues in a row. This process will result in dual phosphorylation of the aforementioned β-TrCP recognition motif as well.
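The priming-then-processive logic of this cascade can be sketched as a small program. The residue numbering used (CKI priming at Ser45 of human β-catenin, followed by GSK3 phosphorylation of Thr41, Ser37 and Ser33, with phospho-Ser33/Ser37 read by β-TrCP) reflects the well-established sites, but the code itself is only a toy illustration, not a bioinformatics tool.

```python
# Toy model of the priming + processive phosphorylation of beta-catenin's
# N-terminal degron. CKI primes Ser45; GSK3 then phosphorylates every 4th
# residue toward the N-terminus (Thr41, Ser37, Ser33). Illustrative only.

PRIMING_SITE = 45                 # CKI target (human beta-catenin numbering)
GSK3_STEP = 4                     # GSK3 acts 4 residues N-terminal to a
                                  # previously phosphorylated residue
BTRCP_DEGRON = {33, 37}           # phospho-residues recognized by beta-TrCP

def phosphorylation_cascade(priming_site, n_steps=3):
    """Return the ordered list of residues phosphorylated in the cascade."""
    phosphorylated = [priming_site]            # CKI priming event
    for _ in range(n_steps):                   # processive GSK3 steps
        phosphorylated.append(phosphorylated[-1] - GSK3_STEP)
    return phosphorylated

sites = phosphorylation_cascade(PRIMING_SITE)
print("Phosphorylated residues:", sites)                             # [45, 41, 37, 33]
print("Degron complete for beta-TrCP:", BTRCP_DEGRON <= set(sites))  # True
```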
The beta-catenin destruction complex
For GSK3 to be a highly effective kinase on a substrate, pre-phosphorylation is not enough. There is one additional requirement: Similar to the mitogen-activated protein kinases (MAPKs), substrates need to associate with this enzyme through high-affinity docking motifs. β-Catenin contains no such motifs, but a special protein does: axin. What is more, its GSK3 docking motif is directly adjacent to a β-catenin binding motif. This way, axin acts as a true scaffold protein, bringing an enzyme (GSK3) together with its substrate (β-catenin) into close physical proximity.
But even axin does not act alone. Through its N-terminal regulator of G-protein signaling (RGS) domain, it recruits the adenomatous polyposis coli (APC) protein. APC is like a huge "Christmas tree": with a multitude of β-catenin binding motifs (one APC molecule alone possesses 11 such motifs ), it may collect as many β-catenin molecules as possible. APC can interact with multiple axin molecules at the same time as it has three SAMP motifs (Ser-Ala-Met-Pro) to bind the RGS domains found in axin. In addition, axin also has the potential to oligomerize through its C-terminal DIX domain. The result is a huge, multimeric protein assembly dedicated to β-catenin phosphorylation. This complex is usually called the beta-catenin destruction complex, although it is distinct from the proteosome machinery actually responsible for β-catenin degradation. It only marks β-catenin molecules for subsequent destruction.
Wnt signaling and the regulation of destruction
In resting cells, axin molecules oligomerize with each other through their C-terminal DIX domains, which have two binding interfaces. Thus they can build linear oligomers or even polymers inside the cytoplasm of cells. DIX domains are unique: the only other proteins known to have a DIX domain are Dishevelled and DIXDC1. (The single Dsh protein of Drosophila corresponds to three paralogous genes, Dvl1, Dvl2 and Dvl3 in mammals.) Dsh associates with the cytoplasmic regions of Frizzled receptors with its PDZ and DEP domains. When a Wnt molecule binds to Frizzled, it induces a poorly understood cascade of events that results in the exposure of dishevelled's DIX domain and the creation of a perfect binding site for axin. Axin is then titrated away from its oligomeric assemblies – the β-catenin destruction complex – by Dsh. Once bound to the receptor complex, axin will be rendered incompetent for β-catenin binding and GSK3 activity. Importantly, the cytoplasmic segments of the Frizzled-associated LRP5 and LRP6 proteins contain GSK3 pseudo-substrate sequences (Pro-Pro-Pro-Ser-Pro-x-Ser), appropriately "primed" (pre-phosphorylated) by CKI, as if they were true substrates of GSK3. These false target sites greatly inhibit GSK3 activity in a competitive manner. In this way, receptor-bound axin can no longer mediate the phosphorylation of β-catenin. Since β-catenin is no longer marked for destruction, but continues to be produced, its concentration will increase. Once β-catenin levels rise high enough to saturate all binding sites in the cytoplasm, it will also translocate into the nucleus. Upon engaging the transcription factors LEF1, TCF1, TCF2 or TCF3, β-catenin forces them to disengage their previous partners: Groucho proteins. Unlike Groucho, which recruits transcriptional repressors (e.g. histone-lysine methyltransferases), β-catenin will bind transcriptional activators, switching on target genes.
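The switch-like rise in β-catenin levels described here can be caricatured with a one-variable kinetic model: constant synthesis and first-order degradation whose rate constant drops once Wnt sequesters axin and the destruction complex stops marking β-catenin. This is a deliberately minimal sketch with made-up rate constants, not a published model of the pathway.

```python
# Minimal kinetic caricature of beta-catenin accumulation after Wnt exposure:
# d[C]/dt = synthesis - k_deg(t) * [C], with k_deg reduced once Wnt disables
# the destruction complex. All rate constants are made-up illustrative values.

SYNTHESIS = 1.0        # arbitrary units per minute
K_DEG_OFF = 1.0        # degradation rate with an active destruction complex
K_DEG_ON = 0.05        # residual degradation once Wnt is present (assumed)
DT = 0.01              # integration step, minutes

def simulate(t_end=120.0, wnt_on_at=60.0):
    c, t, trace = SYNTHESIS / K_DEG_OFF, 0.0, []   # start at the resting steady state
    while t < t_end:
        k_deg = K_DEG_ON if t >= wnt_on_at else K_DEG_OFF
        c += (SYNTHESIS - k_deg * c) * DT          # explicit Euler step
        t += DT
        trace.append((t, c))
    return trace

trace = simulate()
print(f"Before Wnt: ~{trace[0][1]:.2f};  60 min after Wnt: ~{trace[-1][1]:.2f}")
```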
Role in cell–cell adhesion
Cell–cell adhesion complexes are essential for the formation of complex animal tissues. β-catenin is part of a protein complex that forms adherens junctions. These cell–cell adhesion complexes are necessary for the creation and maintenance of epithelial cell layers and barriers. As a component of the complex, β-catenin can regulate cell growth and adhesion between cells. It may also be responsible for transmitting the contact inhibition signal that causes cells to stop dividing once the epithelial sheet is complete. The E-cadherin – β-catenin – α-catenin complex is weakly associated with actin filaments. Adherens junctions require significant protein dynamics in order to link to the actin cytoskeleton,
thereby enabling mechanotransduction.
An important component of the adherens junctions are the cadherin proteins. Cadherins form the cell–cell junctional structures known as adherens junctions as well as the desmosomes. Cadherins are capable of homophilic interactions through their extracellular cadherin repeat domains, in a Ca2+-dependent manner; this can hold adjacent epithelial cells together. While in the adherens junction, cadherins recruit β-catenin molecules onto their intracellular regions. β-catenin, in turn, associates with another highly dynamic protein, α-catenin, which directly binds to the actin filaments. This is possible because α-catenin and cadherins bind at distinct sites to β-catenin. The β-catenin – α-catenin complex can thus physically form a bridge between cadherins and the actin cytoskeleton. Organization of the cadherin–catenin complex is additionally regulated through phosphorylation and endocytosis of its components.
Roles in development
β-Catenin has a central role in directing several developmental processes, as it can directly bind transcription factors and be regulated by a diffusible extracellular substance: Wnt. It acts upon early embryos to induce entire body regions, as well as individual cells in later stages of development. It also regulates physiological regeneration processes.
Early embryonic patterning
Wnt signaling and β-catenin dependent gene expression plays a critical role during the formation of different body regions in the early embryo. Experimentally modified embryos that do not express this protein will fail to develop mesoderm and initiate gastrulation.
Endomesoderm specification in early embryos also involves activation of β-catenin-dependent transcriptional activity by the first morphogenetic movements of embryogenesis, through mechanotransduction processes. Because this feature is shared by vertebrate and arthropod bilateria, and by cnidaria, it has been proposed to have been evolutionarily inherited from its possible involvement in the endomesoderm specification of the first metazoans.
During the blastula and gastrula stages, Wnt as well as BMP and FGF pathways will induce the antero-posterior axis formation, regulate the precise placement of the primitive streak (gastrulation and mesoderm formation) as well as the process of neurulation (central nervous system development).
In Xenopus oocytes, β-catenin is initially equally localized to all regions of the egg, but it is targeted for ubiquitination and degradation by the β-catenin destruction complex. Fertilization of the egg causes a rotation of the outer cortical layers, moving clusters of the Frizzled and Dsh proteins closer to the equatorial region. β-catenin will be enriched locally under the influence of Wnt signaling pathway in the cells that inherit this portion of the cytoplasm. It will eventually translocate to the nucleus to bind TCF3 in order to activate several genes that induce dorsal cell characteristics. This signaling results in a region of cells known as the grey crescent, which is a classical organizer of embryonic development. If this region is surgically removed from the embryo, gastrulation does not occur at all. β-Catenin also plays a crucial role in the induction of the blastopore lip, which in turn initiates gastrulation. Inhibition of GSK-3 translation by injection of antisense mRNA may cause a second blastopore and a superfluous body axis to form. A similar effect can result from the overexpression of β-catenin.
Asymmetric cell division
β-catenin has also been implicated in regulation of cell fates through asymmetric cell division in the model organism C. elegans. Similarly to the Xenopus oocytes, this is essentially the result of non-equal distribution of Dsh, Frizzled, axin and APC in the cytoplasm of the mother cell.
Stem cell renewal
One of the most important results of Wnt signaling and the elevated level of β-catenin in certain cell types is the maintenance of pluripotency. The renewal of stem cells in the colon, for instance, is ensured by such accumulation of β-catenin, which can be stimulated by the Wnt pathway. High-frequency peristaltic mechanical strains of the colon are also involved in the β-catenin-dependent maintenance of homeostatic levels of colonic stem cells through processes of mechanotransduction. This feature is pathologically enhanced towards tumorigenic hyperproliferation in healthy cells compressed by pressure due to genetically altered hyperproliferative tumorous cells.
In other cell types and developmental stages, β-catenin may promote differentiation, especially towards mesodermal cell lineages.
Epithelial-to-mesenchymal transition
β-Catenin also acts as a morphogen in later stages of embryonic development. Together with TGF-β, an important role of β-catenin is to induce a morphogenic change in epithelial cells. It induces them to abandon their tight adhesion and assume a more mobile and loosely associated mesenchymal phenotype. During this process, epithelial cells lose expression of proteins like E-cadherin, Zonula occludens 1 (ZO1), and cytokeratin. At the same time they turn on the expression of vimentin, alpha smooth muscle actin (ACTA2), and fibroblast-specific protein 1 (FSP1). They also produce extracellular matrix components, such as type I collagen and fibronectin. Aberrant activation of the Wnt pathway has been implicated in pathological processes such as fibrosis and cancer. In cardiac muscle development, β-catenin performs a biphasic role. Initially, the activation of Wnt/β-catenin is essential for committing mesenchymal cells to a cardiac lineage; however, in later stages of development, the downregulation of β-catenin is required.
Involvement in cardiac physiology
In cardiac muscle, β-catenin forms a complex with N-cadherin at adherens junctions within intercalated disc structures, which are responsible for electrical and mechanical coupling of adjacent cardiac cells. Studies in a model of adult rat ventricular cardiomyocytes have shown that the appearance and distribution of β-catenin is spatio-temporally regulated during the redifferentiation of these cells in culture. Specifically, β-catenin is part of a distinct complex with N-cadherin and alpha-catenin, which is abundant at adherens junctions in early stages following cardiomyocyte isolation for the reformation of cell–cell contacts. It has been shown that β-catenin forms a complex with emerin in cardiomyocytes at adherens junctions within intercalated discs; and this interaction is dependent on the presence of GSK 3-beta phosphorylation sites on β-catenin. Knocking out emerin significantly altered β-catenin localization and the overall intercalated disc architecture, which resembled a dilated cardiomyopathy phenotype.
In animal models of cardiac disease, functions of β-catenin have been unveiled. In a guinea pig model of aortic stenosis and left ventricular hypertrophy, β-catenin was shown to change subcellular localization from intercalated discs to the cytosol, despite no change in the overall cellular abundance of β-catenin. vinculin showed a similar profile of change. N-cadherin showed no change, and there was no compensatory upregulation of plakoglobin at intercalated discs in the absence of β-catenin. In a hamster model of cardiomyopathy and heart failure, cell–cell adhesions were irregular and disorganized, and expression levels of adherens junction/intercalated disc and nuclear pools of β-catenin were decreased. These data suggest that a loss of β-catenin may play a role in the diseased intercalated discs that have been associated with cardiac muscle hypertrophy and heart failure. In a rat model of myocardial infarction, adenoviral gene transfer of nonphosphorylatable, constitutively-active β-catenin decreased MI size, activated the cell cycle, and reduced the amount of apoptosis in cardiomyocytes and cardiac myofibroblasts. This finding was coordinate with enhanced expression of pro-survival proteins, survivin and Bcl-2, and vascular endothelial growth factor while promoting the differentiation of cardiac fibroblasts into myofibroblasts. These findings suggest that β-catenin can promote the regeneration and healing process following myocardial infarction. In a spontaneously-hypertensive heart failure rat model, investigators detected a shuttling of β-catenin from the intercalated disc/sarcolemma to the nucleus, evidenced by a reduction of β-catenin expression in the membrane protein fraction and an increase in the nuclear fraction. Additionally, they found a weakening in the association between glycogen synthase kinase-3β and β-catenin, which may indicate altered protein stability. Overall, results suggest that an enhanced nuclear localization of β-catenin may be important in the progression of cardiac hypertrophy.
Regarding the mechanistic role of β-catenin in cardiac hypertrophy, transgenic mouse studies have shown somewhat conflicting results regarding whether upregulation of β-catenin is beneficial or detrimental. A recent study using a conditional knockout mouse that either lacked β-catenin altogether or expressed a non-degradable form of β-catenin in cardiomyocytes reconciled a potential reason for these discrepancies. There appears to be strict control over the subcellular localization of β-catenin in cardiac muscle. Mice lacking β-catenin had no overt phenotype in the left ventricular myocardium; however, mice harboring a stabilized form of β-catenin developed dilated cardiomyopathy, suggesting that the temporal regulation of β-catenin by protein degradation mechanisms is critical for normal functioning of β-catenin in cardiac cells. In a mouse model harboring knockout of a desmosomal protein, plakoglobin, implicated in arrhythmogenic right ventricular cardiomyopathy, the stabilization of β-catenin was also enhanced, presumably to compensate for the loss of its plakoglobin homolog. These changes were coordinate with Akt activation and glycogen synthase kinase 3β inhibition, suggesting once again that the abnormal stabilization of β-catenin may be involved in the development of cardiomyopathy. Further studies employing a double knockout of plakoglobin and β-catenin showed that the double knockout developed cardiomyopathy, fibrosis and arrhythmias resulting in sudden cardiac death. Intercalated disc architecture was severely impaired and connexin 43-resident gap junctions were markedly reduced. Electrocardiogram measurements captured spontaneous lethal ventricular arrhythmias in the double transgenic animals, suggesting that the two catenins—β-catenin and plakoglobin—are critical and indispensable for mechanoelectrical coupling in cardiomyocytes.
Clinical significance
Role in depression
Whether or not a given individual's brain can deal effectively with stress, and thus that person's susceptibility to depression, depends on β-catenin signaling in the brain, according to a study conducted at the Icahn School of Medicine at Mount Sinai and published November 12, 2014, in the journal Nature. Higher β-catenin signaling increases behavioral flexibility, whereas defective β-catenin signaling leads to depression and reduced stress management.
Role in cardiac disease
Altered expression profiles in β-catenin have been associated with dilated cardiomyopathy in humans. Upregulation of β-catenin expression has generally been observed in patients with dilated cardiomyopathy. In a particular study, patients with end-stage dilated cardiomyopathy showed almost doubled estrogen receptor alpha (ER-alpha) mRNA and protein levels, and the ER-alpha/beta-catenin interaction, present at intercalated discs of control, non-diseased human hearts, was lost, suggesting that the loss of this interaction at the intercalated disc may play a role in the progression of heart failure. Together with BCL9 and PYGO proteins, β-catenin coordinates different aspects of heart development, and mutations in Bcl9 or Pygo in model organisms - such as the mouse and zebrafish - cause phenotypes that are very similar to human congenital heart disorders.
Involvement in cancer
β-Catenin is a proto-oncogene. Mutations of this gene are commonly found in a variety of cancers: in primary hepatocellular carcinoma, colorectal cancer, ovarian carcinoma, breast cancer, lung cancer and glioblastoma. It has been estimated that approximately 10% of all tissue samples sequenced from all cancers display mutations in the CTNNB1 gene. Most of these mutations cluster on a tiny area of the N-terminal segment of β-catenin: the β-TrCP binding motif. Loss-of-function mutations of this motif essentially make ubiquitinylation and degradation of β-catenin impossible. It will cause β-catenin to translocate to the nucleus without any external stimulus and continuously drive transcription of its target genes. Increased nuclear β-catenin levels have also been noted in basal cell carcinoma (BCC), head and neck squamous cell carcinoma (HNSCC), prostate cancer (CaP), pilomatrixoma (PTR) and medulloblastoma (MDB). These observations may or may not implicate a mutation in the β-catenin gene: other Wnt pathway components can also be faulty.
Similar mutations are also frequently seen in the β-catenin recruiting motifs of APC. Hereditary loss-of-function mutations of APC cause a condition known as familial adenomatous polyposis. Affected individuals develop hundreds of polyps in their large intestine. Most of these polyps are benign in nature, but they have the potential to transform into deadly cancer as time progresses. Somatic mutations of APC in colorectal cancer are also not uncommon. β-Catenin and APC are among the key genes (together with others, like K-Ras and SMAD4) involved in colorectal cancer development. The potential of β-catenin to change the previously epithelial phenotype of affected cells into an invasive, mesenchyme-like type contributes greatly to metastasis formation.
As a therapeutic target
Due to its involvement in cancer development, inhibition of β-catenin continues to receive significant attention. However, targeting the binding site on its armadillo domain is not straightforward, owing to its extensive and relatively flat surface. Efficient inhibition, though, only requires binding to smaller "hotspots" of this surface. Accordingly, a "stapled" helical peptide derived from the natural β-catenin binding motif found in LEF1 was sufficient for the complete inhibition of β-catenin-dependent transcription. Recently, several small-molecule compounds have also been developed to target the same, highly positively charged area of the ARM domain (CGP049090, PKF118-310, PKF115-584 and ZTM000990). In addition, β-catenin levels can be influenced by targeting upstream components of the Wnt pathway as well as the β-catenin destruction complex. The additional N-terminal binding pocket is also important for Wnt target gene activation (it is required for BCL9 recruitment). This site of the ARM domain can be pharmacologically targeted by carnosic acid, for example, and is another attractive target for drug development. Despite intensive preclinical research, no β-catenin inhibitors are yet available as therapeutic agents; in the meantime, its function can be examined experimentally by independently validated siRNA knockdown. Another therapeutic approach for reducing β-catenin nuclear accumulation is inhibition of galectin-3. The galectin-3 inhibitor GR-MD-02 is currently undergoing clinical trials in combination with the FDA-approved dose of ipilimumab in patients who have advanced melanoma. The proteins BCL9 and BCL9L have been proposed as therapeutic targets for colorectal cancers with hyperactivated Wnt signaling, because their deletion does not perturb normal homeostasis but strongly affects metastatic behaviour.
Role in fetal alcohol syndrome
β-Catenin destabilization by ethanol is one of two known pathways whereby alcohol exposure induces fetal alcohol syndrome (the other is ethanol-induced folate deficiency). Ethanol leads to β-catenin destabilization via a G-protein-dependent pathway, wherein activated phospholipase Cβ hydrolyzes phosphatidylinositol-(4,5)-bisphosphate to diacylglycerol and inositol-(1,4,5)-trisphosphate. Soluble inositol-(1,4,5)-trisphosphate triggers calcium release from the endoplasmic reticulum. This sudden increase in cytoplasmic calcium activates Ca2+/calmodulin-dependent protein kinase II (CaMKII). Activated CaMKII destabilizes β-catenin via a poorly characterized mechanism that likely involves β-catenin phosphorylation by CaMKII. The β-catenin transcriptional program (which is required for normal neural crest cell development) is thereby suppressed, resulting in premature neural crest cell apoptosis (cell death).
Interactions
β-Catenin has been shown to interact with:
APC,
AXIN1,
Androgen receptor,
CBY1,
CDH1,
CDH2,
CDH3,
CDK5R1,
CHUK,
CTNND1,
CTNNA1,
EGFR,
Emerin,
ESR1,
FHL2,
GSK3B,
HER2/neu,
HNF4A,
IKK2,
LEF1 including transgenically,
MAGI1,
MUC1,
NR5A1,
PCAF,
PHF17,
Plakoglobin,
PTPN14,
PTPRF,
PTPRK (PTPkappa),
PTPRT (PTPrho),
PTPRU (PCP-2),
PSEN1,
PTK7,
RuvB-like 1,
SMAD7,
SMARCA4,
SLC9A3R1,
USP9X,
VE-cadherin, and
XIRP1.
See also
Catenin
References
Further reading
External links
"A diverse set of proteins modulate the canonical Wnt/β-catenin signaling pathway." at cancer.gov
"The role of β-catenin in signal transduction, cell fate determination and trans-differentiation" at nih.gov
"Researchers Offer First Direct Proof of How Arthritis Destroys Cartilage" at rochester.edu
Signal transduction
Catenins
Oncogenes
Armadillo-repeat-containing proteins | Catenin beta-1 | [
"Chemistry",
"Biology"
] | 7,446 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
5,605,079 | https://en.wikipedia.org/wiki/Proof%20of%20the%20Euler%20product%20formula%20for%20the%20Riemann%20zeta%20function | Leonhard Euler proved the Euler product formula for the Riemann zeta function in his thesis Variae observationes circa series infinitas (Various Observations about Infinite Series), published by St Petersburg Academy in 1737.
The Euler product formula
The Euler product formula for the Riemann zeta function reads
where the left hand side equals the Riemann zeta function:
and the product on the right hand side extends over all prime numbers p:
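Since the displayed formula was elided in this copy, here is the standard statement in LaTeX (a reconstruction, not the original typesetting):

$$\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}} \;=\; 1 + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \cdots \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1-p^{-s}} \;=\; \frac{1}{1-2^{-s}}\cdot\frac{1}{1-3^{-s}}\cdot\frac{1}{1-5^{-s}}\cdots, \qquad \operatorname{Re}(s) > 1.$$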
Proof of the Euler product formula
This sketch of a proof makes use of simple algebra only. This was the method by which Euler originally discovered the formula. There is a certain sieving property that we can use to our advantage:
Subtracting the second equation from the first we remove all elements that have a factor of 2:
Repeating for the next term:
Subtracting again we get:
where all elements having a factor of 3 or 2 (or both) are removed.
It can be seen that the right side is being sieved. Repeating infinitely for where is prime, we get:
Dividing both sides by everything but the ζ(s) we obtain:
This can be written more concisely as an infinite product over all primes p:
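A hedged LaTeX reconstruction of the sieving displays that were elided above:

$$\zeta(s) = 1 + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \frac{1}{4^{s}} + \cdots, \qquad \frac{1}{2^{s}}\zeta(s) = \frac{1}{2^{s}} + \frac{1}{4^{s}} + \frac{1}{6^{s}} + \cdots$$

$$\left(1 - \frac{1}{2^{s}}\right)\zeta(s) = 1 + \frac{1}{3^{s}} + \frac{1}{5^{s}} + \frac{1}{7^{s}} + \cdots$$

$$\left(1 - \frac{1}{3^{s}}\right)\left(1 - \frac{1}{2^{s}}\right)\zeta(s) = 1 + \frac{1}{5^{s}} + \frac{1}{7^{s}} + \frac{1}{11^{s}} + \cdots$$

$$\cdots\left(1 - \frac{1}{7^{s}}\right)\left(1 - \frac{1}{5^{s}}\right)\left(1 - \frac{1}{3^{s}}\right)\left(1 - \frac{1}{2^{s}}\right)\zeta(s) = 1, \qquad\text{so}\qquad \zeta(s) = \prod_{p} \frac{1}{1-p^{-s}}.$$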
To make this proof rigorous, we need only observe that when Re(s) > 1, the sieved right-hand side approaches 1, which follows immediately from the convergence of the Dirichlet series for ζ(s).
The case s = 1
An interesting result can be found for ζ(1), the harmonic series:
Applying the sieve at s = 1 expresses the harmonic series as a product over the primes; each factor 1/(1 - 1/p) can be rewritten as p/(p - 1), so the harmonic series equals the product of p/(p - 1) taken over all primes.
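A plausible LaTeX reconstruction of the elided chain of equalities (the exact displays were lost; the identities below follow from the sieving argument at $s = 1$):

$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots \;=\; \prod_{p} \frac{1}{1-p^{-1}} \;=\; \prod_{p} \frac{p}{p-1} \;=\; \frac{2}{1}\cdot\frac{3}{2}\cdot\frac{5}{4}\cdot\frac{7}{6}\cdot\frac{11}{10}\cdots$$

(both sides understood formally, since each diverges).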
While the ratio test is inconclusive for the left-hand side, the harmonic series may be shown to diverge by bounding it with logarithms. Similarly, for the right-hand side, an infinite product of real factors each greater than one does not by itself guarantee divergence, since such a product may converge.
Instead, the partial products (whose numerators are primorials) may be bounded from below, so that divergence is clear given the double-logarithmic divergence of the inverse prime series.
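A minimal sketch of the elided bound, assuming the intended inequality is the standard $e^{x} \le \frac{1}{1-x}$ for $0 \le x < 1$:

$$\prod_{p \le q} \frac{1}{1-p^{-1}} \;\ge\; \prod_{p \le q} e^{1/p} \;=\; \exp\!\left(\sum_{p \le q} \frac{1}{p}\right) \;\longrightarrow\; \infty \quad (q \to \infty),$$

since the sum of the reciprocals of the primes diverges (growing like $\log \log q$).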
(Note that Euler's original argument ran in the converse direction: he used the Euler product and the divergence of the harmonic series to prove the divergence of the inverse prime series.)
Another proof
Each factor (for a given prime p) in the product above can be expanded to a geometric series consisting of the reciprocal of p raised to multiples of s, as follows
When Re(s) > 1 (so that |p−s| < 1), this series converges absolutely. Hence we may take a finite number of factors, multiply them together, and rearrange terms. Taking all the primes p up to some prime number limit q, we have
where σ is the real part of s. By the fundamental theorem of arithmetic, the partial product, when expanded out, gives a sum consisting of those terms n−s where n is a product of primes less than or equal to q. The inequality results from the fact that only integers larger than q can fail to appear in this expanded partial product. Since the difference between the partial product and ζ(s) goes to zero when σ > 1, we have convergence in this region.
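Since the displayed formulas were elided in this copy, the following is a hedged LaTeX reconstruction of the geometric-series expansion and the partial-product bound described above:

$$\frac{1}{1-p^{-s}} \;=\; 1 + \frac{1}{p^{s}} + \frac{1}{p^{2s}} + \frac{1}{p^{3s}} + \cdots$$

$$\left|\,\zeta(s) - \prod_{p \le q} \frac{1}{1-p^{-s}}\,\right| \;<\; \sum_{n=q+1}^{\infty} \frac{1}{n^{\sigma}},$$

where the right-hand side tends to 0 as $q \to \infty$ whenever $\sigma > 1$.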
See also
Euler product
Riemann zeta function
References
John Derbyshire, Prime Obsession: Bernhard Riemann and The Greatest Unsolved Problem in Mathematics, Joseph Henry Press, 2003,
Notes
Zeta and L-functions
Article proofs
Leonhard Euler
Infinite products | Proof of the Euler product formula for the Riemann zeta function | [
"Mathematics"
] | 693 | [
"Mathematical analysis",
"Article proofs",
"Infinite products"
] |
5,605,480 | https://en.wikipedia.org/wiki/Thermostability | In materials science and molecular biology, thermostability is the ability of a substance to resist irreversible change in its chemical or physical structure, often by resisting decomposition or polymerization, at a high relative temperature.
Thermostable materials may be used industrially as fire retardants. "Thermostable plastic", an uncommon and unconventional term, is more likely to refer to a thermosetting plastic that cannot be reshaped when heated than to a thermoplastic that can be remelted and recast.
Thermostability is also a property of some proteins: a thermostable protein is one that resists changes in its structure when heat is applied.
Thermostable proteins
Most life-forms on Earth live at temperatures of less than 50 °C, commonly from 15 to 50 °C. Within these organisms are macromolecules (proteins and nucleic acids) which form the three-dimensional structures essential to their enzymatic activity. Above the native temperature of the organism, thermal energy may cause these macromolecules to unfold and denature, as the heat can disrupt the intramolecular bonds in the tertiary and quaternary structure. This unfolding results in loss of enzymatic activity, which is understandably deleterious to continuing life-functions. An example is the denaturation of the proteins in albumen from a clear, nearly colourless liquid to an opaque white, insoluble gel.
Proteins capable of withstanding such high temperatures, compared to proteins that cannot, are generally from microorganisms that are hyperthermophiles. Such organisms can withstand temperatures above 50 °C, as they usually live in environments of 85 °C and above. Certain thermophilic life-forms exist which can withstand temperatures above this, and have corresponding adaptations to preserve protein function at these temperatures. These can include altered bulk properties of the cell to stabilize all proteins, and specific changes to individual proteins. Comparing homologous proteins present in these thermophiles and other organisms reveals some differences in protein structure. One notable difference is the presence of extra hydrogen bonds in the thermophile's proteins, meaning that the protein structure is more resistant to unfolding. Similarly, thermostable proteins are rich in salt bridges and/or extra disulfide bridges that stabilize the structure. Other factors of protein thermostability are the compactness of the protein structure, oligomerization, and the strength of interactions between subunits.
Uses and applications
Polymerase chain reactions
Thermostable DNA polymerases such as Taq polymerase and Pfu DNA polymerase are used in polymerase chain reactions (PCR), in which temperatures of 94 °C or higher are used to melt apart the DNA strands in the denaturation step. This resistance to high temperature allows the DNA polymerase to elongate a sequence of interest in the presence of dNTPs.
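As a rough illustration of why the polymerase must survive repeated heating, a generic PCR temperature profile might look like the following Python sketch. The step temperatures and times are only typical, hypothetical values; real protocols depend on the polymerase, primers, and template.

```python
# A generic, illustrative PCR thermal profile; not a protocol for any specific enzyme.
cycle = [
    ("denaturation", 94, 30),   # °C, seconds: melt the DNA duplex apart
    ("annealing",    55, 30),   # primers bind the single strands
    ("extension",    72, 60),   # thermostable polymerase synthesizes new DNA
]

def run_pcr(cycles=30):
    """Print the step sequence a thermostable polymerase must endure."""
    for n in range(1, cycles + 1):
        for step, temp_c, seconds in cycle:
            print(f"cycle {n:02d}: {step:<12} {temp_c} °C for {seconds} s")

run_pcr(cycles=2)  # show the first two of ~30 repeated heating cycles
```

The point of the sketch is simply that the enzyme is exposed to the denaturation temperature dozens of times per run, which a non-thermostable polymerase would not survive.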
Feed additives
Enzymes are often added to animal feed to improve the health and growth of farmed animals, particularly chickens and pigs. The feed is typically treated with high pressure steam to kill bacteria such as Salmonella. Therefore the added enzymes (e.g. phytase and xylanase) must be able to withstand this thermal challenge without being irreversibly inactivated.
Protein purification
Knowledge of an enzyme's resistance to high temperatures is especially beneficial in protein purification. In the procedure of heat denaturation, a mixture of proteins is subjected to high temperatures, which denatures the proteins that are not thermostable and allows isolation of the protein that is thermostable. One notable example of this is found in the purification of alkaline phosphatase from the hyperthermophile Pyrococcus abyssi. This enzyme is known for being heat-stable at temperatures greater than 95 °C, and therefore can be partially purified by heating when heterologously expressed in E. coli. The increase in temperature causes the E. coli proteins to precipitate, while the P. abyssi alkaline phosphatase remains stably in solution.
Glycoside hydrolases
Another important group of thermostable enzymes are the glycoside hydrolases. These enzymes are responsible for the degradation of the major fraction of biomass, the polysaccharides present in starch and lignocellulose. Thus, glycoside hydrolases are gaining great interest in biorefining applications in the future bioeconomy. Some examples are the production of monosaccharides for food applications and as a carbon source for microbial conversion into fuels (ethanol) and chemical intermediates, the production of oligosaccharides for prebiotic applications, and the production of surfactants of the alkyl glycoside type. All of these processes often involve thermal treatments to facilitate polysaccharide hydrolysis, giving thermostable variants of glycoside hydrolases an important role in this context.
Approaches to improve thermostability of proteins
Protein engineering can be used to enhance the thermostability of proteins. A number of site-directed and random mutagenesis techniques, in addition to directed evolution, have been used to increase the thermostability of target proteins. Comparative methods have been used to increase the stability of mesophilic proteins based on comparison to thermophilic homologs. Additionally, analysis of protein unfolding by molecular dynamics can be used to understand the process of unfolding and then design stabilizing mutations. Rational protein engineering for increasing protein thermostability includes mutations which truncate loops, increase the number of salt bridges or hydrogen bonds, or introduce disulfide bonds. In addition, ligand binding can increase the stability of a protein, particularly when purified. Several different forces contribute to the thermostability of a particular protein: hydrophobic interactions, electrostatic interactions, and the presence of disulfide bonds. The overall degree of hydrophobicity in a particular protein contributes to its thermostability. Electrostatic interactions, which include salt bridges and hydrogen bonds, also contribute; salt bridges are largely unaffected by high temperatures and therefore help maintain protein and enzyme stability at elevated temperatures. A third stabilizing feature is the presence of disulfide bonds, which form covalent cross-links between the polypeptide chains; being covalent, these bonds are stronger than intermolecular forces. Glycosylation is another way to improve the thermostability of proteins: stereoelectronic effects in stabilizing interactions between carbohydrate and protein can lead to thermostabilization of the glycosylated protein.
Cyclizing enzymes by covalently linking the N-terminus to the C-terminus has been applied to increase the thermostability of many enzymes. Intein cyclization and SpyTag/SpyCatcher cyclization have often been employed.
Thermostable toxins
Certain poisonous fungi contain thermostable toxins, such as amatoxin, found in the death cap and autumn skullcap mushrooms, and patulin, produced by molds. Applying heat to these therefore does not remove their toxicity, which is of particular concern for food safety.
See also
Thermophiles
Thermus thermophilus
Thermus aquaticus
Pyrococcus furiosus
References
External links
Thermostability of Proteins
Protein structure
Toxicology
Extremophiles | Thermostability | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,588 | [
"Toxicology",
"Organisms by adaptation",
"Extremophiles",
"Bacteria",
"Structural biology",
"Environmental microbiology",
"Protein structure"
] |
5,607,447 | https://en.wikipedia.org/wiki/Space%20medicine | Space Medicine is a subspecialty of Emergency Medicine (Fellowship Training Pathway) which evolved from the Aerospace Medicine specialty. Space Medicine is dedicated to the prevention and treatment of medical conditions that would limit success in space operations. Space medicine focuses specifically on prevention, acute care, emergency medicine, wilderness medicine, hyper/hypobaric medicine in order to provide medical care of astronauts and spaceflight participants. The spaceflight environment poses many unique stressors to the human body, including G forces, microgravity, unusual atmospheres such as low pressure or high carbon dioxide, and space radiation. Space medicine applies space physiology, preventive medicine, primary care, emergency medicine, acute care medicine, austere medicine, public health, and toxicology to prevent and treat medical problems in space. This expertise is additionally used to inform vehicle systems design to minimize the risk to human health and performance while meeting mission objectives.
Astronautical hygiene is the application of science and technology to the prevention or control of exposure to the hazards that may cause astronaut ill health. Both these sciences work together to ensure that astronauts work in a safe environment. Medical consequences such as possible visual impairment and bone loss have been associated with human spaceflight.
In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.
History
Hubertus Strughold (1898–1987), a former Nazi physician and physiologist, was brought to the United States after World War II as part of Operation Paperclip. He first coined the term "space medicine" in 1948 and was the first and only Professor of Space Medicine at the School of Aviation Medicine (SAM) at Randolph Air Force Base, Texas. In 1949, Strughold was made director of the Department of Space Medicine at the SAM (which is now the US Air Force School of Aerospace Medicine (USAFSAM) at Wright-Patterson Air Force Base, Ohio). He played an important role in developing the pressure suit worn by early American astronauts. He was a co-founder of the Space Medicine Branch of the Aerospace Medical Association in 1950. The aeromedical library at Brooks AFB was named after him in 1977, but later renamed because documents from the Nuremberg War Crimes Tribunal linked Strughold to medical experiments in which inmates of the Dachau concentration camp were tortured and killed.
Soviet research into space medicine was centered at the Scientific Research Testing Institute of Aviation Medicine (NIIAM). In 1949, A.M. Vasilevsky, the Minister of Defense of the USSR, acting on the initiative of Sergei Korolev, instructed NIIAM to conduct biological and medical research. In 1951, NIIAM began work on its first research project, entitled "Physiological and hygienic substantiation of flight capabilities in special conditions", which formulated the main research tasks and the necessary requirements for pressurized cabins, life support systems, rescue equipment, and control and recording equipment. The Korolev design bureau created rockets for lifting animals to altitudes of 200–250 km and 500–600 km, and then began work toward artificial satellites and launching a man into space. In 1963, the Institute for Biomedical Problems (IMBP) was founded to undertake the study of space medicine.
Animal testing
Before sending humans, space agencies used animals to study the effects of space travel on the body. After several years of failed animal recoveries, an Aerobee rocket launch in September 1951 achieved the first safe return of a monkey and a group of mice from near-space altitudes. On 3 November 1957, Sputnik 2 became the first mission to carry a living animal to space, a dog named Laika. This flight and others suggested the possibility of safely flying in space within a controlled environment, and provided data on how living beings react to space flight. Later flights carrying cameras to observe the animal subjects showed their responses to in-flight conditions such as high-G and zero-G. The Russian tests yielded especially valuable physiological data from their animal subjects.
On January 31, 1961, a chimpanzee named Ham was launched on a sub-orbital flight aboard a Mercury-Redstone launch vehicle. The flight was meant to model the planned mission of astronaut Alan Shepard. It was planned to reach an altitude of 115 miles and speeds up to 4,400 miles per hour; the actual flight reached 157 miles and a maximum speed of 5,857 miles per hour. During flight, Ham experienced 6.6 minutes of weightlessness. After splashing down in the Atlantic Ocean, Ham was recovered by the USS Donner, having suffered only a bruised nose. Ham's vital signs were monitored and collected throughout the 16-minute flight and used to develop life support systems for later human astronauts.
Animal testing in space continues currently, with mice, ants, and other animals regularly being sent to the International Space Station. In 2014, eight ant colonies were sent to the ISS to investigate the group behavior of ants in microgravity. The ISS allows for the investigation of animal behavior without sending them in specifically designed capsules.
North American X-15
The rocket-powered North American X-15 aircraft provided an early opportunity to study the effects of a near-space environment on human physiology. At its highest operational speed and altitude, the X-15 provided approximately five minutes of weightlessness. This opportunity allowed for the development of devices to facilitate working in low-pressure, high-acceleration environments, such as pressure suits, and of telemetering systems to collect physiological data. These data and technologies allowed for better planning of future space missions.
Project Mercury
Space medicine was a critical factor in the United States human space program, starting with Project Mercury. The main precaution taken by Mercury astronauts to defend against high G environments like launch and reentry was a couch with seat belts to make sure astronauts were not forcibly moved from their position. Additionally, experienced pilots proved to be better able to cope with high G scenarios. One of the pressing concerns with Project Mercury's mission environment was the isolated nature of the cabin. There were deeper concerns about psychological issues than there were about physiological health effects. Substantial animal testing proved beyond a reasonable doubt to NASA engineers that spaceflight could be done safely provided a climate controlled environment.
Project Gemini
The Gemini program primarily addressed the psychological issues of isolation in space with two crewmembers. Upon returning from space, crewmembers were recorded to experience a loss of balance and a decrease in anaerobic ability.
Project Apollo
The Apollo program began with a substantial basis of medical knowledge and precautions from both Mercury and Gemini. The understanding of high and low G environments was well documented and the effects of isolation had been addressed with Gemini and Apollo having multiple occupants in one capsule. The primary research of the Apollo Program focused on pre-flight and post-flight monitoring. Some Apollo mission plans were postponed or altered due to some or all crewmembers contracting a communicable disease. Apollo 14 instituted a form of quarantine for crewmembers so as to curb the passing of typical illnesses. While the efficacy of the Flight Crew Health Stabilization Program was questionable as some crewmembers still contracted diseases, the program showed enough results to maintain implementation with current space programs.
Effects of space-travel
In October 2018, NASA-funded researchers found that lengthy journeys into outer space, including travel to the planet Mars, may substantially damage the gastrointestinal tissues of astronauts. The studies support earlier work that found such journeys could significantly damage the brains of astronauts, and age them prematurely.
In November 2019, researchers reported that astronauts experienced serious blood flow and clot problems while on board the International Space Station, based on a six-month study of 11 healthy astronauts. The results may influence long-term spaceflight, including a mission to the planet Mars, according to the researchers.
Blood clots
Deep vein thrombosis of the internal jugular vein of the neck was first discovered in 2020 in an astronaut on a long duration stay on the ISS, requiring treatment with blood thinners. A subsequent study of eleven astronauts found slowed blood flow in the neck veins and even reversal of blood flow in two of the astronauts. NASA is currently conducting more research to study whether these abnormalities could predispose astronauts to blood clots.
Cardiac rhythms
Heart rhythm disturbances have been seen among astronauts. Most of these have been related to cardiovascular disease, but it is not clear whether this was due to pre-existing conditions or effects of space flight. It is hoped that advanced screening for coronary disease has greatly mitigated this risk. Other heart rhythm problems, such as atrial fibrillation, can develop over time, necessitating periodic screening of crewmembers’ heart rhythms. Beyond these terrestrial heart risks, some concern exists that prolonged exposure to microgravity may lead to heart rhythm disturbances. Although this has not been observed to date, further surveillance is warranted.
Decompression illness in spaceflight
In space, astronauts use a space suit, essentially a self-contained individual spacecraft, to do spacewalks, or extra-vehicular activities (EVAs). Spacesuits are generally inflated with 100% oxygen at a total pressure that is less than a third of normal atmospheric pressure. Eliminating inert atmospheric components such as nitrogen allows the astronaut to breathe comfortably, but also have the mobility to use their hands, arms, and legs to complete required work, which would be more difficult in a higher pressure suit.
After the astronaut dons the spacesuit, air is replaced by 100% oxygen in a process called a "nitrogen purge". In order to reduce the risk of decompression sickness, the astronaut must spend several hours "pre-breathing" at an intermediate nitrogen partial pressure, in order to let their body tissues outgas nitrogen slowly enough that bubbles are not formed. When the astronaut returns to the "shirt sleeve" environment of the spacecraft after an EVA, pressure is restored to whatever the operating pressure of that spacecraft may be, generally normal atmospheric pressure. Decompression illness in spaceflight consists of decompression sickness (DCS) and other injuries due to uncompensated changes in pressure, or barotrauma.
Decompression sickness
Decompression sickness is the injury to the tissues of the body resulting from the presence of nitrogen bubbles in the tissues and blood. This occurs due to a rapid reduction in ambient pressure causing the dissolved nitrogen to come out of solution as gas bubbles within the body. In space the risk of DCS is significantly reduced by using a technique to wash out the nitrogen in the body's tissues. This is achieved by breathing 100% oxygen for a specified period of time before donning the spacesuit, and is continued after a nitrogen purge. DCS may result from inadequate or interrupted pre-oxygenation time, or other factors including the astronaut's level of hydration, physical conditioning, prior injuries and age. Other risks of DCS include inadequate nitrogen purge in the EMU, a strenuous or excessively prolonged EVA, or a loss of suit pressure. Non-EVA crewmembers may also be at risk for DCS if there is a loss of spacecraft cabin pressure.
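The pre-oxygenation logic described above is, at heart, an exponential washout calculation. The following is a toy Python sketch, not an operational NASA protocol: it assumes a single tissue compartment and an illustrative nitrogen half-time, whereas real pre-breathe schedules rely on validated multi-compartment decompression models.

```python
import math

def nitrogen_tension(initial_ppn2, breathing_ppn2, half_time_min, minutes):
    """Single-compartment exponential washout/uptake of dissolved nitrogen.

    initial_ppn2   -- starting tissue nitrogen tension (e.g. ~0.79 atm on sea-level air)
    breathing_ppn2 -- nitrogen partial pressure in the breathing gas (0 for 100% O2)
    half_time_min  -- illustrative tissue half-time in minutes (an assumption)
    minutes        -- duration of the pre-breathe
    """
    k = math.log(2) / half_time_min
    return breathing_ppn2 + (initial_ppn2 - breathing_ppn2) * math.exp(-k * minutes)

# Example: a 4-hour pre-breathe on 100% oxygen with an assumed 360-minute half-time.
print(round(nitrogen_tension(0.79, 0.0, 360.0, 240.0), 3))  # remaining tissue N2 tension, atm
```

The sketch only illustrates why longer pre-breathe times leave less dissolved nitrogen available to form bubbles when suit pressure drops; it should not be read as a usable schedule.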
Symptoms of DCS in space may include chest pain, shortness of breath, cough or pain with a deep breath, unusual fatigue, lightheadedness, dizziness, headache, unexplained musculoskeletal pain, tingling or numbness, extremities weakness, or visual abnormalities.
Primary treatment principles consist of in-suit repressurization to re-dissolve nitrogen bubbles, 100% oxygen to re-oxygenate tissues, and hydration to improve the circulation to injured tissues.
Barotrauma
Barotrauma is the injury to the tissues of air-filled spaces in the body as a result of differences in pressure between the body spaces and the ambient atmospheric pressure. Air-filled spaces include the middle ears, paranasal sinuses, lungs and gastrointestinal tract. Predisposing factors include a pre-existing upper respiratory infection, nasal allergies, recurrent pressure changes, dehydration, or a poor equalizing technique.
Positive pressure in the air-filled spaces results from reduced barometric pressure during the depressurization phase of an EVA. It can cause abdominal distension, ear or sinus pain, decreased hearing, and dental or jaw pain. Abdominal distension can be treated by extending the abdomen, gentle massage, and encouraging the passing of flatus. Ear and sinus pressure can be relieved with passive release of positive pressure. Pretreatment for susceptible individuals can include oral and nasal decongestants, or oral and nasal steroids.
Negative pressure in air fill spaces results from increased barometric pressure during repressurization after an EVA or following a planned restoration of a reduced cabin pressure. Common symptoms include ear or sinus pain, decreased hearing, and tooth or jaw pain.
Treatment may include active positive pressure equalization of ears and sinuses, oral and nasal decongestants, or oral and nasal steroids, and appropriate pain medication if needed.
Decreased immune system functioning
Astronauts in space have weakened immune systems, which means that in addition to increased vulnerability to new exposures, viruses already present in the body—which would normally be suppressed—become active. In space, T-cells do not reproduce properly, and the cells that do exist are less able to fight off infection. NASA research is measuring the change in the immune systems of its astronauts as well as performing experiments with T-cells in space.
On April 29, 2013, scientists at Rensselaer Polytechnic Institute, funded by NASA, reported that, during spaceflight on the International Space Station, microbes seem to adapt to the space environment in ways "not observed on Earth" and in ways that "can lead to increases in growth and virulence".
In March 2019, NASA reported that latent viruses in humans may be activated during space missions, adding possibly more risk to astronauts in future deep-space missions.
Increased infection risk
A 2006 Space Shuttle experiment found that Salmonella typhimurium, a bacterium that can cause food poisoning, became more virulent when cultivated in space. More recently, in 2017, bacteria were found to be more resistant to antibiotics and to thrive in the near-weightlessness of space. Microorganisms have been observed to survive the vacuum of outer space. Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts.
Effects of fatigue
Human spaceflight often requires astronaut crews to endure long periods without rest. Studies have shown that lack of sleep can cause fatigue that leads to errors while performing critical tasks. Also, individuals who are fatigued often cannot determine the degree of their impairment.
Astronauts and ground crews frequently suffer from the effects of sleep deprivation and circadian rhythm disruption. Fatigue due to sleep loss, sleep shifting and work overload could cause performance errors that put space flight participants at risk of compromising mission objectives as well as the health and safety of those on board.
Loss of balance
Leaving and returning to Earth's gravity causes “space sickness,” dizziness, and loss of balance in astronauts. By studying how changes can affect balance in the human body—involving the senses, the brain, the inner ear, and blood pressure—NASA hopes to develop treatments that can be used on Earth and in space to correct balance disorders. Until then, NASA's astronauts must rely on a medication called Midodrine (an “anti-dizzy” pill that temporarily increases blood pressure), and/or promethazine to help carry out the tasks they need to do to return home safely.
Loss of bone density
Spaceflight osteopenia is the bone loss associated with human spaceflight. Calcium metabolism is impaired in microgravity, which causes calcium to leach out of the bones. After a 3–4 month trip into space, it takes about 2–3 years to regain lost bone density. New techniques are being developed to help astronauts recover faster. Research in the following areas holds the potential to aid the process of growing new bone:
Diet and Exercise changes may reduce osteoporosis.
Vibration Therapy may stimulate bone growth.
Medication could trigger the body to produce more of the protein responsible for bone growth and formation.
Loss of muscle mass
In space, muscles in the legs, back, spine, and heart weaken and waste away because they no longer are needed to overcome gravity, just as people lose muscle when they age due to reduced physical activity. Astronauts rely on research in the following areas to build muscle and maintain body mass:
Exercise may build muscle if at least two hours a day is spent doing resistance training routines.
Neuromuscular Electrical Stimulation as a method to prevent muscle atrophy.
Impairment of eyesight
During long space flight missions, astronauts may develop ocular changes and visual impairment collectively known as the Space Associated Neuro-ocular Syndrome (SANS). Such vision problems may be a major concern for future deep space flight missions, including a human mission to Mars.
Loss of mental abilities and risk of Alzheimer's disease
On December 31, 2012, a NASA-supported study reported that human spaceflight may harm the brain of astronauts and accelerate the onset of Alzheimer's disease.
On 2 November 2017, scientists reported that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Astronauts who took longer space trips were associated with greater brain changes.
Orthostatic intolerance
Under the influence of the Earth's gravity, blood and other body fluids are pulled towards the lower body when standing. When gravity is removed during space exploration, hydrostatic pressures throughout the body are removed, and the resulting change in blood distribution may be similar to that of lying down on Earth, where hydrostatic differences are minimized. Upon return to Earth, the reduced blood volume from spaceflight results in orthostatic hypotension. Orthostatic tolerance after spaceflight has been greatly improved by fluid-loading countermeasures taken by astronauts before landing.
Radiation effects
Soviet cosmonaut Valentin Lebedev, who spent 211 days in orbit during 1982 (an absolute record for stay in Earth's orbit), lost his eyesight to progressive cataract. Lebedev stated: “I suffered from a lot of radiation in space. It was all concealed back then, during the Soviet years, but now I can say that I caused damage to my health because of that flight.” On 31 May 2013, NASA scientists reported that a possible human mission to Mars may involve a great radiation risk based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011–2012.
Loss of kidney function
On 11 June 2024, researchers at University College London's Department of Renal Medicine reported that "Serious health risks emerge (with respect to the kidneys) the longer a person is exposed to Galactic Radiation and microgravity." In fact, based on their current research with mice, the researchers predicted that astronauts exposed to microgravity, reduced gravity, and galactic radiation for three years or so on a Mars mission may have to return to Earth attached to dialysis machines.
Sleep disorders
Spaceflight has been observed to disrupt physiological processes that influence sleep patterns in human beings. Astronauts exhibit asynchronized cortisol rhythmicity, dampened diurnal fluctuations in body temperature, and diminished sleep quality. Sleep pattern disruption in astronauts is a form of extrinsic (environmentally caused) circadian rhythm sleep disorder.
Spaceflight analogues
Biomedical research in space is expensive and logistically and technically complicated, and thus limited. Conducting medical research in space alone will not provide humans with the depth of knowledge needed to ensure the safety of inter-planetary travelers. Complementary to research in space is the use of spaceflight analogues. Analogues are particularly useful for the study of immunity, sleep, psychological factors, human performance, habitability, and telemedicine. Examples of spaceflight analogues include confinement chambers (Mars-500), sub-aqua habitats (NEEMO), and Antarctic (Concordia Station) and Arctic (FMARS and the Haughton–Mars Project) stations.
Space medicine careers
Physicians in space medicine generally work in operations or research at NASA or, more recently, space companies that are flying private or commercial astronauts or spaceflight participants.
Research physicians study specific space medical problems, such as the Space Associated Neuro-ocular Syndrome, or focus on medical capabilities for future deep space exploration missions. Research physicians do not have clinical responsibilities in the care of astronauts and thereby are often not specialty-trained in Space Medicine.
Related degrees, areas of specialization, and certifications
There are currently only 3 fellowships in Space Medicine: University of Texas at Houston, UCLA, and Harvard.
See the Aerospace Medicine page for similar preventive medicine training pathways in aerospace medicine.
All of the above training programs should include training in the following areas:
Acute Care Medicine
Commercial Spaceflight Training
Flight Medicine
Interventional Radiology Procedures
Human Life Support Systems for Space
Emergency Medicine
Aerospace studies
Global Health
Hyperbaric and Hypobaric Medicine
Public Health
Disaster medicine
Prehospital medicine
Wilderness and extreme medicine
Space nursing
Space nursing is the nursing specialty that studies how space travel impacts human response patterns. Similar to space medicine, the specialty also contributes to knowledge about nursing care of earthbound patients.
Medicine in flight
Sleep medicine
The use of hypnotic sleep aids is widespread among astronauts, with one 10 year long study finding that 75% and 78% of ISS and space shuttle crew members reported taking such medications while in space. Of astronauts who took hypnotic medications, frequency of use was 52% of all nights. NASA allocates 8.5 hours of 'downtime' for sleep per day for astronauts aboard the ISS, but the average duration of sleep is only 6 hours. Poor sleep quality and quantity can compromise the daytime performance and attentiveness of space crew. As such, improving nighttime sleep has been a topic of NASA-funded research for more than half a century. The following pharmacological and environmental strategies have been investigated in the context of sleep in space:
Light therapy, involving exposure to visible light at varying intensities and wavelengths to entrain circadian rhythm, is a key topic of interest in NASA-funded research. Various photoreceptors in the human eye such as melanopsin, rhodopsin, and photopsin communicate with the suprachiasmatic nucleus (the master circadian pacemaker of the brain) to entrain circadian rhythm. Melanopsin photoreceptors are most sensitive to wavelengths in the range of 470–490 nm (blue light). NASA has trialed and implemented rhythmic light panels on the ISS to help entrain the circadian rhythms of astronauts. NASA is soon to test more advanced light panels that change their output light intensity and wavelengths according to time of day, with red-tinted lights (<600 nm) to be used at night to provide visibility and shorter wavelengths at high light intensity to be used in the 'morning' or at times when alertness and vigilance are needed.
Melatonin, a naturally occurring hormone secreted by pineal gland, has shown positive effects in reducing sleep latency in orbit.
Nonbenzodiazepine sedative-hypnotics (also known as "z drugs") such as Zolpidem, Zopiclone, and Zaleplon are the most commonly dispensed medications on the International Space Station. Despite their widespread use amongst astronauts, relatively little research has been conducted on nonbenzodiazepines in the context of spaceflight. Prior research suggests that nonbenzodiazepines may produce less residual impairment than most benzodiazepines. The shortest-acting nonbenzodiazepine, Zaleplon, produces little to no cognitive impairment (at clinically relevant doses) even when dosed as little as an hour before awakening. Because astronauts frequently take second doses of hypnotic drugs, the shorter duration of action of nonbenzodiazepines may make them better suited to middle-of-the-night dosing.
Benzodiazepines are frequently used medications in space, though less often than nonbenzodiazepine "z-drugs". The longer acting nature of some benzodiazepines used by astronauts, such as temazepam, has been cited as "non-ideal" for spaceflight use due to a high tendency of causing morning impairments.
Modafinil, a wakefulness drug, is available on the space station to mitigate the deleterious effects of sleep disruption and "optimise performance while fatigued". Modafinil has shown positive results in restoring cognitive function to baseline in the face of total sleep deprivation, though no studies examining modafinil's effects in astronauts have been conducted.
Ultrasound and space
Ultrasound is the main diagnostic imaging tool on the ISS and for the foreseeable future missions. X-rays and CT scans involve radiation, which is unacceptable in the space environment. Though MRI uses magnetic fields to create images, it is too large at present to be considered a viable option. Ultrasound, which uses sound waves to create images and comes in laptop-sized packages, provides imaging of a wide variety of tissues and organs. It is currently being used to look at the eyeball and the optic nerve to help determine the cause(s) of changes that NASA has noted mostly in long-duration astronauts. NASA is also pushing the limits of ultrasound use regarding musculoskeletal problems, as these are some of the most common and most likely problems to occur. Significant challenges to using ultrasound on space missions are training the astronaut to use the equipment (ultrasound technicians spend years in training and developing the skills necessary to be "good" at their job) as well as interpreting the images that are captured. Much of ultrasound interpretation is done in real time, but it is impractical to train astronauts to actually read/interpret ultrasounds. Thus, the data are currently sent back to mission control and forwarded to medical personnel to read and interpret. Future exploration-class missions will need to be autonomous because transmission times are too long for urgent/emergent medical conditions. The ability to be autonomous, or to use other equipment such as MRIs, is currently being researched.
Space Shuttle era
With the additional lifting capability presented by the Space Shuttle program, NASA designers were able to create a more comprehensive medical readiness kit, the Shuttle Orbiter Medical System (SOMS). The SOMS consists of two separate packages: the Medications and Bandage Kit (MBK) and the Emergency Medical Kit (EMK). While the MBK contained medications in tablet, capsule, and suppository form, bandage materials, and topical medications, the EMK had medications to be administered by injection, items for performing minor surgeries, diagnostic/therapeutic items, and a microbiological test kit.
John Glenn, the first American astronaut to orbit the Earth, returned with much fanfare to space once again on STS-95 at 77 years of age to confront the physiological challenges preventing long-term space travel for astronauts—loss of bone density, loss of muscle mass, balance disorders, sleep disturbances, cardiovascular changes, and immune system depression—all of which are problems confronting aging people as well as astronauts.
Future investigations
Feasibility of Long Duration Space Flights
In the interest of creating the possibility of longer duration space flight, NASA has invested in the research and application of preventative space medicine, not only for medically preventable pathologies but for trauma as well. Although trauma constitutes more of a life-threatening situation, medically preventable pathologies pose more of a threat to astronauts. "The involved crewmember is endangered because of mission stress and the lack of complete treatment capabilities on board the spacecraft, which could result in the manifestation of more severe symptoms than those usually associated with the same disease in the terrestrial environment. Also, the situation is potentially hazardous for the other crewmembers because the small, closed, ecological system of the spacecraft is conducive to disease transmission. Even if the disease is not transmitted, the safety of the other crewmembers may be jeopardized by the loss of the capabilities of the crewmember who is ill. Such an occurrence will be more serious and potentially hazardous as the durations of crewed missions increase and as operational procedures become more complex. Not only do the health and safety of the crewmembers become critical, but the probability of mission success is lessened if the illness occurs during flight. Aborting a mission to return an ill crewmember before mission goals are completed is costly and potentially dangerous." Treatment of trauma may involve surgery in zero-gravity, which is a challenging proposition given the need for blood sample containment. Diagnosis and monitoring of crew members is a particularly vital need. NASA tested the rHEALTH ONE to advance this capability for use on orbit and during travel to the Moon and Mars. This capability is mapped to Risk of Adverse Health Outcomes and Decrements in Performance Due to Medical Conditions that occur in Mission, as well as Long Term Health Outcomes Due to Mission Exposures. Without an approach to perform onboard medical monitoring, loss of crew members may jeopardize long-duration missions.
Impact on science and medicine
Astronauts are not the only ones who benefit from space medicine research. Several medical products have been developed that are space spinoffs, which are practical applications for the field of medicine arising out of the space program. Because of joint research efforts between NASA, the National Institutes on Aging (a part of the National Institutes of Health), and other aging-related organizations, space exploration has benefited a particular segment of society, seniors. Evidence of aging related medical research conducted in space was most publicly noticeable during STS-95. These spin-offs are sometimes termed as "exomedicine".
Pre-Mercury through Apollo
Radiation therapy for the treatment of cancer: In conjunction with the Cleveland Clinic, the cyclotron at Glenn Research Center in Cleveland, Ohio was used in the first clinical trials for the treatment and evaluation of neutron therapy for cancer patients.
Foldable walkers: Made from a lightweight metal material developed by NASA for aircraft and spacecraft, foldable walkers are portable and easy to manage.
Personal alert systems: These are emergency alert devices that can be worn by individuals who may require emergency medical or safety assistance. When a button is pushed, the device sends a signal to a remote location for help. To send the signal, the device relies on telemetry technology developed at NASA.
CAT and MRI scans: These devices are used by hospitals to see inside the human body. Their development would not have been possible without the technology provided by NASA after it found a way to take better pictures of the Earth's moon.
Neuromuscular Electric Stimulation (NMES): A form of treatment originally developed to combat muscle atrophy in space that has been found to have applications outside of space. A prominent example of NMES being used outside of space medicine is muscle stimulator devices for paralyzed individuals. These devices can be used for up to half an hour per day to prevent muscle atrophy in paralyzed individuals; they provide electrical stimulation to muscles which is equivalent to jogging three miles per week. A well-known example is that Christopher Reeve used these in his therapy. Outside of paralyzed individuals, NMES also has applications in sports medicine, where it is used to manage or prevent the potential damage that high-intensity lifestyles can cause in athletes.
Orthopedic evaluation tools: equipment to evaluate posture, gait and balance disturbances was developed at NASA, along with a radiation-free way to measure bone flexibility using vibration.
Diabetic foot mapping: This technique was developed at NASA's center in Cleveland, Ohio to help monitor the effects of diabetes in feet.
Foam cushioning: special foam used for cushioning astronauts during liftoff is used in pillows and mattresses at many nursing homes and hospitals to help prevent ulcers, relieve pressure, and provide a better night's sleep.
Kidney dialysis machines: the Marquardt Corporation, a company that worked with NASA, was developing a system that would purify and recycle water during space missions in the late 1960s. From this project, the Marquardt Corporation observed that these processes could be used to remove toxic waste from used dialysis fluid, which allowed the development of a kidney dialysis machine. These machines rely on technology developed by NASA in order to process and remove toxic waste from used dialysis fluid.
Talking wheelchairs: paralyzed individuals who have difficulty speaking may use a talking feature on their wheelchairs, which was developed from NASA work on synthesized speech for aircraft. "Talking wheelchairs", or the Versatile Portable Speech Prosthesis (VSP), are a technology that aids communication for non-verbal persons. The project started in May 1978 and finished in November 1981. Originally, this technology was created for people diagnosed with cerebral palsy who were using traditional electric wheelchairs. The technology is portable and versatile, and has been a highly successful speech prosthesis, although the nickname "talking wheelchair" has created some separation from the wheelchair itself. The VSP is easily accessible to the person using it through single or multiple switches or a keyboard, and uses a synthetic voice for verbal speech. The synthetic voice provides communication opportunities that speaking persons take for granted, such as communicating with people in a crowd, communicating in the dark, communicating with people who have vision problems, communicating with younger children, and communicating when the listener's back is turned. The synthetic voice also provides a sense of personal and individual communication, as the keyboard can be programmed with "fun" words as well as "throw-away lines". The first version of the Versatile Portable Speech Prosthesis was completed in May 1979; additions made in November 1979 provided more control over speech. By November 1979, the VSP was capable of taking English text and successfully producing English speech. The user was also able to store and retrieve vocabulary, as well as edit and create new vocabulary. The controls and plugs on the VSP were versatile, allowing plug-and-go ability. Given the limitations of automatic speech recognition (ASR) systems, portable speech prostheses have moved toward the use of silent speech recognition (SSR). The goal of using SSR with a VSP is to recognize speech-related information through modalities such as surface electromyography (sEMG). Speech recognition models used algorithms to extract speech-related features from the sEMG signals; grammar models were used to recognize sequences of words from the patterns of sEMG signals, and phoneme-based models were used to recognize vocabulary of previously untrained words. Multi-point sensors were used with these algorithms; they could be arranged in a flexible way to record sEMG signals from the small articulatory muscles of the human face and neck.
Collapsible, lightweight wheelchairs: wheelchairs designed for portability that can be folded and put into the trunks of cars. They rely on synthetic materials that NASA developed for its aircraft and spacecraft.
Surgically implantable heart pacemaker: these devices depend on technologies developed by NASA for use with satellites. They communicate information about the activity of the pacemaker, such as how much time remains before the batteries need to be replaced.
Implantable heart defibrillator: this tool continuously monitors heart activity and can deliver an electric shock to restore heartbeat regularity.
EMS communications: technology used to communicate telemetry between Earth and space was developed by NASA to monitor the health of astronauts in space from the ground. Ambulances use this same technology to send information—like EKG readings—from patients in transport to hospitals. This allows faster and better treatment.
Weightlessness therapy: The weightlessness of space can allow some individuals with limited mobility on Earth—even those normally confined to wheelchairs—the freedom to move about with ease. Physicist Stephen Hawking took advantage of weightlessness in NASA's Vomit Comet aircraft in 2007. This idea also led to the development of the Anti-Gravity Treadmill from NASA technology, which employs "differential air pressure to mimic...gravity".
Ultrasound microgravity
The Advanced Diagnostic Ultrasound in Microgravity Study is funded by the National Space Biomedical Research Institute and involves the use of ultrasound among Astronauts including former ISS Commanders Leroy Chiao and Gennady Padalka who are guided by remote experts to diagnose and potentially treat hundreds of medical conditions in space. This study has a widespread impact and has been extended to cover professional and Olympic sports injuries as well as medical students. It is anticipated that remote guided ultrasound will have application on Earth in emergency and rural care situations. Findings from this study were submitted for publication to the journal Radiology aboard the International Space Station; the first article submitted in space.
See also
Artificial gravity
Aviation medicine
Bioastronautics
Effect of spaceflight on the human body
Fatigue and sleep loss during spaceflight
Intervertebral disc damage and spaceflight
List of microorganisms tested in outer space
Mars analog habitat
Medical treatment during spaceflight
Microgravity University
Reduced-gravity aircraft
Renal stone formation in space
Spaceflight osteopenia
Spaceflight radiation carcinogenesis
Space food
Space nursing
Space Nursing Society
Space pharmacology
Team composition and cohesion in spaceflight missions
Visual impairment due to intracranial pressure
References
Notes
Sources
External links
Space Medicine Association
Description of space medicine
NASA History Series Publications (many of which are online)
Sleep in Space, Digital Sleep Recorder used by NASA in STS-90 and STS-95 missions
A Solution for Medical Needs and Cramped Quarters in Space – NASA
Human spaceflight programs
International Space Station experiments | Space medicine | [
"Engineering"
] | 7,790 | [
"Space programs",
"Human spaceflight programs"
] |
14,638,490 | https://en.wikipedia.org/wiki/Severe%20combined%20immunodeficiency%20%28non-human%29 | The severe combined immunodeficiency (SCID) is a severe immunodeficiency genetic disorder that is characterized by the complete inability of the adaptive immune system to mount, coordinate, and sustain an appropriate immune response, usually due to absent or atypical T and B lymphocytes. In humans, SCID is colloquially known as "bubble boy" disease, as victims may require complete clinical isolation to prevent lethal infection from environmental microbes.
Several forms of SCID occur in animal species. Not all forms of SCID have the same cause; different genes and modes of inheritance have been implicated in different species.
Horses
Equine SCID is an autosomal recessive disorder that affects the Arabian horse. Similar to the "bubble boy" condition in humans, an affected foal is born with no immune system, and thus generally dies of an opportunistic infection, usually within the first four to six months of life. There is a DNA test that can detect healthy horses who are carriers of the gene causing SCID, thus testing and careful, planned matings can now eliminate the possibility of an affected foal ever being born.
SCID is one of six genetic diseases known to affect horses of Arabian bloodlines, and the only one of the six for which there is a DNA test to determine if a given horse is a carrier of the allele. The only known form of horse SCID involves mutation in DNA-PKcs.
Unlike SCID in humans, which can be treated, for horses, to date, the condition remains a fatal disease. When a horse is heterozygous for the gene, it is a carrier, but perfectly healthy and has no symptoms at all. If two carriers are bred together, however, classic Mendelian genetics indicate that there is a 50% chance of any given mating producing a foal that is a carrier heterozygous for the gene, and a 25% risk of producing a foal affected by the disease. If a horse is found to carry the gene, the breeder can choose to geld a male or spay a female horse so that they cannot reproduce, or they can choose to breed the known carrier only to horses that have been tested and found to be "clear" of the gene. In either case, careful breeding practices can avoid ever producing an SCID-affected foal.
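As an illustration of the carrier-to-carrier mating described above, the following Python sketch (not from the article) enumerates the equally likely offspring genotypes of a single-locus recessive cross:

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """Enumerate equally likely offspring genotypes of a single-locus cross.

    Each parent genotype is a 2-character string, e.g. "Nn" for a carrier
    (N = normal allele, n = recessive SCID allele).
    """
    offspring = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
    total = sum(offspring.values())
    return {genotype: count / total for genotype, count in offspring.items()}

# Carrier x carrier mating, as described for Arabian horses:
print(cross("Nn", "Nn"))
# {'NN': 0.25, 'Nn': 0.5, 'nn': 0.25} -> 25% affected (nn), 50% carriers (Nn)
```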
Dogs
There are two known types of SCID in dogs, an X chromosome-linked form that is very similar to X-SCID in humans, and an autosomal recessive form that is similar to the disease in Arabian horses and SCID mice.
X-SCID in dogs (caused by IL2RG mutation) is seen in Basset Hounds and Cardigan Welsh Corgis. Because it is an X-linked disease, females are carriers only and disease is seen in males exclusively. It is caused by a mutation in the gene for the cytokine receptor common gamma chain. Recurring infections are seen and affected animals usually do not live beyond three to four months. Characteristics include a poorly developed thymus gland, decreased T-lymphocytes and IgG, absent IgA, and normal quantities of IgM. A common cause of death is canine distemper, which develops following vaccination with a modified live distemper virus vaccine. Due to its similarity to X-SCID in humans, breeding colonies of affected dogs have been created in order to study the disease and test treatments, particularly bone marrow transplantation and gene therapy.
The autosomal recessive form of SCID has been identified in one line of Jack Russell Terriers. It is caused by a loss of DNA protein kinase (DNA-PKcs aka PRKDC), which leads to faulty V(D)J recombination. V(D)J recombination is necessary for recognition of a diverse range of antigens from bacteria, viruses, and parasites. It is characterized by nonfunctional T and B-lymphocytes and a complete lack of gammaglobulins. Death is secondary to infection. Differences between this disease and the form found in Bassets and Corgis include a complete lack of IgM and the presence of the disease in females.
Mice
SCID mice are routinely used as model organisms for research into the basic biology of the immune system, cell transplantation strategies, and the effects of disease on mammalian systems. They have been extensively used as hosts for normal and malignant tissue transplants. In addition, they are useful for testing the safety of new vaccines or therapeutic agents in immunocompromised individuals.
The condition is due to a rare recessive mutation on Chromosome 16 responsible for deficient activity of an enzyme involved in DNA repair (Prkdc or "protein kinase, DNA activated, catalytic polypeptide"). Because V(D)J recombination does not occur, the humoral and cellular immune systems fail to mature. As a result, SCID mice have an impaired ability to make T or B lymphocytes, may not activate some components of the complement system, and cannot efficiently fight infections, nor reject tumors and transplants.
In addition to the natural mutation form, SCID in mice can also be created by a targeted knockout of Prkdc. Other human forms of SCID can similarly be mimicked by mutation in genes such as IL2RG (creating a form similar to X-linked SCID). By crossing SCID mice with these other mice, more severely immunocompromised strains can be created to further aid research (e.g. by being less likely to reject transplants). The degree to which the various components of the immune system are compromised varies according to what other mutations the mice carry along with the SCID mutation.
Artificial models
In addition to the natural mutations above, humans have also engineered model organisms to have SCID.
Two laboratory rat models were created in 2022, one having Prkdc knocked out, the other having both Prkdc and Rag2 knocked out.
See also
Severe combined immunodeficiency, for a detailed overview of the condition in humans and an in-depth scientific explanation of the disease
Foal immunodeficiency syndrome
Animal testing on rodents
References
Mammal diseases
Severe combined
Combined T and B–cell immunodeficiencies
Horse diseases | Severe combined immunodeficiency (non-human) | [
"Biology"
] | 1,308 | [
"Model organisms",
"Animal models"
] |
14,640,519 | https://en.wikipedia.org/wiki/Fungal%20mating%20pheromone%20receptors | Fungal pheromone mating factor receptors form a distinct family of G-protein-coupled receptors.
Function
Mating factor receptors STE2 and STE3 are integral membrane proteins that may be involved in the response to mating factors on the cell membrane. The amino acid sequences of both receptors contain high proportions of hydrophobic residues grouped into 7 domains, in a manner reminiscent of the rhodopsins and other receptors believed to interact with G-proteins.
References
G protein-coupled receptors
Protein domains
Protein families
Membrane proteins | Fungal mating pheromone receptors | [
"Chemistry",
"Biology"
] | 103 | [
"Protein classification",
"Signal transduction",
"G protein-coupled receptors",
"Protein domains",
"Membrane proteins",
"Protein families"
] |
14,640,617 | https://en.wikipedia.org/wiki/Cyclic%20AMP%20receptors | Cyclic AMP receptors from slime molds are a distinct family of
G-protein coupled receptors. These receptors control development in
Dictyostelium discoideum.
In D. discoideum, the cyclic AMP receptors coordinate aggregation of individual cells into a multicellular organism, and regulate the expression of a large number of developmentally-regulated genes. The amino acid sequences of the receptors contain high proportions of hydrophobic residues grouped into 7 domains, in a manner reminiscent of the rhodopsins and other receptors believed to interact with G-proteins. However, while a similar 3D framework has been proposed to account for this, there is no significant sequence similarity between these families: the cAMP receptors thus bear their own unique '7TM' signature.
See also
cAMP receptor protein
References
G protein-coupled receptors
Protein domains
Protein families
Membrane proteins | Cyclic AMP receptors | [
"Chemistry",
"Biology"
] | 169 | [
"Protein classification",
"Signal transduction",
"G protein-coupled receptors",
"Protein domains",
"Membrane proteins",
"Protein families"
] |
14,641,222 | https://en.wikipedia.org/wiki/Longitudinal%20stability | In flight dynamics, longitudinal stability is the stability of an aircraft in the longitudinal, or pitching, plane. This characteristic is important in determining whether an aircraft pilot will be able to control the aircraft in the pitching plane without requiring excessive attention or excessive strength.
The longitudinal stability of an aircraft, also called pitch stability, refers to the aircraft's stability in its plane of symmetry about the lateral axis (the axis along the wingspan). It is an important aspect of the handling qualities of the aircraft, and one of the main factors determining the ease with which the pilot is able to maintain level flight.
Longitudinal static stability refers to the aircraft's initial tendency following a disturbance in pitch. Dynamic stability refers to whether oscillations tend to increase, decrease or stay constant.
Static stability
If an aircraft is longitudinally statically stable, a small increase in angle of attack will create a nose-down pitching moment on the aircraft, so that the angle of attack decreases. Similarly, a small decrease in angle of attack will create a nose-up pitching moment so that the angle of attack increases. This means the aircraft will self-correct longitudinal (pitch) disturbances without pilot input.
If an aircraft is longitudinally statically unstable, a small increase in angle of attack will create a nose-up pitching moment on the aircraft, promoting a further increase in the angle of attack.
If the aircraft has zero longitudinal static stability it is said to be statically neutral, and the position of its center of gravity is called the neutral point.
The longitudinal static stability of an aircraft depends on the location of its center of gravity relative to the neutral point. As the center of gravity moves increasingly forward, the pitching moment arm is increased, increasing stability. The distance between the center of gravity and the neutral point is defined as "static margin". It is usually given as a percentage of the mean aerodynamic chord. If the center of gravity is forward of the neutral point, the static margin is positive. If the center of gravity is aft of the neutral point, the static margin is negative. The greater the static margin, the more stable the aircraft will be.
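As an illustration of the definition above, a minimal Python sketch (not part of the article) that computes the static margin as a percentage of the mean aerodynamic chord, using the sign convention described here (positive when the center of gravity is forward of the neutral point):

```python
def static_margin_percent(x_cg, x_np, mac):
    """Static margin as a percentage of the mean aerodynamic chord (MAC).

    x_cg : center-of-gravity position, measured aft from a common datum
    x_np : neutral-point position, measured aft from the same datum
    mac  : mean aerodynamic chord length (same units)

    Positive result -> cg ahead of the neutral point -> statically stable.
    """
    return (x_np - x_cg) / mac * 100.0

# Example (assumed values): neutral point 0.45 m aft of the datum, cg 0.35 m aft, MAC 1.0 m
print(static_margin_percent(0.35, 0.45, 1.0))  # 10.0 -> 10% static margin
```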
Most conventional aircraft have positive longitudinal stability, providing the aircraft's center of gravity lies within the approved range. The operating handbook for every airplane specifies a range over which the center of gravity is permitted to move. If the center of gravity is too far aft, the aircraft will be unstable. If it is too far forward, the aircraft will be excessively stable, which makes the aircraft "stiff" in pitch and hard for the pilot to bring the nose up for landing. Required control forces will be greater.
Some aircraft have low stability to reduce trim drag. This has the benefit of reducing fuel consumption. Some aerobatic and fighter aircraft may have low or even negative stability to provide high manoeuvrability. Low or negative stability is called relaxed stability. An aircraft with low or negative static stability will typically have fly-by-wire controls with computer augmentation to assist the pilot. Otherwise, an aircraft with negative longitudinal stability will be more difficult to fly. It will be necessary for the pilot to devote more effort, make more frequent inputs to the elevator control, and make larger inputs in an attempt to maintain the desired pitch attitude.
For an aircraft to possess positive static stability, it is not necessary for its level to return to exactly what it was before the upset. It is sufficient that the speed and orientation do not continue to diverge but undergo at least a small change back towards the original speed and orientation.
The deployment of flaps will increase longitudinal stability.
Unlike motion about the other two axes, and in the other degrees of freedom of the aircraft (sideslip translation, rotation in roll, rotation in yaw), which are usually heavily coupled, motion in the longitudinal plane does not typically cause a roll or yaw.
A larger horizontal stabilizer, and a greater moment arm of the horizontal stabilizer about the neutral point, will increase longitudinal stability.
Tailless aircraft
For a tailless aircraft, the neutral point coincides with the aerodynamic center, and so for such aircraft to have longitudinal static stability, the center of gravity must lie ahead of the aerodynamic center.
For missiles with symmetric airfoils, the neutral point and the center of pressure are coincident and the term neutral point is not used.
An unguided rocket must have a large positive static margin so the rocket shows minimum tendency to diverge from the direction of flight given to it at launch. In contrast, guided missiles usually have a negative static margin for increased maneuverability.
Dynamic stability
Longitudinal dynamic stability of a statically stable aircraft refers to whether the aircraft will continue to oscillate after a disturbance, or whether the oscillations are damped. A dynamically stable aircraft will experience oscillations reducing to nil. A dynamically neutral aircraft will continue to oscillate around its original level, and dynamically unstable aircraft will experience increasing oscillations and displacement from its original level.
Dynamic stability is caused by damping. If damping is too great, the aircraft will be less responsive and less manoeuvrable.
Decreasing phugoid (long-period) oscillations can be achieved by building a smaller stabilizer on a longer tail, and by shifting the center of gravity to the rear.
An aircraft that is not statically stable cannot be dynamically stable.
Analysis
Near the cruise condition most of the lift force is generated by the wings, with ideally only a small amount generated by the fuselage and tail. We may analyse the longitudinal static stability by considering the aircraft in equilibrium under wing lift, tail force, and weight. The moment equilibrium condition is called trim, and we are generally interested in the longitudinal stability of the aircraft about this trim condition.
Equating forces in the vertical direction:

W = L_w + L_t

where W is the weight, L_w is the wing lift and L_t is the tail force.
For a thin airfoil at low angle of attack, the wing lift is proportional to the angle of attack:

L_w = q S_w \frac{\partial C_L}{\partial \alpha} (\alpha + \alpha_0)

where S_w is the wing area, C_L is the (wing) lift coefficient and \alpha is the angle of attack. The term \alpha_0 is included to account for camber, which results in lift at zero angle of attack. Finally, q is the dynamic pressure:

q = \frac{1}{2} \rho v^2

where \rho is the air density and v is the speed.
Trim
The force from the tail-plane is proportional to its angle of attack, including the effects of any elevator deflection and any adjustment the pilot has made to trim-out any stick force. In addition, the tail is located in the flow field of the main wing, and consequently experiences downwash, reducing its angle of attack.
In a statically stable aircraft of conventional (tail in rear) configuration, the tail-plane force may act upward or downward depending on the design and the flight conditions. In a typical canard aircraft both fore and aft planes are lifting surfaces. The fundamental requirement for static stability is that the aft surface must have greater authority (leverage) in restoring a disturbance than the forward surface has in exacerbating it. This leverage is a product of moment arm from the center of gravity and surface area. Correctly balanced in this way, the partial derivative of pitching moment with respect to changes in angle of attack will be negative: a momentary pitch up to a larger angle of attack makes the resultant pitching moment tend to pitch the aircraft back down. (Here, pitch is used casually for the angle between the nose and the direction of the airflow; angle of attack.) This is the "stability derivative" d(M)/d(alpha), described below.
The tail force is, therefore:

L_t = q S_t \left[ \frac{\partial C_l}{\partial \alpha} (\alpha - \epsilon) + \frac{\partial C_l}{\partial \eta} \eta \right]

where S_t is the tail area, C_l is the tail force coefficient, \eta is the elevator deflection, and \epsilon is the downwash angle.
A canard aircraft may have its foreplane rigged at a high angle of incidence, which can be seen in a canard catapult glider from a toy store; the design puts the c.g. well forward, requiring nose-up lift.
Violations of the basic principle are exploited in some high performance "relaxed static stability" combat aircraft to enhance agility; artificial stability is supplied by active electronic means.
There are a few classical cases where this favorable response was not achieved, notably in T-tail configurations. A T-tail airplane has a higher horizontal tail that passes through the wake of the wing later (at a higher angle of attack) than a lower tail would, and at this point the wing has already stalled and has a much larger separated wake. Inside the separated wake, the tail sees little to no freestream and loses effectiveness. Elevator control power is also heavily reduced or even lost, and the pilot is unable to easily escape the stall. This phenomenon is known as 'deep stall'.
Taking moments about the center of gravity, the net nose-up moment is:

M = L_w x_g - L_t l_t

where x_g is the location of the center of gravity behind the aerodynamic center of the main wing, and l_t is the tail moment arm.
For trim, this moment must be zero. For a given maximum elevator deflection, there is a corresponding limit on center of gravity position at which the aircraft can be kept in equilibrium. When limited by control deflection this is known as a 'trim limit'. In principle trim limits could determine the permissible forwards and rearwards shift of the center of gravity, but usually it is only the forward cg limit which is determined by the available control, the aft limit is usually dictated by stability.
In a missile context 'trim limit' more usually refers to the maximum angle of attack, and hence lateral acceleration which can be generated.
Static stability
The nature of stability may be examined by considering the increment in pitching moment with change in angle of attack at the trim condition. If this is nose up, the aircraft is longitudinally unstable; if nose down it is stable. Differentiating the moment equation with respect to \alpha:

\frac{\partial M}{\partial \alpha} = x_g \frac{\partial L_w}{\partial \alpha} - l_t \frac{\partial L_t}{\partial \alpha}

Note: \frac{\partial M}{\partial \alpha} is a stability derivative.
It is convenient to treat total lift as acting at a distance h ahead of the centre of gravity, so that the moment equation may be written:
Applying the increment in angle of attack:
Equating the two expressions for moment increment:
The total lift is the sum of and so the sum in the denominator can be simplified and written as the derivative of the total lift due to angle of attack, yielding:
Where c is the mean aerodynamic chord of the main wing. The term:

V_t = \frac{S_t l_t}{S_w c}

is known as the tail volume ratio. Its coefficient, the ratio of the two lift derivatives, has values in the range of 0.50 to 0.65 for typical configurations. Hence the expression for h may be written more compactly, though somewhat approximately, as:
is known as the static margin. For stability it must be negative. (However, for consistency of language, the static margin is sometimes taken as , so that positive stability is associated with positive static margin.)
See also
Directional stability
Flight dynamics
Handling qualities
Phugoid
Yaw damper
References
Aerospace engineering
Aircraft aerodynamics
Flight control systems
Aviation science | Longitudinal stability | [
"Engineering"
] | 2,227 | [
"Aerospace engineering"
] |
14,642,717 | https://en.wikipedia.org/wiki/Tilt%20test%20%28vehicle%20safety%20test%29 | The tilt test is a type of safety test that certain government vehicle certification bodies require new vehicle designs to pass before being allowed on the road or rail track.
The test is an assessment of the weight distribution and hence the position of the centre of gravity of the vehicle, and can be carried out in a laden or unladen state, i.e. with or without passengers or freight. The test can be applied to automobiles, trucks, buses and rail vehicles.
The test involves tilting the vehicle towards its side on a movable platform. In order to pass the test, the vehicle must not tip over before a specified angle of tilt is reached by the table.
In the United Kingdom, double-decker buses have to: "be capable of leaning, fully laden on top, at an angle of 28 deg without toppling over before they are allowed on the road."
The same 28-degree requirement is in place in Hong Kong for double-decker buses. For single-deckers the requirement is 35 degrees.
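For a rigid vehicle, the tilt angle at which tipping begins follows from the geometry of the centre of gravity and the wheel track. The following Python sketch is an illustration only; the input values are assumptions, not regulatory figures:

```python
import math

def tip_angle_degrees(track_width, cg_height):
    """Tilt-table angle at which a rigid vehicle begins to tip.

    Tipping starts when the centre of gravity passes vertically over the
    downhill line of wheel contact: tan(theta) = (track_width / 2) / cg_height.
    Suspension and tyre deflection, ignored here, reduce the real-world angle.
    """
    return math.degrees(math.atan((track_width / 2.0) / cg_height))

# Illustrative double-decker-like numbers (assumed, not measured values):
print(round(tip_angle_degrees(track_width=2.5, cg_height=1.8), 1))  # ~34.8 degrees
```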
See also
Vehicle metrics
Weight distribution
Moose test
References
Product safety
Vehicle design | Tilt test (vehicle safety test) | [
"Engineering"
] | 230 | [
"Vehicle design",
"Design"
] |
14,642,741 | https://en.wikipedia.org/wiki/Digital%20image%20correlation%20and%20tracking | Digital image correlation and tracking is an optical method that employs tracking and image registration techniques for accurate 2D and 3D measurements of changes in images. This method is often used to measure full-field displacement and strains, and it is widely applied in many areas of science and engineering. Compared to strain gauges and extensometers, digital image correlation methods provide finer details about deformation, due to the ability to provide both local and average data.
Overview
Digital image correlation (DIC) techniques have been increasing in popularity, especially in micro- and nano-scale mechanical testing applications due to their relative ease of implementation and use. Advances in computer technology and digital cameras have been the enabling technologies for this method and while white-light optics has been the predominant approach, DIC can be and has been extended to almost any imaging technology.
The concept of using cross-correlation to measure shifts in datasets has been known for a long time, and it has been applied to digital images since at least the early 1970s. The present-day applications are almost innumerable, including image analysis, image compression, velocimetry, and strain estimation. Much early work in DIC in the field of mechanics was led by researchers at the University of South Carolina in the early 1980s and has been optimized and improved in recent years. Commonly, DIC relies on finding the maximum of the correlation array between pixel intensity array subsets on two or more corresponding images, which gives the integer translational shift between them. It is also possible to estimate shifts to a finer resolution than the resolution of the original images, which is often called "sub-pixel" registration because the measured shift is smaller than an integer pixel unit. For sub-pixel interpolation of the shift, other methods do not simply maximize the correlation coefficient. An iterative approach can also be used to maximize the interpolated correlation coefficient by using non-linear optimization techniques. The non-linear optimization approach tends to be conceptually simpler and can handle large deformations more accurately, but as with most nonlinear optimization techniques , it is slower.
The two-dimensional discrete cross-correlation can be defined in several ways, one possibility being:

r = \frac{\sum_m \sum_n [f(m,n) - \bar{f}][g(m,n) - \bar{g}]}{\sqrt{\sum_m \sum_n [f(m,n) - \bar{f}]^2 \, \sum_m \sum_n [g(m,n) - \bar{g}]^2}}

Here f(m, n) is the pixel intensity or the gray-scale value at a point (m, n) in the original image, g(m, n) is the gray-scale value at a point (m, n) in the translated image, and \bar{f} and \bar{g} are the mean values of the intensity matrices f and g respectively.
However, in practical applications, the correlation array is usually computed using Fourier-transform methods, since the fast Fourier transform is a much faster method than directly computing the correlation.
Writing F and G for the Fourier transforms of the two images, taking the complex conjugate of the second result and multiplying the Fourier transforms together elementwise, we obtain the Fourier transform of the correlogram:

R = F \circ G^*

where \circ is the Hadamard product (entry-wise product). It is also fairly common to normalize the magnitudes to unity at this point, which results in a variation called phase correlation.

Then the cross-correlation is obtained by applying the inverse Fourier transform:

r = \mathcal{F}^{-1}\{R\}

At this point, the coordinates of the maximum of r give the integer shift:

(\Delta x, \Delta y) = \arg\max_{(x,y)} \{ r(x, y) \}
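A minimal NumPy sketch of this FFT-based phase correlation (an illustration, not a reference implementation from the DIC literature):

```python
import numpy as np

def integer_shift(moved, reference):
    """Estimate the integer (dy, dx) translation of `moved` relative to
    `reference` using FFT-based phase correlation."""
    F = np.fft.fft2(moved)
    G = np.fft.fft2(reference)
    R = F * np.conj(G)                 # Fourier transform of the correlogram
    R /= np.abs(R) + 1e-12             # unit magnitudes -> phase correlation
    r = np.real(np.fft.ifft2(R))       # correlogram back in image space
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # Peaks past the half-way point correspond to negative (wrapped) shifts
    if dy > moved.shape[0] // 2:
        dy -= moved.shape[0]
    if dx > moved.shape[1] // 2:
        dx -= moved.shape[1]
    return int(dy), int(dx)

# Example: translate a random test image by (3, -5) pixels and recover the shift
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
moved = np.roll(reference, shift=(3, -5), axis=(0, 1))
print(integer_shift(moved, reference))   # -> (3, -5)
```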
Deformation mapping
For deformation mapping, the mapping function that relates the images can be derived from comparing a set of subwindow pairs over the whole images (Figure 1). The coordinates or grid points (xi, yj) and (xi*, yj*) are related by the translations that occur between the two images. If the deformation is small and perpendicular to the optical axis of the camera, then the relation between (xi, yj) and (xi*, yj*) can be approximated by a 2D affine transformation such as:

x^* = x + u + \frac{\partial u}{\partial x} \Delta x + \frac{\partial u}{\partial y} \Delta y

y^* = y + v + \frac{\partial v}{\partial x} \Delta x + \frac{\partial v}{\partial y} \Delta y

Here u and v are translations of the center of the sub-image in the X and Y directions respectively. The distances from the center of the sub-image to the point (x, y) are denoted by \Delta x and \Delta y. Thus, the correlation coefficient rij is a function of the displacement components (u, v) and the displacement gradients \partial u/\partial x, \partial u/\partial y, \partial v/\partial x and \partial v/\partial y.
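A small NumPy sketch of this first-order subset shape function (an illustration; the function and variable names are chosen here, not taken from the article):

```python
import numpy as np

def warp_subset(xc, yc, dx, dy, u, v, du_dx, du_dy, dv_dx, dv_dy):
    """First-order (affine) subset shape function used in DIC.

    (xc, yc)  : subset centre in the reference image
    (dx, dy)  : offsets of subset points from the centre (arrays)
    u, v      : subset-centre displacements
    du_*, dv_*: displacement gradients over the subset
    Returns the corresponding point coordinates in the deformed image.
    """
    x_star = xc + dx + u + du_dx * dx + du_dy * dy
    y_star = yc + dy + v + dv_dx * dx + dv_dy * dy
    return x_star, y_star

# Example: a 21x21 pixel subset stretched 1% in x and translated by (2.5, -1.0)
dy, dx = np.mgrid[-10:11, -10:11]
xs, ys = warp_subset(100, 100, dx, dy, u=2.5, v=-1.0,
                     du_dx=0.01, du_dy=0.0, dv_dx=0.0, dv_dy=0.0)
```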
DIC has proven to be very effective at mapping deformation in macroscopic mechanical testing, where the application of specular markers (e.g. paint, toner powder) or surface finishes from machining and polishing provide the needed contrast to correlate images well. However, these methods for applying surface contrast do not extend to the application of free-standing thin films for several reasons. First, vapor deposition at normal temperatures on semiconductor grade substrates results in mirror-finish quality films with RMS roughnesses that are typically on the order of several nanometers. No subsequent polishing or finishing steps are required, and unless electron imaging techniques are employed that can resolve microstructural features, the films do not possess enough useful surface contrast to adequately correlate images. Typically this challenge can be circumvented by applying paint that results in a random speckle pattern on the surface, although the large and turbulent forces resulting from either spraying or applying paint to the surface of a free-standing thin film are too high and would break the specimens. In addition, the sizes of individual paint particles are on the order of μms, while the film thickness is only several hundred nanometers, which would be analogous to supporting a large boulder on a thin sheet of paper.
μDIC
Advances in pattern application and deposition at reduced length scales have exploited small-scale synthesis methods including nano-scale chemical surface restructuring and photolithography of computer-generated random specular patterns to produce suitable surface contrast for DIC. The application of very fine powder particles that electrostatically adhere to the surface of the specimen and can be digitally tracked is one approach. For Al thin films, fine alumina abrasive polishing powder was initially used since the particle sizes are relatively well controlled, although the adhesion to Al films was not very good and the particles tended to agglomerate excessively. The candidate that worked most effectively was a silica powder designed for a high temperature adhesive compound (Aremco, inc.), which was applied through a plastic syringe.
A light blanket of powder would coat the gage section of the tensile sample and the larger particles could be blown away gently. The remaining particles would be those with the best adhesion to the surface. While the resulting surface contrast is not ideal for DIC, the high intensity ratio between the particles and the background provide a unique opportunity to track the particles between consecutive digital images taken during deformation. This can be achieved quite straightforwardly using digital image processing techniques. Sub-pixel tracking can be achieved by a number of correlation techniques, or by fitting to the known intensity profiles of particles.
Photolithography and Electron Beam Lithography can be used to create micro tooling for micro speckle stamps, and the stamps can print speckle patterns onto the surface of the specimen. Stamp inks can be chosen which are appropriate for optical DIC, SEM-DIC, and simultaneous SEM-DIC/EBSD studies (the ink can be transparent to EBSD).
Digital volume correlation
Digital Volume Correlation (DVC, and sometimes called Volumetric-DIC) extends the 2D-DIC algorithms into three dimensions to calculate the full-field 3D deformation from a pair of 3D images. This technique is distinct from 3D-DIC, which only calculates the 3D deformation of an exterior surface using conventional optical images. The DVC algorithm is able to track full-field displacement information in the form of voxels instead of pixels. The theory is similar to above except that another dimension is added: the z-dimension. The displacement is calculated from the correlation of 3D subsets of the reference and deformed volumetric images, which is analogous to the correlation of 2D subsets described above.
DVC can be performed using volumetric image datasets. These images can be obtained using confocal microscopy, X-ray computed tomography, Magnetic Resonance Imaging or other techniques. Similar to the other DIC techniques, the images must exhibit a distinct, high-contrast 3D "speckle pattern" to ensure accurate displacement measurement.
DVC was first developed in 1999 to study the deformation of trabecular bone using X-ray computed tomography images. Since then, applications of DVC have grown to include granular materials, metals, foams, composites and biological materials. To date it has been used with images acquired by MRI imaging, Computer Tomography (CT), micro-CT, confocal microscopy, and lightsheet microscopy. DVC is currently considered to be ideal in the research world for 3D quantification of local displacements, strains, and stress in biological specimens. It is preferred because of the non-invasiveness of the method over traditional experimental methods.
Two of the key challenges are improving the speed and reliability of the DVC measurement. The 3D imaging techniques produce noisier images than conventional 2D optical images, which reduces the quality of the displacement measurement. Computational speed is restricted by the file sizes of 3D images, which are significantly larger than 2D images. For example, an 8-bit (1024x1024) pixel 2D image has a file size of 1 MB, while an 8-bit (1024x1024x1024) voxel 3D image has a file size of 1 GB. This can be partially offset using parallel computing.
Applications
Digital image correlation has demonstrated uses in the following industries:
Automotive
Aerospace
Biological
Industrial
Research and Education
Government and Military
Biomechanics
Robotics
Electronics
It has also been used for mapping earthquake deformation.
DIC Standardization
The International Digital Image Correlation Society (iDICs) is a body composed of members from academia, government, and industry, and is involved in training and educating end-users about DIC systems and the standardization of DIC practice for general applications. Created in 2015, the iDICs has been focused on creating standards for DIC users.
See also
Optical flow
Stress
Strain
Displacement vector
Particle Image Velocimetry
Digital Image Correlation for Electronics
References
External links
Mathematica ImageCorrelate function
Using Digital Image Correlation to Measure Strain on a Turbine Blade
Image Systems DIC
DIC in Electronic Design
DIC Applications in Aerospace
3D Optical Strain Measurements
The International Digital Image Correlation Society (iDICs)
Continuum mechanics
Materials science
Optical metrology
Image processing | Digital image correlation and tracking | [
"Physics",
"Materials_science",
"Engineering"
] | 2,103 | [
"Applied and interdisciplinary physics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"nan"
] |
14,643,727 | https://en.wikipedia.org/wiki/Pirani%20gauge | The Pirani gauge is a robust thermal conductivity gauge used for the measurement of the pressures in vacuum systems. It was invented in 1906 by Marcello Pirani.
Marcello Stefano Pirani was a German physicist working for Siemens & Halske which was involved in the vacuum lamp industry. In 1905 their product was tantalum lamps which required a high vacuum environment for the filaments. The gauges that Pirani was using in the production environment were some fifty McLeod gauges, each filled with 2 kg of mercury in glass tubes.
Pirani was aware of the gas thermal conductivity investigations of Kundt and Warburg (1875) published thirty years earlier and the work of Marian Smoluchowski (1898). In 1906 he described his "directly indicating vacuum gauge" that used a heated wire to measure vacuum by monitoring the heat transfer from the wire by the vacuum environment.
Structure
The Pirani gauge consists of a metal sensor wire (usually gold plated tungsten or platinum) suspended in a tube which is connected to the system whose vacuum is to be measured. The wire is usually coiled to make the gauge more compact. The connection is usually made either by a ground glass joint or a flanged metal connector, sealed with an o-ring. The sensor wire is connected to an electrical circuit from which, after calibration, a pressure reading may be taken.
Mode of operation
In order to understand the technology, consider that in a gas filled system there are four ways that a heated wire transfers heat to its surroundings.
Gas conduction at high pressure (r representing the distance from the heated wire)
Gas transport at low pressure
Thermal radiation
End losses through the support structures
A heated metal wire (sensor wire, or simply sensor) suspended in a gas will lose heat to the gas as its molecules collide with the wire and remove heat. If the gas pressure is reduced, the number of molecules present will fall proportionately and the wire will lose heat more slowly. Measuring the heat loss is an indirect indication of pressure.
Three possible measurement schemes can be used:
Keep the bridge voltage constant and measure the change in resistance as a function of pressure
Keep the current constant and measure the change in resistance as a function of pressure
Keep the temperature of the sensor wire constant and measure the voltage as a function of pressure
Note that keeping the temperature constant implies that the end losses (4.) and the thermal radiation losses (3.) are constant.
The electrical resistance of a wire varies with its temperature, so the resistance indicates the temperature of wire. In many systems, the wire is maintained at a constant resistance R by controlling the current I through the wire. The resistance can be set using a bridge circuit. The current required to achieve this balance is therefore a measure of the vacuum.
The gauge may be used for pressures from 0.5 Torr down to 1×10−4 Torr. Below 5×10−4 Torr, a Pirani gauge has only one significant digit of resolution. The thermal conductivity and heat capacity of the gas affect the readout from the meter, and therefore the apparatus may need calibrating before accurate readings are obtainable. For lower pressure measurement, the thermal conductivity of the gas becomes increasingly smaller and more difficult to measure accurately, and other instruments such as a Penning gauge or Bayard–Alpert gauge are used instead.
Pulsed Pirani gauge
A special form of the Pirani gauge is the pulsed Pirani vacuum gauge where the sensor wire is not operated at a constant temperature, but is cyclically heated up to a certain temperature threshold by an increasing voltage ramp. When the threshold is reached, the heating voltage is switched off and the sensor cools down again. The required heat-up time is used as a measure of pressure.
For adequately low pressure, the following first-order dynamic thermal response model relating supplied heating power and sensor temperature T(t) applies:
where and are specific heat and emissivity of the sensor wire (material properties), and are surface area and mass of the sensor wire, and and are constants determined for each sensor in calibration.
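As an illustration of such a first-order thermal response, the following Python sketch integrates a generic lumped heat balance for a heated wire; all parameter values and the loss terms are assumptions chosen for illustration rather than calibrated constants of any real gauge:

```python
def heat_up_time(pressure, threshold=80.0, ambient=20.0,
                 heat_capacity=1e-4, power=0.05, k_gas=1e-5, k_end=1e-4,
                 dt=1e-3):
    """Integrate a lumped first-order heat balance for a sensor wire.

    dT/dt = [power - (k_gas * pressure + k_end) * (T - ambient)] / heat_capacity

    k_gas * pressure models pressure-dependent gas conduction (low-pressure
    regime); k_end lumps the roughly constant end and radiation losses.
    Returns the time (s) needed to heat from ambient to the threshold.
    """
    T, t = ambient, 0.0
    while T < threshold and t < 10.0:
        loss = (k_gas * pressure + k_end) * (T - ambient)
        T += (power - loss) / heat_capacity * dt
        t += dt
    return t

for p in (0.01, 1.0, 50.0):   # pressures in arbitrary units
    print(p, round(heat_up_time(p), 3))
# Heat-up time grows with pressure; at low pressures the times converge,
# reflecting the gauge's loss of resolution in that regime.
```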
Advantages and disadvantages of the pulsed gauge
Advantages
Significantly better resolution in the range above 75 Torr.
The power consumption is drastically reduced compared to continuously operated Pirani gauges.
The gauge's thermal influence on the real measurement is lowered considerably due to the low temperature threshold of 80 °C and the ramp heating in pulsed mode.
The pulsed mode can be efficiently implemented using modern microprocessors.
Disadvantages
Increased calibration effort
Longer heat-up phase
Alternative
An alternative to the Pirani gauge is the thermocouple gauge, which works on the same principle of detecting thermal conductivity of the gas by a change in temperature. In the thermocouple gauge, the temperature is sensed by a thermocouple rather than by the change in resistance of the heated wire.
References
External links
http://homepages.thm.de/~hg8831/vakuumlabor/litera.htm
Vacuum gauges
Pressure gauges | Pirani gauge | [
"Physics",
"Technology",
"Engineering"
] | 1,034 | [
"Vacuum",
"Measuring instruments",
"Vacuum gauges",
"Vacuum systems",
"Pressure gauges",
"Matter"
] |
14,646,706 | https://en.wikipedia.org/wiki/Charge%20ordering | Charge ordering (CO) is a (first- or second-order) phase transition occurring mostly in strongly correlated materials such as transition metal oxides or organic conductors. Due to the strong interaction between electrons, charges are localized on different sites leading to a disproportionation and an ordered superlattice. It appears in different patterns ranging from vertical to horizontal stripes to a checkerboard–like pattern
, and it is not limited to the two-dimensional case. The charge order transition is accompanied by symmetry breaking and may lead to ferroelectricity. It is often found in close proximity to superconductivity and colossal magnetoresistance.
This long-range order phenomenon was first discovered in magnetite (Fe3O4) by Verwey in 1939.
He observed an increase of the electrical resistivity by two orders of magnitude at TCO=120K, suggesting a phase transition which is now well known as the Verwey transition. He was the first to propose the idea of an ordering process in this context. The charge ordered structure of magnetite was solved in 2011 by a group led by Paul Attfield with the results published in Nature. Periodic lattice distortions associated with charge order were later mapped in the manganite lattice to reveal striped domains containing topological disorder.
Theoretical description
The extended one-dimensional Hubbard model delivers a good description of the charge order transition with the on-site and nearest neighbor Coulomb repulsion U and V. It emerged that V is a crucial parameter and important for developing the charge order state. Further model calculations try to take the temperature and an interchain interaction into account.
The extended Hubbard model for a single chain, including the inter-site and on-site interactions V and U as well as the parameter \delta for a small dimerization which can typically be found in the (TMTTF)2X compounds, is presented as follows:

H = -\sum_{i,\sigma} t \,[1 + (-1)^i \delta]\, (c_{i,\sigma}^{\dagger} c_{i+1,\sigma} + c_{i+1,\sigma}^{\dagger} c_{i,\sigma}) + U \sum_i n_{i,\uparrow} n_{i,\downarrow} + V \sum_i n_i n_{i+1}

where t describes the transfer integral or the kinetic energy of the electron, and c_{i,\sigma}^{\dagger} and c_{i,\sigma} are the creation and annihilation operators, respectively, for an electron with spin \sigma at the i-th or (i+1)-th site. n_{i,\sigma} = c_{i,\sigma}^{\dagger} c_{i,\sigma} denotes the density operator, with n_i = n_{i,\uparrow} + n_{i,\downarrow}. For non-dimerized systems, \delta can be set to zero. Normally, the on-site Coulomb repulsion U stays unchanged; only t and V can vary with pressure.
Examples
Organic conductors
Organic conductors consist of donor and acceptor molecules building separated planar sheets or columns. The difference between the ionization energy of the donor and the electron affinity of the acceptor leads to a charge transfer and consequently to free carriers whose number is normally fixed. The carriers are delocalized throughout the crystal due to the overlap of the molecular orbitals, which is also responsible for the highly anisotropic conductivity. That is why a distinction is made between organic conductors of different dimensionality. They possess a huge variety of ground states, for instance charge ordering, spin-Peierls, spin-density wave, antiferromagnetic state, superconductivity, and charge-density wave, to name only some of them.
Quasi-one-dimensional organic conductors
The model system of one-dimensional conductors is the Bechgaard–Fabre salts family, (TMTTF)2X and (TMTSF)2X; in the latter, one sulfur is substituted by selenium, leading to a more metallic behavior over a wide temperature range and exhibiting no charge order. The TMTTF compounds, depending on the counterions X, show the conductivity of a semiconductor at room temperature and are expected to be more one-dimensional than (TMTSF)2X.
The transition temperature TCO for the TMTTF subfamily was registered over two orders of magnitude for the centrosymmetric anions X = Br, PF6, AsF6, SbF6 and the non-centrosymmetric anions X = BF4 and ReO4.
In the middle of the eighties, a new "structureless transition" was discovered by Coulon et al. conducting transport and thermopower measurements. They observed a sudden rise of the resistivity and the thermopower at TCO, while x-ray measurements showed no evidence for a change in the crystal symmetry or a formation of a superstructure. The transition was later confirmed by 13C-NMR and dielectric measurements.
Different measurements under pressure reveal a decrease of the transition temperature TCO with increasing pressure. According to the phase diagram of that family, an increasing pressure applied to the TMTTF compounds can be understood as a shift from the semiconducting state (at room temperature) to a higher dimensional and metallic state, as found for TMTSF compounds without a charge order state.
Quasi-two-dimensional organic conductors
A dimensional crossover can be induced not only by applying pressure, but also by substituting the donor molecules with other ones. From a historical point of view, the main aim was to synthesize an organic superconductor with a high TC. The key to reaching that aim was to increase the orbital overlap in two dimensions. With BEDT-TTF and its huge π-electron system, a new family of quasi-two-dimensional organic conductors was created, also exhibiting a great variety of phase diagrams and crystal structure arrangements.
At the turn of the 20th century, first NMR measurements on the θ-(BEDT-TTF)2RbZn(SCN)4 compound uncovered the known metal to insulator transition at TCO = 195 K as a charge order transition.
Transition metal oxides
The most prominent transition metal oxide revealing a CO transition is magnetite (Fe3O4), a mixed-valence oxide in which the iron atoms have a statistical distribution of Fe3+ and Fe2+ above the transition temperature. Below 122 K, the 2+ and 3+ species arrange themselves in a regular pattern, whereas above that transition temperature (also referred to as the Verwey temperature in this case) the thermal energy is large enough to destroy the order.
Alkali metal oxides
The alkali metal oxides rubidium sesquioxide (Rb4O6) and caesium sesquioxide (Cs4O6) display charge ordering.
Detection of charge order
NMR spectroscopy is a powerful tool to measure charge disproportionation. To apply this method to a given system, it has to be doped with NMR-active nuclei, for instance 13C, as is the case for the TMTTF compounds. The local probe nuclei are very sensitive to the charge on the molecule, which is observable in the Knight shift K and the chemical shift D. The Knight shift K is proportional to the spin susceptibility χSp on the molecule. The charge order or charge disproportionation appears as a splitting or broadening of certain features in the spectrum.
The X-ray diffraction technique allows the atomic positions to be determined, but the extinction effect hinders obtaining a high-resolution spectrum. In the case of the organic conductors, the charge per molecule is measured via the change in the length of the C=C double bonds in the TTF molecule. A further problem arising from irradiating the organic conductors with x-rays is the destruction of the CO state.
In organic molecules like TMTTF, TMTSF or BEDT-TTF, there are charge-sensitive vibrational modes that change their frequency depending on the local charge. Especially the C=C double bonds are quite sensitive to the charge. Whether a vibrational mode is infrared active or only visible in the Raman spectrum depends on its symmetry. In the case of BEDT-TTF, the most sensitive ones are the Raman active ν3, ν2 and the infrared out-of-phase mode ν27. Their frequency depends linearly on the charge per molecule, giving the opportunity to determine the degree of disproportionation.
The charge order transition is also a metal to insulator transition being observable in transport measurements as a sharp rise in the resistivity. Transport measurements are therefore a good tool to get first evidences of a possible charge order transition.
References
Electric and magnetic fields in matter
Phase transitions | Charge ordering | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,665 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
14,647,723 | https://en.wikipedia.org/wiki/Disclination | In crystallography, a disclination is a line defect in which there is compensation of an angular gap. They were first discussed by Vito Volterra in 1907, who provided an analysis of the elastic strains of a wedge disclination. By analogy to dislocations in crystals, the term, disinclination, was first used by Frederick Charles Frank and since then has been modified to its current usage, disclination. They have since been analyzed in some detail particularly by Roland deWit.
Disclinations are characterized by an angular vector (called a Frank vector) and the line of the disclination. When the Frank vector and the line are parallel, they are sometimes called wedge disclinations, which are common in fiveling nanoparticles. When the Frank vector and the line of the disclination are at right angles, they are called twist disclinations. As pointed out by John D. Eshelby, there is an intricate connection between disclinations and dislocations, with dislocation motion moving the position of a disclination.
Disclinations occur in many different materials, ranging from liquid crystals to nanoparticles and in elastically distorted materials.
Example in two dimensions
In 2D, disclinations and dislocations are point defects instead of line defects as in 3D. They are topological defects and play a central role in melting of 2D crystals within the KTHNY theory, based on two Kosterlitz–Thouless transitions.
Equally sized discs (spheres, particles, atoms) form a hexagonal crystal as dense packing in two dimensions. In such a crystal, each particle has six nearest neighbors. Local strain and twist (for example induced by thermal motion) can cause configurations where discs (or particles) have a coordination number different of six, typically five or seven. Disclinations are topological defects, therefore (starting from a hexagonal array) they can only be created in pairs. Ignoring surface/border effects, this implies that there are always as many 5-folded as 7-folded disclinations present in a perfectly plane 2D crystal. A "bound" pair of 5-7-folded disclinations is a dislocation. If myriad dislocations are thermally dissociated into isolated disclinations, then the monolayer of particles becomes an isotropic fluid in two dimensions. A 2D crystal is free of disclinations.
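As an illustration of the coordination-number picture above, the following Python sketch (not from the article) counts nearest neighbors from a Delaunay triangulation of a 2D point set; interior particles with five or seven neighbors flag candidate disclinations:

```python
import numpy as np
from scipy.spatial import Delaunay

def coordination_numbers(points):
    """Count Delaunay neighbors of each point in a 2D configuration.

    In a perfect hexagonal packing, interior points have 6 neighbors;
    counts of 5 or 7 flag candidate disclination cores (boundary points
    also deviate from 6 and should be excluded in a real analysis).
    """
    tri = Delaunay(points)
    neighbors = [set() for _ in range(len(points))]
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    return np.array([len(n) for n in neighbors])

# Example: a small hexagonal lattice with one particle removed as a defect
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
pts = np.array([i * a1 + j * a2 for i in range(10) for j in range(10)])
pts = np.delete(pts, 45, axis=0)
print(np.bincount(coordination_numbers(pts)))  # histogram of neighbor counts
```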
To transform a section of a hexagonal array into a 5-folded disclination (colored green in the figure), a triangular wedge of hexagonal elements (blue triangle) has to be removed; to create a 7-folded disclination (orange), an identical wedge must be inserted. The figure illustrates how disclinations destroy orientational order, while dislocations only destroy translational order in the far field (portions of the crystal far from the center of the disclination).
Disclinations are topological defects because they cannot be created locally by an affine transformation without cutting the hexagonal array outwards to infinity (or the border of a finite crystal). The undisturbed hexagonal crystal has a 60° symmetry, but when a wedge is removed to create a 5-folded disclination, the crystal symmetry is stretched to 72°; for a 7-folded disclination, it is compressed to about 51.4°. Thus, disclinations store elastic energy by disturbing the director field.
See also
References
Further reading
Crystallographic defects
Mechanics
Materials science
Condensed matter physics | Disclination | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 726 | [
"Applied and interdisciplinary physics",
"Crystallographic defects",
"Phases of matter",
"Materials science",
"Crystallography",
"Mechanics",
"Condensed matter physics",
"nan",
"Mechanical engineering",
"Materials degradation",
"Matter"
] |
14,650,127 | https://en.wikipedia.org/wiki/Zygmunt%20Zawirski | Zygmunt Zawirski (29 July 1882 – 2 April 1948) was a Polish philosopher and logician.
His main fields of study were philosophy of physics, history of science, multi-valued logic and the relation of multi-valued logic to the calculus of probability.
Biography
Zawirski was born on 29 July 1882 in the village of Berezowica Mała (Mala Berezovytsia) near Zbarazh (now Ukraine). In 1928 he became a professor of the Adam Mickiewicz University in Poznań and in 1937 professor of the Jagiellonian University in Kraków. In 1936 he became an editor of Kwartalnik Filozoficzny ("Philosophical Quarterly"). After 1945, he was president of the Krakowskie Towarzystwo Filozoficzne ("Kraków Philosophical Society").
He died on 2 April 1948 in Końskie, Poland.
Notable works
References
Further reading
1882 births
1948 deaths
Academic staff of Jagiellonian University
Historians of science
Mathematical logicians
Academic staff of Adam Mickiewicz University in Poznań
20th-century Polish historians
Polish male non-fiction writers
Polish logicians
People from Ternopil Oblast
20th-century Polish philosophers
Philosophers of physics | Zygmunt Zawirski | [
"Mathematics"
] | 253 | [
"Mathematical logic",
"Mathematical logicians"
] |
14,650,318 | https://en.wikipedia.org/wiki/Israeli%20Cassini%20Soldner | Israeli Cassini Soldner (ICS), commonly known as the Old Israeli Grid (OIG; Reshet Yisra'el Ha-Yeshana) is the old geographic coordinate system for Israel. The name is derived from the Cassini Soldner projection it uses and the fact that it is optimized for Israel. ICS has been mostly replaced by the new coordinate system Israeli Transverse Mercator (ITM), also known as the New Israeli Grid (NIG), but still referenced by older books and navigation software.
History
The Cassini Soldner projection was used by the British Mandate of Palestine, when it was called the Palestine grid. The Palestine grid reached as far south as Beer-Sheba. To avoid the existence of negative coordinates in the southern Negev, the False Northing of ICS was increased by 1000000. As a result, coordinates in the south of Israel are higher than 800000.
Examples
An ICS coordinate is generally given as a pair of two numbers (excluding any digits behind a decimal point which may be used in very precise surveying). The first number is always the Easting and the second is the Northing. The easting and northing are in metres from the false origin. The easting is always a 6 digit number while the northing has 6 or 7 digits.
The ICS coordinate for the Western Wall at Jerusalem is:
E 172249 m
N 1131586 m
The first figure is the easting and means that the location is 172,249 meters east from the false origin (along the X axis). The second figure is the northing and puts the location 1,131,586 meters north of the false origin (along the Y axis). Also notice how the easting in this example is indicated with an “E” and likewise an “N” for the northing. The fact that the coordinate is in meters is indicated by the lowercase m.
The table below shows the same coordinate in 3 different grids:
Grid parameters
The ICS coordinate system is defined by the following parameters:
Projection: Cassini Soldner
Reference ellipsoid: Clarke 80 Modified
a(m): 6378300.789
1/f: 293.466
Main datum point values:
Latitude of origin (D-M-S): 31 44 2.748999999990644
Longitude of origin (D-M-S): 35 12 43.490000000012970
Main point grid values:
False Easting (m): 170251.5549999999
False Northing(m): 1126867.909
Grid scale factor: 1
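For practical conversions, general coordinate-transformation libraries can handle ICS. The sketch below uses the pyproj library and assumes the EPSG code 28193 for the Israeli CS Grid; both the code choice and the printed result are illustrative and should be checked against the EPSG registry:

```python
from pyproj import Transformer

# EPSG:28193 is commonly listed as "Palestine 1923 / Israeli CS Grid" (ICS, assumed here);
# EPSG:4326 is WGS84 geographic coordinates.
to_wgs84 = Transformer.from_crs("EPSG:28193", "EPSG:4326", always_xy=True)

# Western Wall example coordinate from above: E 172249 m, N 1131586 m
lon, lat = to_wgs84.transform(172249, 1131586)
print(lat, lon)   # roughly 31.78 N, 35.23 E (approximate)
```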
References
Sources
Official ICS Grid Definition
MAPI (Mapping Center of Israel) official website (Hebrew).
Geography educational website of Haifa's university.
External links
MAPI (Mapping Center of Israel) official website (Hebrew).
Geography educational website of Haifa's university.
Geographic coordinate systems
Geography of Israel
Land surveying systems
Geodesy | Israeli Cassini Soldner | [
"Mathematics"
] | 624 | [
"Geographic coordinate systems",
"Applied mathematics",
"Geodesy",
"Coordinate systems"
] |
14,650,395 | https://en.wikipedia.org/wiki/Monadic%20second-order%20logic | In mathematical logic, monadic second-order logic (MSO) is the fragment of second-order logic where the second-order quantification is limited to quantification over sets. It is particularly important in the logic of graphs, because of Courcelle's theorem, which provides algorithms for evaluating monadic second-order formulas over graphs of bounded treewidth. It is also of fundamental importance in automata theory, where the Büchi–Elgot–Trakhtenbrot theorem gives a logical characterization of the regular languages.
Second-order logic allows quantification over predicates. However, MSO is the fragment in which second-order quantification is limited to monadic predicates (predicates having a single argument). This is often described as quantification over "sets" because monadic predicates are equivalent in expressive power to sets (the set of elements for which the predicate is true).
Variants
Monadic second-order logic comes in two variants. In the variant considered over structures such as graphs and in Courcelle's theorem, the formula may involve non-monadic predicates (in this case the binary edge predicate ), but quantification is restricted to be over monadic predicates only. In the variant considered in automata theory and the Büchi–Elgot–Trakhtenbrot theorem, all predicates, including those in the formula itself, must be monadic, with the exceptions of equality () and ordering () relations.
Computational complexity of evaluation
Existential monadic second-order logic (EMSO) is the fragment of MSO in which all quantifiers over sets must be existential quantifiers, outside of any other part of the formula. The first-order quantifiers are not restricted. By analogy to Fagin's theorem, according to which existential (non-monadic) second-order logic captures precisely the descriptive complexity of the complexity class NP, the class of problems that may be expressed in existential monadic second-order logic has been called monadic NP. The restriction to monadic logic makes it possible to prove separations in this logic that remain unproven for non-monadic second-order logic. For instance, in the logic of graphs, testing whether a graph is disconnected belongs to monadic NP, as the test can be represented by a formula that describes the existence of a proper subset of vertices with no edges connecting them to the rest of the graph; however, the complementary problem, testing whether a graph is connected, does not belong to monadic NP. The existence of an analogous pair of complementary problems, only one of which has an existential second-order formula (without the restriction to monadic formulas) is equivalent to the inequality of NP and coNP, an open question in computational complexity.
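For illustration, one way to express "the graph is disconnected" in existential MSO over a binary edge predicate E (a sketch; the formula can be written in several equivalent ways):

```latex
% "G is disconnected": some nonempty, proper vertex set S has no edge to its complement
\exists S \,\bigl(
    \exists x\, S(x) \;\wedge\; \exists y\, \lnot S(y) \;\wedge\;
    \forall x\, \forall y\, \bigl( E(x,y) \rightarrow ( S(x) \leftrightarrow S(y) ) \bigr)
\bigr)
```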
By contrast, when we wish to check whether a Boolean MSO formula is satisfied by an input finite tree, this problem can be solved in linear time in the tree, by translating the Boolean MSO formula to a tree automaton and evaluating the automaton on the tree. In terms of the query, however, the complexity of this process is generally nonelementary. Thanks to Courcelle's theorem, we can also evaluate a Boolean MSO formula in linear time on an input graph if the treewidth of the graph is bounded by a constant.
For MSO formulas that have free variables, when the input data is a tree or has bounded treewidth, there are efficient enumeration algorithms to produce the set of all solutions, ensuring that the input data is preprocessed in linear time and that each solution is then produced in a delay linear in the size of each solution, i.e., constant-delay in the common case where all free variables of the query are first-order variables (i.e., they do not represent sets). There are also efficient algorithms for counting the number of solutions of the MSO formula in that case.
Decidability and complexity of satisfiability
The satisfiability problem for monadic second-order logic is undecidable in general because this logic subsumes first-order logic.
The monadic second-order theory of the infinite complete binary tree, called S2S, is decidable. As a consequence of this result, the following theories are decidable:
The monadic second-order theory of trees.
The monadic second-order theory of the natural numbers under successor (S1S).
WS2S and WS1S, which restrict quantification to finite subsets (weak monadic second-order logic). Note that for binary numbers (represented by subsets), addition is definable even in WS1S.
For each of these theories (S2S, S1S, WS2S, WS1S), the complexity of the decision problem is nonelementary.
Use of satisfiability of MSO on trees in verification
Monadic second-order logic of trees has applications in formal verification. Decision procedures for MSO satisfiability have been used to prove properties of programs manipulating linked data structures, as a form of shape analysis, and for symbolic reasoning in hardware verification.
See also
Descriptive complexity theory
Monadic predicate calculus
Second-order logic
References
Mathematical logic | Monadic second-order logic | [
"Mathematics"
] | 1,122 | [
"Mathematical logic"
] |
169,552 | https://en.wikipedia.org/wiki/Fifth%20force | In physics, a fifth force refers to a hypothetical fundamental interaction (also known as fundamental force) beyond the four known interactions in nature: gravitational, electromagnetic, strong nuclear, and weak nuclear forces.
Some speculative theories have proposed a fifth force to explain various anomalous observations that do not fit existing theories. The specific characteristics of a putative fifth force depend on which hypothesis is being advanced. No evidence to support these models has been found.
The term is also used as "the Fifth force" when referring to a specific theory advanced by Ephraim Fischbach in 1986 to explain experimental deviations in the theory of gravity. Later analysis failed to reproduce those deviations.
History
The term fifth force originates in a 1986 paper by Ephraim Fischbach et al. who reanalyzed the data from the Eötvös experiment of Loránd Eötvös from earlier in the century; the reanalysis found a distance dependence to gravity that deviates from the inverse square law.
The reanalysis was sparked by theoretical work in 1971 by Fujii proposing a model that changes the distance dependence with a Yukawa potential-like term:

$$V(r) = -\frac{G\, m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right).$$

The parameters $\alpha$ and $\lambda$ characterize the strength and the range of the interaction. Fischbach's paper found a strength around 1% of gravity and a range of a few hundred meters.
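As a rough numerical illustration (not from the source; the parameter values are placeholders of the order quoted above), the fractional deviation of such a potential from pure Newtonian gravity is simply $\alpha\, e^{-r/\lambda}$ and can be evaluated directly:

```python
import math

def yukawa_deviation(r, alpha=0.01, lam=200.0):
    """Fractional deviation from Newtonian gravity for the modified potential
    V(r) = -(G m1 m2 / r) * (1 + alpha * exp(-r / lam)).
    alpha ~ 1% and lam ~ 200 m are placeholder values of the order quoted above."""
    return alpha * math.exp(-r / lam)

for r in (10.0, 100.0, 1000.0, 10000.0):   # distances in metres
    print(f"r = {r:8.0f} m   deviation = {yukawa_deviation(r):.2e}")
```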
The effect of this potential can be described equivalently as the exchange of vector and/or scalar bosons, that is, as predicting as-yet-undetected new particles.
However, many subsequent attempts to reproduce the deviations have failed.
Theory
Theoretical proposals for a fifth force are driven by inconsistencies between the existing models of general relativity and quantum field theory, as well as by the hierarchy problem and the cosmological constant problem. These issues suggest the possibility of corrections to the gravitational potential at short distances.
The accelerating expansion of the universe has been attributed to a form of energy called dark energy. Some physicists speculate that a form of dark energy called quintessence could be a fifth force.
Experimental approaches
There are at least three kinds of searches that can be undertaken, which depend on the kind of force being considered, and its range.
Equivalence principle
One way to search for a fifth force is with tests of the strong equivalence principle, one of the most powerful tests of general relativity, also known as Einstein's theory of gravity. Alternative theories of gravity, such as Brans–Dicke theory, postulate a fifth force, possibly one with infinite range. This is because gravitational interactions, in theories other than general relativity, have degrees of freedom other than the "metric", which dictates the curvature of space, and different kinds of degrees of freedom produce different effects. For example, a scalar field cannot produce the bending of light rays.
The fifth force would manifest itself in an effect on solar system orbits, called the Nordtvedt effect. This is tested with the Lunar Laser Ranging experiment and very-long-baseline interferometry.
Extra dimensions
Another kind of fifth force, which arises in Kaluza–Klein theory, where the universe has extra dimensions, or in supergravity or string theory, is the Yukawa force, which is transmitted by a light scalar field (i.e. a scalar field with a long Compton wavelength, which determines the range). This has attracted much recent interest, as a theory of supersymmetric large extra dimensions with a size slightly below a millimetre has prompted an experimental effort to test gravity on very small scales. This requires extremely sensitive experiments which search for a deviation from the inverse-square law of gravity over a range of distances. Essentially, they are looking for signs that the Yukawa interaction is engaging at a certain length scale.
Australian researchers, attempting to measure the gravitational constant deep in a mine shaft, found a discrepancy between the predicted and measured value, with the measured value being two percent too small. They concluded that the results may be explained by a repulsive fifth force with a range from a few centimetres to a kilometre. Similar experiments have been carried out on board a submarine, USS Dolphin (AGSS-555), while deeply submerged. A further experiment measuring the gravitational constant in a deep borehole in the Greenland ice sheet found discrepancies of a few percent, but it was not possible to eliminate a geological source for the observed signal.
Earth's mantle
Another experiment uses the Earth's mantle as a giant particle detector, focusing on geoelectrons.
Cepheid variables
Jain et al. (2012) examined existing data on the rate of pulsation of over a thousand cepheid variable stars in 25 galaxies. Theory suggests that the rate of cepheid pulsation in galaxies screened from a hypothetical fifth force by neighbouring clusters, would follow a different pattern from cepheids that are not screened. They were unable to find any variation from Einstein's theory of gravity.
Other approaches
Some experiments used a lake plus a tall tower. A comprehensive review by Ephraim Fischbach and Carrick Talmadge suggested there is no compelling evidence for the fifth force, though scientists still search for it. The Fischbach–Talmadge article was written in 1992, and since then, other evidence has come to light that may indicate a fifth force.
The above experiments search for a fifth force that is, like gravity, independent of the composition of an object, so all objects experience the force in proportion to their masses. Forces that depend on the composition of an object can be very sensitively tested by torsion balance experiments of a type invented by Loránd Eötvös. Such forces may depend, for example, on the ratio of protons to neutrons in an atomic nucleus, nuclear spin, or the relative amount of different kinds of binding energy in a nucleus (see the semi-empirical mass formula). Searches have been done from very short ranges, to municipal scales, to the scale of the Earth, the Sun, and dark matter at the center of the galaxy.
Claims of new particles
In 2015, Attila Krasznahorkay at ATOMKI, the Hungarian Academy of Sciences's Institute for Nuclear Research in Debrecen, Hungary, and his colleagues posited the existence of a new, light boson only 34 times heavier than the electron (17 MeV). In an effort to find a dark photon, the Hungarian team fired protons at thin targets of lithium-7, which created unstable beryllium-8 nuclei that then decayed and ejected pairs of electrons and positrons. Excess decays were observed at an opening angle of 140° between the emitted electrons and positrons, and a combined energy of 17 MeV, which indicated that a small fraction of beryllium-8 will shed excess energy in the form of a new particle.
In November 2019, Krasznahorkay announced that he and his team at ATOMKI had successfully observed the same anomalies in the decay of stable helium atoms as had been observed in beryllium-8, strengthening the case for the X17 particle's existence.
Feng et al. (2016) proposed that a protophobic (i.e. "proton-ignoring") X-boson with a mass of 16.7 MeV with suppressed couplings to protons relative to neutrons and electrons and femtometer range could explain the data. The force may explain the muon anomaly and provide a dark matter candidate. Several research experiments are underway to attempt to validate or refute these results.
See also
References
Force | Fifth force | [
"Physics",
"Mathematics"
] | 1,533 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
170,165 | https://en.wikipedia.org/wiki/Fermi%20energy | The Fermi energy is a concept in quantum mechanics usually referring to the energy difference between the highest and lowest occupied single-particle states in a quantum system of non-interacting fermions at absolute zero temperature.
In a Fermi gas, the lowest occupied state is taken to have zero kinetic energy, whereas in a metal, the lowest occupied state is typically taken to mean the bottom of the conduction band.
The term "Fermi energy" is often used to refer to a different yet closely related concept, the Fermi level (also called electrochemical potential).
There are a few key differences between the Fermi level and Fermi energy, at least as they are used in this article:
The Fermi energy is only defined at absolute zero, while the Fermi level is defined for any temperature.
The Fermi energy is an energy difference (usually corresponding to a kinetic energy), whereas the Fermi level is a total energy level including kinetic energy and potential energy.
The Fermi energy can only be defined for non-interacting fermions (where the potential energy or band edge is a static, well defined quantity), whereas the Fermi level remains well defined even in complex interacting systems, at thermodynamic equilibrium.
Since the Fermi level in a metal at absolute zero is the energy of the highest occupied single particle state,
then the Fermi energy in a metal is the energy difference between the Fermi level and lowest occupied single-particle state, at zero-temperature.
Context
In quantum mechanics, a group of particles known as fermions (for example, electrons, protons and neutrons) obey the Pauli exclusion principle. This states that two fermions cannot occupy the same quantum state. Since an idealized non-interacting Fermi gas can be analyzed in terms of single-particle stationary states, we can thus say that two fermions cannot occupy the same stationary state. These stationary states will typically be distinct in energy. To find the ground state of the whole system, we start with an empty system, and add particles one at a time, consecutively filling up the unoccupied stationary states with the lowest energy. When all the particles have been put in, the Fermi energy is the kinetic energy of the highest occupied state.
As a consequence, even if we have extracted all possible energy from a Fermi gas by cooling it to near absolute zero temperature, the fermions are still moving around at a high speed. The fastest ones are moving at a velocity corresponding to a kinetic energy equal to the Fermi energy. This speed is known as the Fermi velocity. Only when the temperature exceeds the related Fermi temperature, do the particles begin to move significantly faster than at absolute zero.
The Fermi energy is an important concept in the solid state physics of metals and superconductors. It is also a very important quantity in the physics of quantum liquids like low temperature helium (both normal and superfluid 3He), and it is quite important to nuclear physics and to understanding the stability of white dwarf stars against gravitational collapse.
Formula and typical values
The Fermi energy for a three-dimensional, non-relativistic, non-interacting ensemble of identical spin-1/2 fermions is given by

$$E_F = \frac{\hbar^2}{2m_0}\left(\frac{3\pi^2 N}{V}\right)^{2/3},$$

where N is the number of particles, m0 the rest mass of each fermion, V the volume of the system, and $\hbar$ the reduced Planck constant.
Metals
Under the free electron model, the electrons in a metal can be considered to form a Fermi gas. The number density of conduction electrons in metals ranges between approximately 1028 and 1029 electrons/m3, which is also the typical density of atoms in ordinary solid matter. This number density produces a Fermi energy of the order of 2 to 10 electronvolts.
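A minimal sketch (illustrative, not from the source) checks the quoted 2 to 10 eV range by evaluating the free-electron formula for densities in the range given above:

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J s
M_E  = 9.1093837015e-31   # electron rest mass, kg
EV   = 1.602176634e-19    # joules per electronvolt

def fermi_energy_eV(n):
    """Fermi energy (eV) of a free-electron gas with number density n (m^-3)."""
    return HBAR**2 / (2 * M_E) * (3 * math.pi**2 * n) ** (2 / 3) / EV

# Conduction-electron densities of the order quoted above (1e28 to 1e29 m^-3).
for n in (1e28, 8.5e28, 1e29):
    print(f"n = {n:.1e} m^-3  ->  E_F = {fermi_energy_eV(n):.1f} eV")
```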
White dwarfs
Stars known as white dwarfs have mass comparable to the Sun, but have about a hundredth of its radius. The high densities mean that the electrons are no longer bound to single nuclei and instead form a degenerate electron gas. Their Fermi energy is about 0.3 MeV.
Nucleus
Another typical example is that of the nucleons in the nucleus of an atom. The radius of the nucleus admits deviations, so a typical value for the Fermi energy is usually given as 38 MeV.
Related quantities
Using the definition of the Fermi energy given above, various related quantities can be useful.
The Fermi temperature is defined as

$$T_F = \frac{E_F}{k_B},$$

where $k_B$ is the Boltzmann constant, and $E_F$ the Fermi energy. The Fermi temperature can be thought of as the temperature at which thermal effects are comparable to quantum effects associated with Fermi statistics. The Fermi temperature for a metal is a couple of orders of magnitude above room temperature.
Other quantities defined in this context are the Fermi momentum

$$p_F = \sqrt{2 m_0 E_F}$$

and the Fermi velocity

$$v_F = \frac{p_F}{m_0}.$$

These quantities are respectively the momentum and group velocity of a fermion at the Fermi surface.
The Fermi momentum can also be described as

$$p_F = \hbar k_F,$$

where $k_F = (3\pi^2 n)^{1/3}$, called the Fermi wavevector, is the radius of the Fermi sphere, and $n = N/V$ is the electron density.
These quantities may not be well-defined in cases where the Fermi surface is non-spherical.
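Continuing the earlier sketch under the same free-electron assumptions (values are illustrative, not from the source), the related quantities follow directly from the density:

```python
import math

HBAR = 1.054571817e-34    # J s
M_E  = 9.1093837015e-31   # kg
K_B  = 1.380649e-23       # J/K

def fermi_quantities(n):
    """Fermi wavevector, momentum, velocity, energy and temperature for a
    free-electron gas of number density n (m^-3)."""
    k_f = (3 * math.pi**2 * n) ** (1 / 3)   # Fermi wavevector, 1/m
    p_f = HBAR * k_f                        # Fermi momentum, kg m/s
    v_f = p_f / M_E                         # Fermi velocity, m/s
    e_f = p_f**2 / (2 * M_E)                # Fermi energy, J
    t_f = e_f / K_B                         # Fermi temperature, K
    return k_f, p_f, v_f, e_f, t_f

k_f, p_f, v_f, e_f, t_f = fermi_quantities(8.5e28)    # a typical metallic density
print(f"v_F ~ {v_f:.2e} m/s,  T_F ~ {t_f:.2e} K")
```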
See also
Fermi–Dirac statistics: the distribution of electrons over stationary states for non-interacting fermions at non-zero temperature.
Fermi level
Quasi Fermi level
Notes
References
Further reading
Condensed matter physics
Fermi–Dirac statistics | Fermi energy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,094 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
170,429 | https://en.wikipedia.org/wiki/Isotropic%20etching | In semiconductor manufacturing, isotropic etching is a method commonly used to remove material from a substrate via a chemical process using an etchant substance. The etchant may be in liquid-, gas- or plasma-phase, although liquid etchants such as buffered hydrofluoric acid (BHF) for silicon dioxide etching are more often used. Unlike anisotropic etching, isotropic etching does not etch in a single direction, but rather etches in multiple directions within the substrate. Any horizontal component of the etch direction may therefore result in undercutting of patterned areas, and significant changes to device characteristics. Isotropic etching may occur unavoidably, or it may be desirable for process reasons.
References
Semiconductors | Isotropic etching | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 162 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
171,104 | https://en.wikipedia.org/wiki/Chitin | Chitin (C8H13O5N)n ( ) is a long-chain polymer of N-acetylglucosamine, an amide derivative of glucose. Chitin is the second most abundant polysaccharide in nature (behind only cellulose); an estimated 1 billion tons of chitin are produced each year in the biosphere. It is a primary component of cell walls in fungi (especially filamentous and mushroom-forming fungi), the exoskeletons of arthropods such as crustaceans and insects, the radulae, cephalopod beaks and gladii of molluscs and in some nematodes and diatoms.
It is also synthesised by at least some fish and lissamphibians. Commercially, chitin is extracted from the shells of crabs, shrimps, shellfish and lobsters, which are major by-products of the seafood industry. The structure of chitin is comparable to cellulose, forming crystalline nanofibrils or whiskers. It is functionally comparable to the protein keratin. Chitin has proved useful for several medicinal, industrial and biotechnological purposes.
Etymology
The English word "chitin" comes from the French word chitine, which was derived in 1821 from the Greek word χιτών (khitōn) meaning covering.
A similar word, "chiton", refers to a marine animal with a protective shell.
Chemistry, physical properties and biological function
The structure of chitin was determined by Albert Hofmann in 1929. Hofmann hydrolyzed chitin using a crude preparation of the enzyme chitinase, which he obtained from the snail Helix pomatia.
Chitin is a modified polysaccharide that contains nitrogen; it is synthesized from units of N-acetyl-D-glucosamine (to be precise, 2-(acetylamino)-2-deoxy-D-glucose). These units form covalent β-(1→4)-linkages (like the linkages between glucose units forming cellulose). Therefore, chitin may be described as cellulose with one hydroxyl group on each monomer replaced with an acetyl amine group. This allows for increased hydrogen bonding between adjacent polymers, giving the chitin-polymer matrix increased strength.
In its pure, unmodified form, chitin is translucent, pliable, resilient, and quite tough. In most arthropods, however, it is often modified, occurring largely as a component of composite materials, such as in sclerotin, a tanned proteinaceous matrix, which forms much of the exoskeleton of insects. Combined with calcium carbonate, as in the shells of crustaceans and molluscs, chitin produces a much stronger composite. This composite material is much harder and stiffer than pure chitin, and is tougher and less brittle than pure calcium carbonate. Another difference between pure and composite forms can be seen by comparing the flexible body wall of a caterpillar (mainly chitin) to the stiff, light elytron of a beetle (containing a large proportion of sclerotin).
In butterfly wing scales, chitin is organized into stacks of gyroids constructed of chitin photonic crystals that produce various iridescent colors serving phenotypic signaling and communication for mating and foraging. The elaborate chitin gyroid construction in butterfly wings creates a model of optical devices having potential for innovations in biomimicry. Scarab beetles in the genus Cyphochilus also utilize chitin to form extremely thin scales (five to fifteen micrometres thick) that diffusely reflect white light. These scales are networks of randomly ordered filaments of chitin with diameters on the scale of hundreds of nanometres, which serve to scatter light. The multiple scattering of light is thought to play a role in the unusual whiteness of the scales. In addition, some social wasps, such as Protopolybia chartergoides, orally secrete material containing predominantly chitin to reinforce the outer nest envelopes, composed of paper.
Chitosan is produced commercially by deacetylation of chitin by treatment with sodium hydroxide. Chitosan has a wide range of biomedical applications including wound healing, drug delivery and tissue engineering. Due to its specific intermolecular hydrogen bonding network, dissolving chitin in water is very difficult. Chitosan (with a degree of deacetylation of more than ~28%), on the other hand, can be dissolved in dilute acidic aqueous solutions below a pH of 6.0 such as acetic, formic and lactic acids. Chitosan with a degree of deacetylation greater than ~49% is soluble in water.
Humans and other mammals
Humans and other mammals have chitinase and chitinase-like proteins that can degrade chitin; they also possess several immune receptors that can recognize chitin and its degradation products, initiating an immune response.
Chitin is sensed mostly in the lungs or gastrointestinal tract where it can activate the innate immune system through eosinophils or macrophages, as well as an adaptive immune response through T helper cells. Keratinocytes in skin can also react to chitin or chitin fragments.
Plants
Plants also have receptors that can cause a response to chitin, namely chitin elicitor receptor kinase 1 and chitin elicitor-binding protein. The first chitin receptor was cloned in 2006. When the receptors are activated by chitin, genes related to plant defense are expressed, and jasmonate hormones are activated, which in turn activate systemic defenses. Commensal fungi have ways to interact with the host immune response that are not yet well understood.
Some pathogens produce chitin-binding proteins that mask the chitin they shed from these receptors. Zymoseptoria tritici is an example of a fungal pathogen that has such blocking proteins; it is a major pest in wheat crops.
Fossil record
Chitin was probably present in the exoskeletons of Cambrian arthropods such as trilobites. The oldest preserved (intact) chitin samples thus far reported are dated to the Oligocene and come from specimens encased in amber where the chitin has not completely degraded.
Uses
Agriculture
Chitin is a good inducer of plant defense mechanisms for controlling diseases. It has potential for use as a soil fertilizer or conditioner to improve fertility and plant resilience that may enhance crop yields.
Industrial
Chitin is used in many industrial processes. Examples of the potential uses of chemically modified chitin in food processing include the formation of edible films and as an additive to thicken and stabilize foods and food emulsions. Processes to size and strengthen paper employ chitin and chitosan.
Research
How chitin interacts with the immune system of plants and animals has been an active area of research, including the identity of key receptors with which chitin interacts, whether the size of chitin particles is relevant to the kind of immune response triggered, and mechanisms by which immune systems respond. Chitin is deacetylated chemically or enzymatically to produce chitosan, a highly biocompatible polymer which has found a wide range of applications in the biomedical industry. Chitin and chitosan have been explored as vaccine adjuvants due to their ability to stimulate an immune response.
Chitin and chitosan are under development as scaffolds in studies of how tissue grows and how wounds heal, and in efforts to invent better bandages, surgical thread, and materials for allotransplantation. Sutures made of chitin have been experimentally developed, but their lack of elasticity and problems making thread have prevented commercial success so far.
Chitosan has been demonstrated and proposed to make a reproducible form of biodegradable plastic. Chitin nanofibers are extracted from crustacean waste and mushrooms for possible development of products in tissue engineering, drug delivery and medicine.
Chitin has been proposed for use in building structures, tools, and other solid objects from a composite material, combining chitin with Martian regolith. To build this, the biopolymers in the chitin are suggested as the binder for the regolith aggregate to form a concrete-like composite material. The authors believe that waste materials from food production (e.g. scales from fish, exoskeletons from crustaceans and insects, etc.) could be put to use as feedstock for manufacturing processes.
See also
Chitobiose
Lorica
Sporopollenin
Tectin
References
External links
Acetamides
Biomolecules
Biopesticides
Polysaccharides | Chitin | [
"Chemistry",
"Biology"
] | 1,830 | [
"Carbohydrates",
"Natural products",
"Biochemistry",
"Organic compounds",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Polysaccharides"
] |
171,317 | https://en.wikipedia.org/wiki/Countercurrent%20exchange | Countercurrent exchange is a mechanism between two flowing bodies flowing in opposite directions to each other, in which there is a transfer of some property, usually heat or some chemical. The flowing bodies can be liquids, gases, or even solid powders, or any combination of those. For example, in a distillation column, the vapors bubble up through the downward flowing liquid while exchanging both heat and mass. It occurs in nature and is mimicked in industry and engineering. It is a kind of exchange using counter flow arrangement.
The maximum amount of heat or mass transfer that can be obtained is higher with countercurrent than co-current (parallel) exchange because countercurrent maintains a slowly declining difference or gradient (usually temperature or concentration difference). In cocurrent exchange the initial gradient is higher but falls off quickly, leading to wasted potential. For example, in the adjacent diagram, the fluid being heated (exiting top) has a higher exiting temperature than the cooled fluid (exiting bottom) that was used for heating. With cocurrent or parallel exchange the heated and cooled fluids can only approach one another. The result is that countercurrent exchange can achieve a greater amount of heat or mass transfer than parallel under otherwise similar conditions.
Countercurrent exchange when set up in a circuit or loop can be used for building up concentrations, heat, or other properties of flowing liquids. Specifically when set up in a loop with a buffering liquid between the incoming and outgoing fluid running in a circuit, and with active transport pumps on the outgoing fluid's tubes, the system is called a countercurrent multiplier, enabling a multiplied effect of many small pumps to gradually build up a large concentration in the buffer liquid.
Other countercurrent exchange circuits where the incoming and outgoing fluids touch each other are used for retaining a high concentration of a dissolved substance or for retaining heat, or for allowing the external buildup of the heat or concentration at one point in the system.
Countercurrent exchange circuits or loops are found extensively in nature, specifically in biologic systems. In vertebrates, they are called a rete mirabile, originally the name of an organ in fish gills for absorbing oxygen from the water. It is mimicked in industrial systems. Countercurrent exchange is a key concept in chemical engineering thermodynamics and manufacturing processes, for example in extracting sucrose from sugar beet roots.
Countercurrent multiplication is a similar but distinct concept in which liquid moves in a loop, the two limbs running in opposite directions along a long length with an intermediate zone between them. The tube leading to the loop passively builds up a gradient of heat (or cooling) or of solvent concentration, while the returning tube has a constant small pumping action all along it, so that a gradual intensification of the heat or concentration is created towards the loop. Countercurrent multiplication has been found in the kidneys as well as in many other biological organs.
Three current exchange systems
Countercurrent exchange and cocurrent exchange are two mechanisms used to transfer some property of a fluid from one flowing current of fluid to another across a barrier allowing one way flow of the property between them. The property transferred could be heat, concentration of a chemical substance, or other properties of the flow.
When heat is transferred, a thermally-conductive membrane is used between the two tubes, and when the concentration of a chemical substance is transferred a semipermeable membrane is used.
Cocurrent flow—half transfer
In the cocurrent flow exchange mechanism, the two fluids flow in the same direction.
As the cocurrent and countercurrent exchange mechanisms diagram showed, a cocurrent exchange system has a variable gradient over the length of the exchanger. With equal flows in the two tubes, this method of exchange is only capable of moving half of the property from one flow to the other, no matter how long the exchanger is.
If each stream changes its property to be 50% closer to that of the opposite stream's inlet condition, exchange will stop when the point of equilibrium is reached, and the gradient has declined to zero. In the case of unequal flows, the equilibrium condition will occur somewhat closer to the conditions of the stream with the higher flow.
Cocurrent flow examples
A cocurrent heat exchanger is an example of a cocurrent flow exchange mechanism. Two tubes have a liquid flowing in the same direction: one starts off hot, the second cold. A thermoconductive membrane or an open section allows heat transfer between the two flows.
The hot fluid heats the cold one, and the cold fluid cools down the warm one. The result is thermal equilibrium: both fluids end up at around the same temperature, almost exactly midway between the two original temperatures. At the input end there is a large temperature difference and much heat transfer; at the output end there is a very small temperature difference (both fluids are at, or close to, the same temperature) and very little heat transfer, if any at all. If the equilibrium—where both tubes are at the same temperature—is reached before the exit of the liquid from the tubes, no further heat transfer will be achieved along the remaining length of the tubes.
A similar example is the cocurrent concentration exchange. The system consists of two tubes, one with brine (concentrated saltwater), the other with freshwater (which has a low concentration of salt in it), and a semi permeable membrane which allows only water to pass between the two, in an osmotic process. Many of the water molecules pass from the freshwater flow in order to dilute the brine, while the concentration of salt in the freshwater constantly grows (since the salt is not leaving this flow, while water is). This will continue, until both flows reach a similar dilution, with a concentration somewhere close to midway between the two original dilutions. Once that happens, there will be no more flow between the two tubes, since both are at a similar dilution and there is no more osmotic pressure.
Countercurrent flow—almost full transfer
In countercurrent flow, the two flows move in opposite directions.
Two tubes have a liquid flowing in opposite directions, transferring a property from one tube to the other. For example, this could be transferring heat from a hot flow of liquid to a cold one, or transferring the concentration of a dissolved solute from a high concentration flow of liquid to a low concentration flow.
The counter-current exchange system can maintain a nearly constant gradient between the two flows over their entire length of contact. With a sufficiently long length and a sufficiently low flow rate this can result in almost all of the property transferred. So, for example, in the case of heat exchange, the exiting liquid will be almost as hot as the original incoming liquid's heat.
Countercurrent flow examples
In a countercurrent heat exchanger, the hot fluid becomes cold, and the cold fluid becomes hot.
In this example, hot water enters the top pipe at its input temperature. It warms water in the bottom pipe, which has already been warmed along the way to almost that same temperature. A minute heat difference still exists, and a small amount of heat is transferred, so that the water leaving the bottom pipe is close to the hot input temperature. Because the hot input is at its maximum temperature, and the water exiting the bottom pipe is nearly at that temperature but not quite, the water in the top pipe can warm the one in the bottom pipe to nearly its own temperature. At the cold end—the water exit from the top pipe—the cold water entering the bottom pipe is still at its cold input temperature, so it can extract the last of the heat from the now-cooled hot water in the top pipe, bringing its temperature down nearly to the level of the cold input fluid.
The result is that the top pipe, which received hot water, now has cold water leaving it at nearly the cold input temperature, while the bottom pipe, which received cold water, is now emitting hot water at close to the hot input temperature. In effect, most of the heat was transferred.
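A minimal numerical comparison (not from the source; it assumes two streams with equal heat-capacity rates and uses the standard effectiveness–NTU relations for that special case) shows how the cocurrent arrangement caps at half of the possible transfer while the countercurrent one approaches full transfer as the exchanger is made larger:

```python
import math

def outlet_temps(t_hot_in, t_cold_in, ntu, counterflow=True):
    """Outlet temperatures of a two-stream heat exchanger with equal heat
    capacity rates, using the standard effectiveness-NTU relations for that
    special case.  ntu measures the 'size' of the exchanger (UA / C)."""
    if counterflow:
        eff = ntu / (1 + ntu)                    # countercurrent, equal streams
    else:
        eff = (1 - math.exp(-2 * ntu)) / 2       # cocurrent (parallel), equal streams
    dt = eff * (t_hot_in - t_cold_in)
    return t_hot_in - dt, t_cold_in + dt         # (hot out, cold out)

for ntu in (1, 5, 20):
    co = outlet_temps(80.0, 20.0, ntu, counterflow=False)
    cc = outlet_temps(80.0, 20.0, ntu, counterflow=True)
    print(f"NTU = {ntu:2d}   cocurrent out: {co[0]:5.1f} / {co[1]:5.1f}"
          f"   countercurrent out: {cc[0]:5.1f} / {cc[1]:5.1f}")
```

With a large exchanger (high NTU) the cocurrent outlets settle at the midpoint of the two inlet temperatures, while the countercurrent outlets approach the opposite stream's inlet temperature, matching the behaviour described above.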
Conditions for higher transfer results
Nearly complete transfer in systems implementing countercurrent exchange, is only possible if the two flows are, in some sense, "equal".
For a maximum transfer of substance concentration, an equal flowrate of solvents and solutions is required. For maximum heat transfer, the average specific heat capacity and the mass flow rate must be the same for each stream. If the two flows are not equal, for example if heat is being transferred from water to air or vice versa, then, similar to cocurrent exchange systems, a variation in the gradient is expected because of a buildup of the property not being transferred properly.
Countercurrent exchange in biological systems
Countercurrent exchange is used extensively in biological systems for a wide variety of purposes. For example, fish use it in their gills to transfer oxygen from the surrounding water into their blood, and birds use a countercurrent heat exchanger between blood vessels in their legs to keep heat concentrated within their bodies. In vertebrates, this type of organ is referred to as a rete mirabile (originally the name of the organ in the fish gills). Mammalian kidneys use countercurrent exchange to remove water from urine so the body can retain water used to move the nitrogenous waste products (see countercurrent multiplier).
Countercurrent multiplication loop
A countercurrent multiplication loop is a system where fluid flows in a loop so that the entrance and exit are at similar low concentration of a dissolved substance but at the far end of the loop there is a high concentration of that substance. A buffer liquid between the incoming and outgoing tubes receives the concentrated substance. The incoming and outgoing tubes do not touch each other.
The system allows the buildup of a high concentration gradually, by allowing a natural buildup of concentration towards the tip inside the in-going tube, (for example using osmosis of water out of the input pipe and into the buffer fluid), and the use of many active transport pumps each pumping only against a very small gradient, during the exit from the loop, returning the concentration inside the output pipe to its original concentration.
The incoming flow starting at a low concentration has a semipermeable membrane with water passing to the buffer liquid via osmosis at a small gradient. There is a gradual buildup of concentration inside the loop until the loop tip where it reaches its maximum.
Theoretically a similar system could exist or be constructed for heat exchange.
In the example shown in the image, water enters at 299 mg/L (NaCl / H2O). Water passes because of a small osmotic pressure to the buffer liquid in this example at 300 mg/L (NaCl / H2O). Further up the loop there is a continued flow of water out of the tube and into the buffer, gradually raising the concentration of NaCl in the tube until it reaches 1199 mg/L at the tip. The buffer liquid between the two tubes is at a gradually rising concentration, always a bit over the incoming fluid, in this example reaching 1200 mg/L. This is regulated by the pumping action on the returning tube as will be explained immediately.
The tip of the loop has the highest concentration of salt (NaCl) in the incoming tube—in the example 1199 mg/L, and in the buffer 1200 mg/L. The returning tube has active transport pumps, pumping salt out to the buffer liquid at a low difference of concentrations of up to 200 mg/L more than in the tube. Thus when opposite the 1000 mg/L in the buffer liquid, the concentration in the tube is 800 and only 200 mg/L are needed to be pumped out. But the same is true anywhere along the line, so that at exit of the loop also only 200 mg/L need to be pumped.
In effect, this can be seen as a gradually multiplying effect—hence the name of the phenomenon, a 'countercurrent multiplier', or of the mechanism, countercurrent multiplication. In current engineering terms, countercurrent multiplication is any process where only slight pumping is needed, owing to the constant small difference of concentration or heat along the process, which gradually rises to its maximum. There is no need for a buffer liquid if the desired effect is obtaining a high concentration at the output pipe.
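A few lines of code (purely illustrative; the level count and step size are placeholders chosen to match the example figures above) restate the multiplier idea: the tip concentration ends up far above the inlet even though no single pump ever works against more than the small fixed difference.

```python
def countercurrent_multiplier(levels=9, inlet=299.0, pump_gradient=200.0,
                              rise_per_level=100.0):
    """Toy restatement of the loop above.  On the way down, the tube
    concentration climbs level by level (water leaving by osmosis against a
    ~1 mg/L difference); on the way up, pumps remove salt, always against the
    same small fixed difference, whatever the absolute concentration is."""
    descending = [inlet + i * rise_per_level for i in range(levels + 1)]
    buffer_liquid = [c + 1.0 for c in descending]        # buffer slightly above the tube
    ascending = [b - pump_gradient for b in reversed(buffer_liquid)]
    return descending, buffer_liquid, ascending

down, buf, up = countercurrent_multiplier()
print("tip of loop:", down[-1], "in the tube,", buf[-1], "in the buffer")
print("loop outlet:", up[-1])    # far below the tip; no pump fought more than 200
```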
In the kidney
A circuit of fluid in the loop of Henle—an important part of the kidneys—allows for gradual buildup of the concentration of urine in the kidneys, by using active transport on the exiting nephrons (tubules carrying liquid in the process of gradually concentrating the urea). The active transport pumps need only to overcome a constant and low gradient of concentration, because of the countercurrent multiplier mechanism.
Various substances are passed from the liquid entering the nephrons until exiting the loop (See the nephron flow diagram). The sequence of flow is as follows:
Renal corpuscle: Liquid enters the nephron system at the Bowman's capsule.
Proximal convoluted tubule: It then may reabsorb urea in the thick descending limb. Water is removed from the nephrons by osmosis (and glucose and other ions are pumped out with active transport), gradually raising the concentration in the nephrons.
Loop of Henle Descending: The liquid passes from the thin descending limb to the thick ascending limb. Water is constantly released via osmosis. Gradually there is a buildup of osmotic concentration, until 1200 mOsm is reached at the loop tip, but the difference across the membrane is kept small and constant.
For example, the liquid at one section inside the thin descending limb is at 400 mOsm while outside it is 401. Further down the descending limb, the inside concentration is 500 while outside it is 501, so a constant difference of 1 mOsm is kept all across the membrane, although the concentration inside and outside are gradually increasing.
Loop of Henle Ascending: after the tip (or 'bend') of the loop, the liquid flows in the thin ascending limb. Salt–sodium Na+ and chloride Cl− ions are pumped out of the liquid gradually lowering the concentration in the exiting liquid, but, using the countercurrent multiplier mechanism, always pumping against a constant and small osmotic difference.
For example, the pumps at a section close to the bend, pump out from 1000 mOsm inside the ascending limb to 1200 mOsm outside it, with a 200 mOsm across. Pumps further up the thin ascending limb, pump out from 400 mOsm into liquid at 600 mOsm, so again the difference is retained at 200 mOsm from the inside to the outside, while the concentration both inside and outside are gradually decreasing as the liquid flow advances.
The liquid finally reaches a low concentration of 100 mOsm when leaving the thin ascending limb and passing through the thick one.
Distal convoluted tubule: Once the liquid leaves the loop of Henle, the thick ascending limb can optionally reabsorb and re-increase the concentration in the nephrons.
Collecting duct: The collecting duct receives liquid between 100 mOsm if no re-absorption is done, to 300 or above if re-absorption was used. The collecting duct may continue raising the concentration if required, by gradually pumping out the same ions as the Distal convoluted tubule, using the same gradient as the ascending limbs in the loop of Henle, and reaching the same concentration.
Ureter: The liquid urine leaves to the ureter.
Same principle is used in hemodialysis within artificial kidney machines.
History
The countercurrent exchange mechanism and its properties were first proposed in 1951 by Professor Werner Kuhn and two of his former students, who called the mechanism found in the loop of Henle in mammalian kidneys a countercurrent multiplier; it was confirmed by laboratory findings in 1958 by Professor Carl W. Gottschalk. The theory was acknowledged a year later after a meticulous study showed that there is almost no osmotic difference between liquids on both sides of nephrons. Homer Smith, a considerable contemporary authority on renal physiology, opposed the countercurrent concentration model for 8 years, until conceding ground in 1959. Since then, many similar mechanisms have been found in biologic systems, the most notable being the rete mirabile in fish.
Countercurrent exchange of heat in organisms
In cold weather the blood flow to the limbs of birds and mammals is reduced on exposure to cold environmental conditions, and returned to the trunk via the deep veins which lie alongside the arteries (forming venae comitantes). This acts as a counter-current exchange system which short-circuits the warmth from the arterial blood directly into the venous blood returning into the trunk, causing minimal heat loss from the extremities in cold weather. The subcutaneous limb veins are tightly constricted, thereby reducing heat loss via this route, and forcing the blood returning from the extremities into the counter-current blood flow systems in the centers of the limbs. Birds and mammals that regularly immerse their limbs in cold or icy water have particularly well developed counter-current blood flow systems to their limbs, allowing prolonged exposure of the extremities to the cold without significant loss of body heat, even when the limbs are as thin as the lower legs, or tarsi, of a bird, for instance.
When animals like the leatherback turtle and dolphins are in colder water to which they are not acclimatized, they use this CCHE mechanism to prevent heat loss from their flippers, tail flukes, and dorsal fins. Such CCHE systems are made up of a complex network of peri-arterial venous plexuses, or venae comitantes, that run through the blubber from their minimally insulated limbs and thin streamlined protuberances. Each plexus consists of a central artery containing warm blood from the heart surrounded by a bundle of veins containing cool blood from the body surface. As these fluids flow past each other, they create a heat gradient in which heat is transferred and retained inside the body. The warm arterial blood transfers most of its heat to the cool venous blood now coming in from the outside. This conserves heat by recirculating it back to the body core. Since the arteries give up a good deal of their heat in this exchange, there is less heat lost through convection at the periphery surface.
Another example is found in the legs of an Arctic fox treading on snow. The paws are necessarily cold, but blood can circulate to bring nutrients to the paws without losing much heat from the body. Proximity of arteries and veins in the leg results in heat exchange, so that as the blood flows down it becomes cooler, and does not lose much heat to the snow. As the (cold) blood flows back up from the paws through the veins, it picks up heat from the blood flowing in the opposite direction, so that it returns to the torso in a warm state, allowing the fox to maintain a comfortable temperature, without losing it to the snow. This system is so efficient that the Arctic fox does not begin to shiver until the ambient temperature drops extremely low.
Countercurrent exchange in sea and desert birds to conserve water
Sea and desert birds have been found to have a salt gland near the nostrils which concentrates brine, later to be "sneezed" out to the sea, in effect allowing these birds to drink seawater without the need to find freshwater resources. It also enables the seabirds to remove the excess salt entering the body when eating, swimming or diving in the sea for food. The kidney cannot remove these quantities and concentrations of salt.
The salt secreting gland has been found in seabirds like pelicans, petrels, albatrosses, gulls, and terns. It has also been found in Namibian ostriches and other desert birds, where a buildup of salt concentration is due to dehydration and scarcity of drinking water.
In seabirds the salt gland is above the beak, leading to a main canal above the beak, and water is blown from two small nostrils on the beak, to empty it. The salt gland has two countercurrent mechanisms working in it:
a. A salt extraction system with a countercurrent multiplication mechanism, where salt is actively pumped from the blood 'venules' (small veins) into the gland tubules. Although the fluid in the tubules has a higher concentration of salt than the blood, the flow is arranged in a countercurrent exchange, so that the blood with a high concentration of salt enters the system close to where the gland tubules exit and connect to the main canal. Thus, all along the gland, there is only a small gradient to climb, in order to push the salt from the blood to the salty fluid with active transport powered by ATP.
b. The blood supply system to the gland is set in countercurrent exchange loop mechanism for keeping the high concentration of salt in the gland's blood, so that it does not leave back to the blood system.
The glands remove the salt efficiently and thus allow the birds to drink the salty water from their environment while they are hundreds of miles away from land.
Countercurrent exchange in industry and scientific research
Countercurrent Chromatography is a method of separation, that is based on the differential partitioning of analytes between two immiscible liquids using countercurrent or cocurrent flow. Evolving from Craig's Countercurrent Distribution (CCD), the most widely used term and abbreviation is CounterCurrent Chromatography (CCC), in particular when using hydrodynamic CCC instruments. The term partition chromatography is largely a synonymous and predominantly used for hydrostatic CCC instruments.
Distillation of chemicals such as in petroleum refining is done in towers or columns with perforated trays. Vapor from the low boiling fractions bubbles upward through the holes in the trays in contact with the down flowing high boiling fractions. The concentration of low boiling fraction increases in each tray up the tower as it is "stripped". The low boiling fraction is drawn off the top of the tower and the high boiling fraction drawn from the bottom. The process in the trays is a combination of heat transfer and mass transfer. Heat is supplied at the bottom, known as a "reboiler" and cooling is done with a condenser at the top.
Liquid–liquid extraction (also called 'solvent extraction' or 'partitioning') is a common method for extracting a substance from one liquid into another liquid at a different 'phase' (such as "slurry"). This method, which implements a countercurrent mechanism, is used in nuclear reprocessing, ore processing, the production of fine organic compounds, the processing of perfumes, the production of vegetable oils and biodiesel, and other industries.
Gold can be separated from a cyanide solution with the Merrill–Crowe process using Counter Current Decantation (CCD). In some mines, nickel and cobalt are treated with CCD, after the original ore was treated with concentrated sulfuric acid and steam in titanium covered autoclaves, producing nickel cobalt slurry. The nickel and cobalt in the slurry are removed from it almost completely using a CCD system exchanging the cobalt and nickel with flash steam heated water.
Lime can be manufactured in countercurrent furnaces, allowing the kiln to reach high temperatures using low-cost, low-temperature-burning fuel. Historically this was developed by the Japanese in certain types of the Anagama kiln. The kiln is built in stages, where fresh air coming to the fuel is passed downwards while the smoke and heat are pushed up and out. The heat does not leave the kiln, but is transferred back to the incoming air, so the temperature slowly builds up.
Cement may be created using a countercurrent kiln, where the heat is carried in the cement and the exhaust together, while the incoming air draft passes along both, absorbing the heat and retaining it inside the furnace, finally reaching high temperatures.
Gasification: the process of creating methane and carbon monoxide from organic or fossil matter, can be done using a counter-current fixed bed ("up draft") gasifier which is built in a similar way to the Anagama kiln, and must therefore withstand more harsh conditions, but reaches better efficiency.
In nuclear power plants, water leaving the plant must not contain even trace particles of Uranium. Counter Current Decantation (CCD) is used in some facilities to extract water, totally clear of Uranium.
Zippe-type centrifuges use countercurrent multiplication between rising and falling convection currents to reduce the number of stages needed in a cascade.
Some Centrifugal extractors use counter current exchange mechanisms for extracting high rates of the desired material.
Some protein skimmers (devices used to clean saltwater pools and fish ponds of organic matter) use counter current technologies.
Countercurrent processes have also been used to study the behavior of small animals and isolate individuals with altered behaviors due to genetic mutations.
See also
Anagama kiln
Bidirectional traffic
Economizer
Heat recovery ventilation
Regenerative heat exchanger
Countercurrent multiplier
References
External links
Countercurrent multiplier animation from Colorado University.
Research about elephant seals using countercurrent heat exchange to keep heat from leaving their body while breathing out, during hibernation.
Patent for a snow mask with a removable countercurrent exchange module which keeps the warmth from leaving the mask when breathing out.
Chemical process engineering
Industrial processes
Animal anatomy
Renal physiology
Heat transfer | Countercurrent exchange | [
"Physics",
"Chemistry",
"Engineering"
] | 5,329 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Chemical engineering",
"Thermodynamics",
"Chemical process engineering"
] |
171,396 | https://en.wikipedia.org/wiki/Zone%20melting | Zone melting (or zone refining, or floating-zone method, or floating-zone technique) is a group of similar methods of purifying crystals, in which a narrow region of a crystal is melted, and this molten zone is moved along the crystal. The molten region melts impure solid at its forward edge and leaves a wake of purer material solidified behind it as it moves through the ingot. The impurities concentrate in the melt, and are moved to one end of the ingot. Zone refining was invented by John Desmond Bernal and further developed by William G. Pfann in Bell Labs as a method to prepare high-purity materials, mainly semiconductors, for manufacturing transistors. Its first commercial use was in germanium, refined to one atom of impurity per ten billion, but the process can be extended to virtually any solute–solvent system having an appreciable concentration difference between solid and liquid phases at equilibrium. This process is also known as the float zone process, particularly in semiconductor materials processing.
Process details
The principle is that the segregation coefficient k (the ratio at equilibrium of an impurity in the solid phase to that in the liquid phase) is usually less than one. Therefore, at the solid/liquid boundary, the impurity atoms will diffuse to the liquid region. Thus, by passing a crystal boule through a thin section of furnace very slowly, such that only a small region of the boule is molten at any time, the impurities will be segregated at the end of the crystal. Because of the lack of impurities in the leftover regions which solidify, the boule can grow as a perfect single crystal if a seed crystal is placed at the base to initiate a chosen direction of crystal growth. When high purity is required, such as in semiconductor industry, the impure end of the boule is cut off, and the refining is repeated.
In zone refining, solutes are segregated at one end of the ingot in order to purify the remainder, or to concentrate the impurities. In zone leveling, the objective is to distribute solute evenly throughout the purified material, which may be sought in the form of a single crystal. For example, in the preparation of a transistor or diode semiconductor, an ingot of germanium is first purified by zone refining. Then a small amount of antimony is placed in the molten zone, which is passed through the pure germanium. With the proper choice of rate of heating and other variables, the antimony can be spread evenly through the germanium. This technique is also used for the preparation of silicon for use in integrated circuits ("chips").
Heaters
A variety of heaters can be used for zone melting, with their most important characteristic being the ability to form short molten zones that move slowly and uniformly through the ingot. Induction coils, ring-wound resistance heaters, or gas flames are common methods. Another method is to pass an electric current directly through the ingot while it is in a magnetic field, with the resulting magnetomotive force carefully set to be just equal to the weight in order to hold the liquid suspended. Optical heaters using high-powered halogen or xenon lamps are used extensively in research facilities particularly for the production of insulators, but their use in industry is limited by the relatively low power of the lamps, which limits the size of crystals produced by this method. Zone melting can be done as a batch process, or it can be done continuously, with fresh impure material being continually added at one end and purer material being removed from the other, with impure zone melt being removed at whatever rate is dictated by the impurity of the feed stock.
Indirect-heating floating zone methods use an induction-heated tungsten ring to heat the ingot radiatively, and are useful when the ingot is of a high-resistivity semiconductor on which classical induction heating is ineffective.
Mathematical expression of impurity concentration
When the liquid zone moves by a distance $dx$, the number of impurities in the liquid changes: impurities are taken up from the solid melting at the forward edge of the zone and left behind in the solid freezing at its trailing edge.
$k$: segregation coefficient
$l$: zone length
$C_0$: initial uniform impurity concentration of the solidified rod
$C_L$: concentration of impurities in the liquid melt per length
$I$: number of impurities in the liquid
$I_0$: number of impurities in zone when first formed at bottom
$C_s$: concentration of impurities in the solid rod
The number of impurities in the liquid changes in accordance with the expression below during the movement of the molten zone:

$$\frac{dI}{dx} = C_0 - k\,\frac{I}{l}, \qquad C_L = \frac{I}{l}, \qquad C_s = k\,C_L,$$

which, with the initial condition $I(0) = I_0 = C_0\, l$, gives the impurity concentration of the re-solidified rod after a single pass:

$$C_s(x) = C_0\left[\,1 - (1 - k)\,e^{-kx/l}\,\right].$$
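A short numerical sketch (illustrative; symbols as defined above, and valid only before the final zone length of the ingot, which freezes as a whole) evaluates this single-pass profile for two segregation coefficients:

```python
import math

def single_pass_concentration(x, k, l, c0=1.0):
    """Impurity concentration C_s(x) of the re-solidified rod after one pass of
    a molten zone of length l, segregation coefficient k and initial uniform
    concentration c0."""
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / l))

for k in (0.1, 0.5):
    profile = [single_pass_concentration(x, k, l=1.0) for x in (0.0, 2.0, 5.0, 10.0)]
    print(f"k = {k}:", ["%.3f" % c for c in profile])
```

The start of the rod (small x) freezes out much purer than the feed, and the concentration climbs back toward the original value further along, with the rejected impurities swept toward the far end, as described above.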
Applications
Solar cells
In solar cells, float zone processing is particularly useful because the single-crystal silicon grown has desirable properties. The bulk charge carrier lifetime in float-zone silicon is the highest among various manufacturing processes. Float-zone carrier lifetimes are around 1000 microseconds compared to 20–200 microseconds with Czochralski method, and 1–30 microseconds with cast polycrystalline silicon. A longer bulk lifetime increases the efficiency of solar cells significantly.
High-resistivity devices
It is used for the production of high-power semiconductor devices based on float-zone silicon.
Related processes
Zone remelting
Another related process is zone remelting, in which two solutes are distributed through a pure metal. This is important in the manufacture of semiconductors, where two solutes of opposite conductivity type are used. For example, in germanium, pentavalent elements of group V such as antimony and arsenic produce negative (n-type) conduction and the trivalent elements of group III such as aluminium and boron produce positive (p-type) conduction. By melting a portion of such an ingot and slowly refreezing it, solutes in the molten region become distributed to form the desired n-p and p-n junctions.
See also
Fractional freezing a.k.a. freeze distillation
Monocrystalline silicon
Wafer (electronics)
Further reading
References
Crystals
Industrial processes
Liquid-solid separation
Methods of crystal growth
Semiconductor growth | Zone melting | [
"Chemistry",
"Materials_science"
] | 1,237 | [
"Separation processes by phases",
"Methods of crystal growth",
"Crystallography",
"Crystals",
"Liquid-solid separation"
] |
171,414 | https://en.wikipedia.org/wiki/Engineering%20drawing | An engineering drawing is a type of technical drawing that is used to convey information about an object. A common use is to specify the geometry necessary for the construction of a component and is called a detail drawing. Usually, a number of drawings are necessary to completely specify even a simple component. These drawings are linked together by a "master drawing." This "master drawing" is more commonly known as an assembly drawing. The assembly drawing gives the drawing numbers of the subsequent detailed components, quantities required, construction materials and possibly 3D images that can be used to locate individual items. Although mostly consisting of pictographic representations, abbreviations and symbols are used for brevity and additional textual explanations may also be provided to convey the necessary information.
The process of producing engineering drawings is often referred to as technical drawing or drafting (draughting). Drawings typically contain multiple views of a component, although additional scratch views may be added of details for further explanation. Only the information that is a requirement is typically specified. Key information such as dimensions is usually only specified in one place on a drawing, avoiding redundancy and the possibility of inconsistency. Suitable tolerances are given for critical dimensions to allow the component to be manufactured and function. More detailed production drawings may be produced based on the information given in an engineering drawing. Drawings have an information box or title block containing who drew the drawing, who approved it, units of dimensions, meaning of views, the title of the drawing and the drawing number.
History
As a necessary means of visually conveying ideas, technical drawing has in one form or another been a part of human history since antiquity. These early drawings were used to express architectural and engineering concepts for large cultural structures: temples, monuments, and public infrastructure. Basic forms of technical drawing were used by the Egyptians and Mesopotamians to create highly detailed irrigation systems, pyramids, and other sophisticated structures. Their methods were comparatively simple, yet they demanded a great deal of skill and accuracy. Even in this primitive form, such drawings guided the construction of structures that would stand the test of time.
Technical drawing evolved further in ancient Greece and Rome. Engineers and architects such as Vitruvius used drawings as a medium for transmitting construction techniques and for illustrating the basic principles of balance and proportion in architecture. Early examples of what would lead to more formal technical drawing practices include the drawings and geometric calculations used to construct aqueducts, bridges, and fortresses. Technical drawings also figured in the 12th-century design of cathedrals and castles, albeit such drawings were more typically produced by artisans and stonemasons, not formally trained engineers.
The Renaissance was a period of great advancement for technical drawing. Inventive artists and inventors began to combine sophisticated methods of visual representation with a methodical adherence to accuracy. Leonardo da Vinci's notebooks contained drawings of mechanical devices, anatomical studies, and engineering projects that demonstrated his advanced understanding of form, function, and proportion. He was perhaps the first to combine artistic skill with engineering ability to produce technical drawings that were at once imaginative and instructive, and his work laid an important foundation for future developments in technical drawing.
As the Industrial Revolution took hold, modern engineering drawing took shape with the emergence of strictly specified conventions such as orthographic projection, exploded views, and standard scales. The movement towards standardization was driven in part by the development of engineering education and uniform drawing techniques in France. During the same period, the French mathematician Gaspard Monge developed descriptive geometry, a means of representing three-dimensional objects in two-dimensional space, and thereby contributed to technical drawing in a major way. His work laid the groundwork for orthographic projection, one of the core techniques used in technical drawing today. Monge's methods were initially kept as a military secret but were later disseminated far and wide, shaping the future of engineering education and practice.
Further contributions to the craft of technical drawing were made by pioneers like Marc Isambard Brunel, whose detailed drawings of block-making machinery in 1799, noted in L. T. C. Rolt's biography of his son Isambard Kingdom Brunel, testified to the developing nature of British engineering methods. By applying what we now call mechanical drawing techniques to depict three-dimensional machinery on a two-dimensional plane, more efficient manufacturing processes and greater precision were enabled. These innovations were essential as the world began to move toward mechanized production, and complex engineering projects, such as bridges, railways, and ships, required highly detailed and accurate technical representations to succeed.
The increasing need for precision in technical drawings during the 19th century was a direct result of the Industrial Revolution. This era saw the development of large-scale engineering projects such as railways, steam engines, and iron structures, which required a heightened degree of accuracy and standardization. Engineers created new conventions and symbols, the use of which became standardized throughout industries, so that anyone who could read a technical drawing could know the specifications of a component or structure. This standardization made it easier for engineers, manufacturers, and builders to work together.
In the 20th century, technical drawing underwent yet another transformation with the introduction of drafting tools such as the T-square, compasses, and protractors. These tools helped drafters achieve the high degree of precision necessary for increasingly complex projects, such as skyscrapers, airplanes, and automobiles. The establishment of standards such as the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) further formalized technical drawing conventions, ensuring consistency in engineering practices around the world.
Today, technical drawing has largely transitioned from manual drafting to computer-aided design (CAD). CAD software has revolutionized the way technical drawings are created, allowing for faster, more precise, and easily modifiable drawings. Engineers can now visualize designs in three dimensions, simulate performance, and make adjustments before any physical prototype is built. This digital transformation has not only increased efficiency but also broadened the possibilities for innovation, enabling engineers to tackle challenges that were previously unimaginable.
However, despite the advent of digital tools, the fundamental principles of technical drawing remain rooted in its history. Precision, clarity, and the ability to convey complex information visually are still at the core of technical drawing. The conventions established over centuries—from orthographic projection to the use of scale and dimension lines—continue to be essential in modern engineering and architectural practice. The evolution of technical drawing is a testament to human ingenuity, demonstrating how the ability to convey complex ideas visually has been pivotal in the advancement of civilization.
Standardization and disambiguation
Engineering drawings specify the requirements of a component or assembly which can be complicated. Standards provide rules for their specification and interpretation. Standardization also aids internationalization, because people from different countries who speak different languages can read the same engineering drawing, and interpret it the same way.
One major set of engineering drawing standards is ASME Y14.5 and Y14.5M (most recently revised in 2018). These apply widely in the United States, although ISO 8015 (Geometrical product specifications (GPS) — Fundamentals — Concepts, principles and rules) is now also important. In 2018, ASME AED-1 was created to develop advanced practices unique to aerospace and other industries and to supplement the Y14.5 standards.
In 2011, a new revision of ISO 8015 (Geometrical product specifications (GPS) — Fundamentals — Concepts, principles and rules) was published containing the Invocation Principle. This states that, "Once a portion of the ISO geometric product specification (GPS) system is invoked in a mechanical engineering product documentation, the entire ISO GPS system is invoked." It also goes on to state that marking a drawing "Tolerancing ISO 8015" is optional. The implication of this is that any drawing using ISO symbols can only be interpreted to ISO GPS rules. The only way not to invoke the ISO GPS system is to invoke a national or other standard. In Britain, BS 8888 (Technical Product Specification) has undergone important updates in the 2010s.
Media
For centuries, until the 1970s, all engineering drawing was done manually by using pencil and pen on paper or other substrate (e.g., vellum, mylar). Since the advent of computer-aided design (CAD), engineering drawing has been done more and more in the electronic medium with each passing decade. Today most engineering drawing is done with CAD, but pencil and paper have not entirely disappeared.
Some of the tools of manual drafting include pencils, pens and their ink, straightedges, T-squares, French curves, triangles, rulers, protractors, dividers, compasses, scales, erasers, and tacks or push pins. (Slide rules used to number among the supplies, too, but nowadays even manual drafting, when it occurs, benefits from a pocket calculator or its onscreen equivalent.) And of course the tools also include drawing boards (drafting boards) or tables. The English idiom "to go back to the drawing board", which is a figurative phrase meaning to rethink something altogether, was inspired by the literal act of discovering design errors during production and returning to a drawing board to revise the engineering drawing. Drafting machines are devices that aid manual drafting by combining drawing boards, straightedges, pantographs, and other tools into one integrated drawing environment. CAD provides their virtual equivalents.
Producing drawings usually involves creating an original that is then reproduced, generating multiple copies to be distributed to the shop floor, vendors, company archives, and so on. The classic reproduction methods involved blue and white appearances (whether white-on-blue or blue-on-white), which is why engineering drawings were long called, and even today are still often called, "blueprints" or "bluelines", even though those terms are anachronistic from a literal perspective, since most copies of engineering drawings today are made by more modern methods (often inkjet or laser printing) that yield black or multicolour lines on white paper. The more generic term "print" is now in common usage in the US to mean any paper copy of an engineering drawing. In the case of CAD drawings, the original is the CAD file, and the printouts of that file are the "prints".
Systems of dimensioning and tolerancing
Almost all engineering drawings (except perhaps reference-only views or initial sketches) communicate not only geometry (shape and location) but also dimensions and tolerances for those characteristics. Several systems of dimensioning and tolerancing have evolved. The simplest dimensioning system just specifies distances between points (such as an object's length or width, or hole center locations). Since the advent of well-developed interchangeable manufacture, these distances have been accompanied by tolerances of the plus-or-minus or min-and-max-limit types. Coordinate dimensioning involves defining all points, lines, planes, and profiles in terms of Cartesian coordinates, with a common origin. Coordinate dimensioning was the sole best option until the post-World War II era saw the development of geometric dimensioning and tolerancing (GD&T), which departs from the limitations of coordinate dimensioning (e.g., rectangular-only tolerance zones, tolerance stacking) to allow the most logical tolerancing of both geometry and dimensions (that is, both form [shapes/locations] and sizes).
Common features
Drawings convey the following critical information:
Geometry – the shape of the object; represented as views; how the object will look when it is viewed from various angles, such as front, top, side, etc.
Dimensions – the size of the object is captured in accepted units.
Tolerances – the allowable variations for each dimension.
Material – represents what the item is made of.
Finish – specifies the surface quality of the item, functional or cosmetic. For example, a mass-marketed product usually requires a much higher surface quality than, say, a component that goes inside industrial machinery.
Line styles and types
A variety of line styles graphically represent physical objects. Types of lines include the following:
visible – are continuous lines used to depict edges directly visible from a particular angle.
hidden – are short-dashed lines that may be used to represent edges that are not directly visible.
center – are alternately long- and short-dashed lines that may be used to represent the axes of circular features.
cutting plane – are thin, medium-dashed lines, or thick alternately long- and double short-dashed that may be used to define sections for section views.
section – are thin lines in a pattern (pattern determined by the material being "cut" or "sectioned") used to indicate surfaces in section views resulting from "cutting". Section lines are commonly referred to as "cross-hatching".
phantom – (not shown) are alternately long- and double short-dashed thin lines used to represent a feature or component that is not part of the specified part or assembly. E.g. billet ends that may be used for testing, or the machined product that is the focus of a tooling drawing.
Lines can also be classified by a letter classification in which each line is given a letter.
Type A lines show the outline of the feature of an object. They are the thickest lines on a drawing and done with a pencil softer than HB.
Type B lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. A harder pencil should be used, such as a 2H pencil.
Type C lines are used for breaks when the whole object is not shown. These are freehand drawn and only for short breaks. 2H pencil
Type D lines are similar to Type C, except these are zigzagged and only for longer breaks. 2H pencil
Type E lines indicate hidden outlines of internal features of an object. These are dotted lines. 2H pencil
Type F lines are Type E lines, except these are used for drawings in electrotechnology. 2H pencil
Type G lines are used for centre lines. These are dotted lines, but a long line of 10–20 mm, then a 1 mm gap, then a small line of 2 mm. 2H pencil
Type H lines are the same as type G, except that every second long line is thicker. These indicate the cutting plane of an object. 2H pencil
Type K lines indicate the alternate positions of an object and the line taken by that object. These are drawn with a long line of 10–20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2H pencil.
Multiple views and projections
In most cases, a single view is not sufficient to show all necessary features, and several views are used. Types of views include the following:
Multiview projection
A multiview projection is a type of orthographic projection that shows the object as it looks from the front, right, left, top, bottom, or back (e.g. the primary views), and is typically positioned relative to each other according to the rules of either first-angle or third-angle projection. The origin and vector direction of the projectors (also called projection lines) differs, as explained below.
In first-angle projection, the parallel projectors originate as if radiated from behind the viewer and pass through the 3D object to project a 2D image onto the orthogonal plane behind it. The 3D object is projected into 2D "paper" space as if you were looking at a radiograph of the object: the top view is under the front view, the right view is at the left of the front view. First-angle projection is the ISO standard and is primarily used in Europe.
In third-angle projection, the parallel projectors originate as if radiated from the far side of the object and pass through the 3D object to project a 2D image onto the orthogonal plane in front of it. The views of the 3D object are like the panels of a box that envelopes the object, and the panels pivot as they open up flat into the plane of the drawing. Thus the left view is placed on the left and the top view on the top; and the features closest to the front of the 3D object will appear closest to the front view in the drawing. Third-angle projection is primarily used in the United States and Canada, where it is the default projection system according to ASME standard ASME Y14.3M.
Until the late 19th century, first-angle projection was the norm in North America as well as Europe; but circa the 1890s, third-angle projection spread throughout the North American engineering and manufacturing communities to the point of becoming a widely followed convention, and it was an ASA standard by the 1950s. Circa World War I, British practice was frequently mixing the use of both projection methods.
As shown above, the determination of what surface constitutes the front, back, top, and bottom varies depending on the projection method used.
Not all views are necessarily used. Generally only as many views are used as are necessary to convey all needed information clearly and economically. The front, top, and right-side views are commonly considered the core group of views included by default, but any combination of views may be used depending on the needs of the particular design. In addition to the six principal views (front, back, top, bottom, right side, left side), any auxiliary views or sections may be included as serve the purposes of part definition and its communication. View lines or section lines (lines with arrows marked "A-A", "B-B", etc.) define the direction and location of viewing or sectioning. Sometimes a note tells the reader in which zone(s) of the drawing to find the view or section.
Auxiliary views
An auxiliary view is an orthographic view that is projected into any plane other than one of the six primary views. These views are typically used when an object contains some sort of inclined plane. Using the auxiliary view allows for that inclined plane (and any other significant features) to be projected in their true size and shape. The true size and shape of any feature in an engineering drawing can only be known when the Line of Sight (LOS) is perpendicular to the plane being referenced.
It is shown like a three-dimensional object. Auxiliary views tend to make use of axonometric projection. When existing all by themselves, auxiliary views are sometimes known as pictorials.
Isometric projection
An isometric projection shows the object from angles in which the scales along each axis of the object are equal. Isometric projection corresponds to rotation of the object by ± 45° about the vertical axis, followed by rotation of approximately ± 35.264° [= arcsin(tan(30°))] about the horizontal axis starting from an orthographic projection view. "Isometric" comes from the Greek for "same measure". One of the things that makes isometric drawings so attractive is the ease with which 60° angles can be constructed with only a compass and straightedge.
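To make the rotation recipe concrete, a minimal Python sketch (the axis conventions, with y vertical, are an assumption of the example) projects a 3D point onto the drawing plane; all three coordinate axes come out foreshortened by the same factor, about 0.816, which is what "isometric" means:

```python
import math

def isometric_project(point):
    """Project a 3D point (x, y, z) onto 2D using the classic isometric construction:
    rotate 45 deg about the vertical axis, then ~35.264 deg about the horizontal
    axis, then drop the depth coordinate."""
    x, y, z = point
    beta = math.radians(45.0)                          # rotation about the vertical (y) axis
    alpha = math.asin(math.tan(math.radians(30.0)))    # ~35.264 deg, as in the text
    # Rotate about the vertical axis.
    x1 = x * math.cos(beta) + z * math.sin(beta)
    z1 = -x * math.sin(beta) + z * math.cos(beta)
    # Rotate about the horizontal axis.
    y2 = y * math.cos(alpha) - z1 * math.sin(alpha)
    # Orthographic projection: keep the in-plane coordinates.
    return (x1, y2)

# The three unit axes project to segments of equal length (~0.816),
# which is the defining property of an isometric view.
for axis in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    u, v = isometric_project(axis)
    print(axis, "->", (round(u, 3), round(v, 3)), "length", round(math.hypot(u, v), 3))
```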
Isometric projection is a type of axonometric projection. The other two types of axonometric projection are:
Dimetric projection
Trimetric projection
Oblique projection
An oblique projection is a simple type of graphical projection used for producing pictorial, two-dimensional images of three-dimensional objects:
it projects an image by intersecting parallel rays (projectors)
from the three-dimensional source object with the drawing surface (projection plane).
In both oblique projection and orthographic projection, parallel lines of the source object produce parallel lines in the projected image.
Perspective projection
Perspective is an approximate representation on a flat surface, of an image as it is perceived by the eye. The two most characteristic features of perspective are that objects are drawn:
Smaller as their distance from the observer increases
Foreshortened: the size of an object's dimensions along the line of sight are relatively shorter than dimensions across the line of sight.
Section views
Projected views (either auxiliary or multiview) which show a cross section of the source object along the specified cut plane. These views are commonly used to show internal features with more clarity than regular projections or hidden lines, and they also help reduce the number of hidden lines needed. In assembly drawings, hardware components (e.g. nuts, screws, washers) are typically not sectioned. A half section shows one half of the object cut away and drawn in section while the other half remains an exterior view.
Scale
Plans are usually "scale drawings", meaning that the plans are drawn at specific ratio relative to the actual size of the place or object. Various scales may be used for different drawings in a set. For example, a floor plan may be drawn at 1:50 (1:48 or 1/4″ = 1′ 0″) whereas a detailed view may be drawn at 1:25 (1:24 or 1/2″ = 1′ 0″). Site plans are often drawn at 1:200 or 1:100.
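The imperial equivalents follow from simple arithmetic (1 foot = 12 inches):

$$\frac{12''}{\tfrac{1}{4}''} = 48 \;\Rightarrow\; 1{:}48, \qquad \frac{12''}{\tfrac{1}{2}''} = 24 \;\Rightarrow\; 1{:}24.$$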
Scale is a nuanced subject in the use of engineering drawings. On one hand, it is a general principle of engineering drawings that they are projected using standardized, mathematically certain projection methods and rules. Thus, great effort is put into having an engineering drawing accurately depict size, shape, form, aspect ratios between features, and so on. And yet, on the other hand, there is another general principle of engineering drawing that nearly diametrically opposes all this effort and intent—that is, the principle that users are not to scale the drawing to infer a dimension not labeled. This stern admonition is often repeated on drawings, via a boilerplate note in the title block telling the user, "DO NOT SCALE DRAWING."
The explanation for why these two nearly opposite principles can coexist is as follows. The first principle—that drawings will be made so carefully and accurately—serves the prime goal of why engineering drawing even exists, which is successfully communicating part definition and acceptance criteria—including "what the part should look like if you've made it correctly." The service of this goal is what creates a drawing that one even could scale and get an accurate dimension thereby. And thus the great temptation to do so, when a dimension is wanted but was not labeled. The second principle—that even though scaling the drawing will usually work, one should nevertheless never do it—serves several goals, such as enforcing total clarity regarding who has authority to discern design intent, and preventing erroneous scaling of a drawing that was never drawn to scale to begin with (which is typically labeled "drawing not to scale" or "scale: NTS"). When a user is forbidden from scaling the drawing, they must turn instead to the engineer (for the answers that the scaling would seek), and they will never erroneously scale something that is inherently unable to be accurately scaled.
But in some ways, the advent of the CAD and MBD era challenges these assumptions that were formed many decades ago. When part definition is defined mathematically via a solid model, the assertion that one cannot interrogate the model—the direct analog of "scaling the drawing"—becomes ridiculous; because when part definition is defined this way, it is not possible for a drawing or model to be "not to scale". A 2D pencil drawing can be inaccurately foreshortened and skewed (and thus not to scale), yet still be a completely valid part definition as long as the labeled dimensions are the only dimensions used, and no scaling of the drawing by the user occurs. This is because what the drawing and labels convey is in reality a symbol of what is wanted, rather than a true replica of it. (For example, a sketch of a hole that is clearly not round still accurately defines the part as having a true round hole, as long as the label says "10mm DIA", because the "DIA" implicitly but objectively tells the user that the skewed drawn circle is a symbol representing a perfect circle.) But if a mathematical model—essentially, a vector graphic—is declared to be the official definition of the part, then any amount of "scaling the drawing" can make sense; there may still be an error in the model, in the sense that what was intended is not depicted (modeled); but there can be no error of the "not to scale" type—because the mathematical vectors and curves are replicas, not symbols, of the part features.
Even in dealing with 2D drawings, the manufacturing world has changed since the days when people paid attention to the scale ratio claimed on the print, or counted on its accuracy. In the past, prints were plotted on a plotter to exact scale ratios, and the user could know that a line on the drawing 15 mm long corresponded to a 30 mm part dimension because the drawing said "1:2" in the "scale" box of the title block. Today, in the era of ubiquitous desktop printing, where original drawings or scaled prints are often scanned on a scanner and saved as a PDF file, which is then printed at any percent magnification that the user deems handy (such as "fit to paper size"), users have pretty much given up caring what scale ratio is claimed in the "scale" box of the title block. Which, under the rule of "do not scale drawing", never really did that much for them anyway.
Showing dimensions
The required sizes of features are conveyed through use of dimensions. Distances may be indicated with either of two standardized forms of dimension: linear and ordinate.
With linear dimensions, two parallel lines, called "extension lines," spaced at the distance between two features, are shown at each of the features. A line perpendicular to the extension lines, called a "dimension line," with arrows at its endpoints, is shown between, and terminating at, the extension lines. The distance is indicated numerically at the midpoint of the dimension line, either adjacent to it, or in a gap provided for it.
With ordinate dimensions, one horizontal and one vertical extension line establish an origin for the entire view. The origin is identified with zeroes placed at the ends of these extension lines. Distances along the x- and y-axes to other features are specified using other extension lines, with the distances indicated numerically at their ends.
Sizes of circular features are indicated using either diametral or radial dimensions. Radial dimensions use an "R" followed by the value for the radius; diametral dimensions use a circle with a forward-leaning diagonal line through it, called the diameter symbol, followed by the value for the diameter. A radially-aligned line with an arrowhead pointing to the circular feature, called a leader, is used in conjunction with both diametral and radial dimensions.
All types of dimensions are typically composed of two parts: the nominal value, which is the "ideal" size of the feature, and the tolerance, which specifies the amount that the value may vary above and below the nominal.
Geometric dimensioning and tolerancing is a method of specifying the functional geometry of an object.
Sizes of drawings
Sizes of drawings typically comply with either of two different standards, ISO (World Standard) or ANSI/ASME Y14.1 (American).
The metric drawing sizes correspond to international paper sizes. These developed further refinements in the second half of the twentieth century, when photocopying became cheap. Engineering drawings could be readily doubled (or halved) in size and put on the next larger (or, respectively, smaller) size of paper with no waste of space. And the metric technical pens were chosen in sizes so that one could add detail or drafting changes with a pen width changing by approximately a factor of the square root of 2. A full set of pens would have the following nib sizes: 0.13, 0.18, 0.25, 0.35, 0.5, 0.7, 1.0, 1.5, and 2.0 mm. However, the International Organization for Standardization (ISO) called for four pen widths and set a colour code for each: 0.25 (white), 0.35 (yellow), 0.5 (brown), 0.7 (blue); these nibs produced lines that related to various text character heights and the ISO paper sizes.
All ISO paper sizes have the same aspect ratio, one to the square root of 2, meaning that a document designed for any given size can be enlarged or reduced to any other size and will fit perfectly. Given this ease of changing sizes, it is of course common to copy or print a given document on different sizes of paper, especially within a series, e.g. a drawing on A3 may be enlarged to A2 or reduced to A4.
The US customary "A-size" corresponds to "letter" size, and "B-size" corresponds to "ledger" or "tabloid" size. There were also once British paper sizes, which went by names rather than alphanumeric designations.
American Society of Mechanical Engineers (ASME) ANSI/ASME Y14.1, Y14.2, Y14.3, and Y14.5 are commonly referenced standards in the US.
Technical lettering
Technical lettering is the process of forming letters, numerals, and other characters in technical drawing. It is used to describe, or provide detailed specifications for, an object. With the goals of legibility and uniformity, styles are standardized and lettering ability has little relationship to normal writing ability. Engineering drawings use a Gothic sans-serif script, formed by a series of short strokes. Lower case letters are rare in most drawings of machines. ISO lettering templates, designed for use with technical pens and pencils, and to suit ISO paper sizes, produce lettering characters to an international standard. The stroke thickness is related to the character height (for example, 2.5 mm high characters would have a stroke thickness - pen nib size - of 0.25 mm, 3.5 would use a 0.35 mm pen and so forth). The ISO character set (font) has a seriffed one, a barred seven, an open four, six, and nine, and a round topped three, which improves legibility when, for example, an A0 drawing has been reduced to A1 or even A3 (and perhaps enlarged back or reproduced/faxed/microfilmed &c). When CAD drawings became more popular, especially using US software, such as AutoCAD, the nearest font to this ISO standard font was Romantic Simplex (RomanS, a proprietary shx font) with a manually adjusted width factor (override) to make it look as near as possible to the ISO lettering for the drawing board. However, with the closed four, and arced six and nine, the romans.shx typeface could be difficult to read in reductions. In more recent revisions of software packages, the TrueType font ISOCPEUR reliably reproduces the original drawing board lettering stencil style, however, many drawings have switched to the ubiquitous Arial.ttf.
Conventional parts (areas)
Title block
Every engineering drawing must have a title block.
The title block (T/B, TB) is an area of the drawing that conveys header-type information about the drawing, such as:
Drawing title (hence the name "title block")
Drawing number
Part number(s)
Name of the design activity (corporation, government agency, etc.)
Identifying code of the design activity (such as a CAGE code)
Address of the design activity (such as city, state/province, country)
Measurement units of the drawing (for example, inches, millimeters)
Default tolerances for dimension callouts where no tolerance is specified
Boilerplate callouts of general specs
Intellectual property rights warning
ISO 7200 specifies the data fields used in title blocks.
It standardizes eight mandatory data fields:
Title (hence the name "title block")
Created by (name of drafter)
Approved by
Legal owner (name of company or organization)
Document type
Drawing number (same for every sheet of this document, unique for each technical document of the organization)
Sheet number and number of sheets (for example, "Sheet 5/7")
Date of issue (when the drawing was made)
Traditional locations for the title block are the bottom right (most commonly) or the top right or center.
Revisions block
The revisions block (rev block) is a tabulated list of the revisions (versions) of the drawing, documenting the revision control.
Traditional locations for the revisions block are the top right (most commonly) or adjoining the title block in some way.
Next assembly
The next assembly block, often also referred to as "where used" or sometimes "effectivity block", is a list of higher assemblies where the product on the current drawing is used. This block is commonly found adjacent to the title block.
Notes list
The notes list provides notes to the user of the drawing, conveying any information that the callouts within the field of the drawing did not. It may include general notes, flagnotes, or a mixture of both.
Traditional locations for the notes list are anywhere along the edges of the field of the drawing.
General notes
General notes (G/N, GN) apply generally to the contents of the drawing, as opposed to applying only to certain part numbers or certain surfaces or features.
Flagnotes
Flagnotes or flag notes (FL, F/N) are notes that apply only where a flagged callout points, such as to particular surfaces, features, or part numbers. Typically the callout includes a flag icon. Some companies call such notes "delta notes", and the note number is enclosed inside a triangular symbol (similar to capital letter delta, Δ). "FL5" (flagnote 5) and "D5" (delta note 5) are typical ways to abbreviate in ASCII-only contexts.
Field of the drawing
The field of the drawing (F/D, FD) is the main body or main area of the drawing, excluding the title block, rev block, P/L and so on
List of materials, bill of materials, parts list
The list of materials (L/M, LM, LoM), bill of materials (B/M, BM, BoM), or parts list (P/L, PL) is a (usually tabular) list of the materials used to make a part, and/or the parts used to make an assembly. It may contain instructions for heat treatment, finishing, and other processes, for each part number. Sometimes such LoMs or PLs are separate documents from the drawing itself.
Traditional locations for the LoM/BoM are above the title block, or in a separate document.
Parameter tabulations
Some drawings call out dimensions with parameter names (that is, variables, such as "A", "B", "C"), then tabulate rows of parameter values for each part number.
Traditional locations for parameter tables, when such tables are used, are floating near the edges of the field of the drawing, either near the title block or elsewhere along the edges of the field.
Views and sections
Each view or section is a separate set of projections, occupying a contiguous portion of the field of the drawing. Usually views and sections are called out with cross-references to specific zones of the field.
Zones
Often a drawing is divided into zones by an alphanumeric grid, with zone labels along the margins, such as A, B, C, D up the sides and 1,2,3,4,5,6 along the top and bottom.
Names of zones are thus, for example, A5, D2, or B1. This feature greatly eases discussion of, and reference to, particular areas of the drawing.
Abbreviations and symbols
As in many technical fields, a wide array of abbreviations and symbols have been developed in engineering drawing during the 20th and 21st centuries. For example, cold rolled steel is often abbreviated as CRS, and diameter is often abbreviated as DIA, D, or ⌀.
Most engineering drawings are language-independent—words are confined to the title block; symbols are used in place of words elsewhere.
With the advent of computer generated drawings for manufacturing and machining, many symbols have fallen out of common use. This poses a problem when attempting to interpret an older hand-drawn document that contains obscure elements that cannot be readily referenced in standard teaching text or control documents such as ASME and ANSI standards. For example, ASME Y14.5M 1994 excludes a few elements that convey critical information as contained in older US Navy drawings and aircraft manufacturing drawings of World War 2 vintage. Researching the intent and meaning of some symbols can prove difficult.
Example
Here is an example of an engineering drawing (an isometric view of the same object is shown above). The different line types are colored for clarity.
Black = object line and hatching
Red = hidden line
Blue = center line of piece or opening
Magenta = phantom line or cutting plane line
Sectional views are indicated by the direction of arrows, as in the example right side.
Legal instruments
An engineering drawing is a legal document (that is, a legal instrument), because it communicates all the needed information about "what is wanted" to the people who will expend resources turning the idea into a reality. It is thus a part of a contract; the purchase order and the drawing together, as well as any ancillary documents (engineering change orders [ECOs], called-out specs), constitute the contract. Thus, if the resulting product is wrong, the worker or manufacturer are protected from liability as long as they have faithfully executed the instructions conveyed by the drawing. If those instructions were wrong, it is the fault of the engineer. Because manufacturing and construction are typically very expensive processes (involving large amounts of capital and payroll), the question of liability for errors has legal implications.
Relationship to model-based definition (MBD/DPD)
For centuries, engineering drawing was the sole method of transferring information from design into manufacture. In recent decades another method has arisen, called model-based definition (MBD) or digital product definition (DPD). In MBD, the information captured by the CAD software app is fed automatically into a CAM app (computer-aided manufacturing), which (with or without postprocessing apps) creates code in other languages such as G-code to be executed by a CNC machine tool (computer numerical control), 3D printer, or (increasingly) a hybrid machine tool that uses both. Thus today it is often the case that the information travels from the mind of the designer into the manufactured component without having ever been codified by an engineering drawing. In MBD, the dataset, not a drawing, is the legal instrument. The term "technical data package" (TDP) is now used to refer to the complete package of information (in one medium or another) that communicates information from design to production (such as 3D-model datasets, engineering drawings, engineering change orders (ECOs), spec revisions and addenda, and so on).
It still takes CAD/CAM programmers, CNC setup workers, and CNC operators to do manufacturing, as well as other people such as quality assurance staff (inspectors) and logistics staff (for materials handling, shipping-and-receiving, and front office functions). These workers often use drawings in the course of their work that have been produced from the MBD dataset. When proper procedures are being followed, a clear chain of precedence is always documented, such that when a person looks at a drawing, they are told by a note thereon that this drawing is not the governing instrument (because the MBD dataset is). In these cases, the drawing is still a useful document, although legally it is classified as "for reference only", meaning that if any controversies or discrepancies arise, it is the MBD dataset, not the drawing, that governs.
See also
Architectural drawing
ASME AED-1 Aerospace and Advanced Engineering Drawings
B. Hick and Sons – Notable collection of early locomotive and steam engine drawings
CAD standards
Descriptive geometry
Document management system
Engineering drawing symbols
Geometric tolerance
ISO 128 Technical drawings – General principles of presentation
light plot
Linear scale
Patent drawing
Scale rulers: architect's scale and engineer's scale
Specification (technical standard)
Structural drawing
References
Bibliography
: Engineering Drawing (book)
: Engineering Drawing (book)
Further reading
Basant Agrawal and C M Agrawal (2013). Engineering Drawing. Second Edition, McGraw Hill Education India Pvt. Ltd., New Delhi.
Paige Davis, Karen Renee Juneau (2000). Engineering Drawing
David A. Madsen, Karen Schertz, (2001) Engineering Drawing & Design. Delmar Thomson Learning.
Cecil Howard Jensen, Jay D. Helsel, Donald D. Voisinet Computer-aided engineering drawing using AutoCAD.
Warren Jacob Luzadder (1959). Fundamentals of engineering drawing for technical students and professional.
M.A. Parker, F. Pickup (1990) Engineering Drawing with Worked Examples.
Colin H. Simmons, Dennis E. Maguire Manual of engineering drawing. Elsevier.
Cecil Howard Jensen (2001). Interpreting Engineering Drawings.
B. Leighton Wellman (1948). Technical Descriptive Geometry. McGraw-Hill Book Company, Inc.
External links
Examples of cubes drawn in different projections
Animated presentation of drawing systems used in technical drawing (Flash animation)
Design Handbook: Engineering Drawing and Sketching, by MIT OpenCourseWare
Engineering concepts
Technical drawing
Infographics | Engineering drawing | [
"Engineering"
] | 8,553 | [
"Design engineering",
"Technical drawing",
"Civil engineering",
"nan"
] |
171,513 | https://en.wikipedia.org/wiki/Jetboat | A jetboat is a boat propelled by a jet of water ejected from the back of the craft. Unlike a powerboat or motorboat that uses an external propeller in the water below or behind the boat, a jetboat draws the water from under the boat through an intake and into a pump-jet inside the boat, before expelling it through a nozzle at the stern.
The modern jetboat was developed by New Zealand engineer Sir William Hamilton in the mid-1950s. His goal was a boat to run up the fast-flowing rivers of New Zealand that were too shallow for propellers.
Previous attempts at waterjet propulsion had very short lifetimes, generally due to the inefficient design of the units and the fact that they offered few advantages over conventional propellers. Unlike these previous waterjet developments, such as Campini's and the Hanley Hydrojet, Hamilton had a specific need for a propulsion system to operate in very shallow water, and the waterjet proved to be the ideal solution. The popularity of the jet unit and jetboat increased rapidly. It was found the waterjet was better than propellers for a wide range of vessel types, and waterjets are now used widely for many high-speed vessels including passenger ferries, rescue craft, patrol boats and offshore supply vessels.
Jetboats are highly manoeuvrable, and many can be reversed from full speed and brought to a stop within little more than their own length, in a manoeuvre known as a "crash stop". The well known Hamilton turn or "jet spin" is a high-speed manoeuvre where the boat's engine throttle is cut, the steering is turned sharply and the throttle opened again, causing the boat to spin quickly around with a large spray of water.
There is no engineering limit to the size of jetboats, though whether they are useful depends on the type of application. Classic prop-drives are generally more efficient and economical at low speeds, up to about , but as boat speed increases, the extra hull resistance generated by struts, rudders, shafts and so on means waterjets are more efficient up to . For very large propellers turning at slow speeds, such as in tugboats, the equivalent size waterjet would be too big to be practical. The vast majority of waterjet units are therefore installed in high-speed vessels and in situations where shallow draught, maneuverability, and load flexibility are the main concerns.
The biggest jet-driven vessels are found in military use and the high-speed passenger and car ferry industry. South Africa's s (approximately long) and the long United States Littoral Combat Ship are among the biggest jet-propelled vessels . Even these vessels are capable of performing "crash stops".
Function
A conventional screw propeller works within the body of water below a boat hull, effectively "screwing" through the water to drive a vessel forward by generating a difference in pressure between the forward and rear surfaces of the propeller blades and by accelerating a mass of water rearward. By contrast, a waterjet unit delivers a high-pressure "push" from the stern of a vessel by accelerating a volume of water as it passes through a specialised pump mounted above the waterline inside the boat hull. Both methods yield thrust due to Newton's third law— every action has an equal and opposite reaction.
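In elementary momentum terms (a generic textbook relation rather than a figure for any particular jet unit), the thrust $T$ is

$$T = \dot{m}\,\left(v_\text{jet} - v_\text{inlet}\right),$$

where $\dot{m}$ is the mass flow rate of water through the pump, $v_\text{jet}$ is the velocity of the jetstream leaving the nozzle, and $v_\text{inlet}$ is the velocity of the water entering the intake, which is roughly the boat speed.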
In a jetboat, the waterjet draws water from beneath the hull, where it passes through a series of impellers and stators – known as stages – which increase the velocity of the waterflow. Most modern jets are single-stage, while older waterjets may have as many as three stages. The tail section of the waterjet unit extends out through the transom of the hull, above the waterline. This jetstream exits the unit through a small nozzle at high velocity to push the boat forward. Steering is accomplished by moving this nozzle to either side, or less commonly, by small gates on either side that deflect the jetstream. Because the jetboat relies on the flow of water through the nozzle for control, it is not possible to steer a conventional jetboat without the engine running.
Unlike conventional propeller systems where the rotation of the propeller is reversed to provide astern movement, a waterjet will continue to pump normally while a deflector is lowered into the jetstream after it leaves the outlet nozzle. This deflector redirects thrust forces forward to provide reverse thrust. Most highly developed reverse deflectors redirect the jetstream down and to each side to prevent recirculation of the water through the jet again, which may cause aeration problems, or increase reverse thrust. Steering is still available with the reverse deflector lowered so the vessel will have full maneuverability. With the deflector lowered about halfway into the jetstream, forward and reverse thrust are equal so the boat maintains a fixed position, but steering is still available to allow the vessel to turn on the spot – something which is impossible with a conventional single propeller.
Unlike hydrofoils, which use underwater wings or struts to lift the vessel clear of the water, standard jetboats use a conventional planing hull to ride across the water surface, with only the rear portion of the hull displacing any water. With the majority of the hull clear of the water, there is reduced drag, greatly enhancing speed and maneuverability, so jetboats are normally operated at planing speed. At slower speeds with less water pumping through the jet unit, the jetboat will lose some steering control and maneuverability and will quickly slow down as the hull comes off its planing state and hull resistance is increased. However, loss of steering control at low speeds can be overcome by lowering the reverse deflector slightly and increasing throttle – so an operator may increase thrust and thus control without increasing boat speed itself. A conventional river-going jetboat will have a shallow-angled (but not flat-bottomed) hull to improve its high-speed cornering control and stability, while also allowing it to traverse very shallow water. At speed, jetboats can be safely operated in less than 7.5 cm (3 inches) of water.
One of the most significant breakthroughs, in the development of the waterjet, was to change the design so it expelled the jetstream above the water line, contrary to many people's intuition. Hamilton discovered early on that this greatly improved performance, compared to expelling below the waterline, while also providing a "clean" hull bottom (i.e. nothing protruding below the hull line) to allow the boat to skim through very shallow water. It makes no difference to the amount of thrust generated whether the outlet is above or below the waterline, but having it above the waterline reduces hull resistance and draught. Hamilton's first waterjet design had the outlet below the hull and actually in front of the inlet. This probably meant that disturbed water was entering the jet unit and reducing its performance, and the main reason why the change to above the waterline made such a difference.
Applications
Applications for jetboats include most activities where conventional propellers are also used, but in particular passenger ferry services, coastguard and police patrol, navy and military, adventure tourism (which is becoming increasingly popular around the globe), pilot boat operations, surf rescue, farming, fishing, exploration, pleasure boating, and other water activities where motor boats are used. Jetboats can also be raced for sport, both on rivers (World Champion Jet Boat Marathon held in Mexico, Canada, USA and New Zealand) and on specially designed racecourses known as sprint tracks. Recently there has been increasing use of jetboats in the form of rigid-hulled inflatable boats and as luxury yacht tenders. Many jetboats are small enough to be carried on a trailer and towed by car.
As jetboats have no external rotating parts they are safer for swimmers and marine life, though they can be struck by the hull. The safety benefit itself can sometimes be reason enough to use this type of propulsion.
In 1977, Sir Edmund Hillary led a jetboat expedition, titled "Ocean to Sky", from the mouth of the Ganges River to its source. One of the jetboats was sunk by a friend of Hillary.
Drawbacks
The fuel efficiency and performance of a jetboat can be affected by anything that disrupts the smooth flow of water through the jet unit. For example, a plastic bag sucked onto the jet unit's intake grill can have quite an adverse effect.
Another disadvantage of jetboats appears to be that they are more sensitive to engine/jet unit mismatch, compared with the problem of engine/propeller mismatch in propeller-driven craft. If the jet-propulsion unit is not well-matched to the engine performance, excessive fuel consumption and poor performance can result.
See also
Personal water craft
List of water sports
Jet sprint boat racing
References
External links
Hamilton waterjet history
Jet boat origins and history
Motorboats
Marine propulsion
New Zealand inventions | Jetboat | [
"Engineering"
] | 1,829 | [
"Marine propulsion",
"Marine engineering"
] |
171,552 | https://en.wikipedia.org/wiki/Collision%20detection | Collision detection is the computational problem of detecting an intersection of two or more objects in virtual space. More precisely, it deals with the questions of if, when and where two or more objects intersect. Collision detection is a classic problem of computational geometry with applications in computer graphics, physical simulation, video games, robotics (including autonomous driving) and computational physics. Collision detection algorithms can be divided into operating on 2D or 3D spatial objects.
Overview
Collision detection is closely linked to calculating the distance between objects, as two objects (or more) intersect when the distance between them reaches zero or even becomes negative. Negative distance indicates that one object has penetrated another. Performing collision detection requires more context than just the distance between the objects.
Accurately identifying the points of contact on both objects' surfaces is also essential for the computation of a physically accurate collision response. The complexity of this task increases with the level of detail in the objects' representations: the more intricate the model, the greater the computational cost.
Collision detection frequently involves dynamic objects, adding a temporal dimension to distance calculations. Instead of simply measuring distance between static objects, collision detection algorithms often aim to determine whether the objects’ motion will bring them to a point in time when their distance is zero—an operation that adds significant computational overhead.
In collision detection involving multiple objects, a naive approach would require detecting collisions for all pairwise combinations of objects. As the number of objects increases, the number of required comparisons grows rapidly: for n objects, n(n − 1)/2 intersection tests are needed with a naive approach. This quadratic growth makes such an approach computationally expensive as n increases.
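As an illustration, a minimal Python sketch of the naive approach simply enumerates every unordered pair; the collides predicate stands in for whatever pair test is used and is an assumption of the example:

```python
from itertools import combinations

def naive_broad_phase(objects, collides):
    """Test every unordered pair: n*(n-1)/2 calls of 'collides' for n objects."""
    return [(a, b) for a, b in combinations(objects, 2) if collides(a, b)]
```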
Due to the complexity mentioned above, collision detection is a computationally intensive process. Nevertheless, it is essential for interactive applications like video games, robotics, and real-time physics engines. To manage these computational demands, extensive efforts have gone into optimizing collision detection algorithms.
A commonly used approach towards accelerating the required computations is to divide the process into two phases: the broad phase and the narrow phase. The broad phase aims to answer the question of whether objects might collide, using a conservative but efficient approach to rule out pairs that clearly do not intersect, thus avoiding unnecessary calculations.
Objects that cannot be definitively separated in the broad phase are passed to the narrow phase. Here, more precise algorithms determine whether these objects actually intersect. If they do, the narrow phase often calculates the exact time and location of the intersection.
Broad phase
This phase aims at quickly finding objects or parts of objects for which it can be determined that no further collision test is needed. A useful property of such an approach is that it is output sensitive. In the context of collision detection this means that the time complexity of the collision detection is proportional to the number of objects that are close to each other. An early example of this is I-COLLIDE, where the number of required narrow-phase collision tests was O(n + m), where n is the number of objects and m is the number of objects at close proximity. This is a significant improvement over the quadratic complexity of the naive approach.
Spatial partitioning
Several approaches can be grouped under the spatial partitioning umbrella, including octrees (for 3D), quadtrees (for 2D), binary space partitioning (or BSP trees) and other, similar approaches. If one splits space into a number of simple cells, and if two objects can be shown not to be in the same cell, then they need not be checked for intersection. Dynamic scenes and deformable objects require updating the partitioning, which can add overhead.
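A minimal sketch of the idea (Python; the uniform grid, the 2D setting, and binning objects by their centre point are assumptions of this example, chosen for brevity): objects are binned by cell, and only objects sharing a cell are reported as candidate pairs.

```python
from collections import defaultdict
from itertools import combinations

def grid_candidate_pairs(objects, cell_size):
    """objects: dict mapping id -> (x, y) centre position.
    Returns candidate pairs of ids that fall in the same grid cell;
    only these need a narrow-phase test."""
    cells = defaultdict(list)
    for obj_id, (x, y) in objects.items():
        cell = (int(x // cell_size), int(y // cell_size))
        cells[cell].append(obj_id)
    pairs = set()
    for members in cells.values():
        for a, b in combinations(members, 2):
            pairs.add((min(a, b), max(a, b)))
    return pairs
```

In practice an object is inserted into every cell that its bounding box overlaps, so pairs straddling a cell boundary are not missed; the version above bins by centre point only, for brevity.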
Bounding volume hierarchy
Bounding Volume Hierarchy (BVH) is a tree structure over a set of bounding volumes. Collision is determined by doing a tree traversal starting from the root. If the bounding volume of the root does not intersect with the object of interest, the traversal can be stopped. If, however, there is an intersection, the traversal proceeds and checks each branch for an intersection. Branches for which there is no intersection with the bounding volume can be culled from further intersection tests. Therefore, multiple objects can be determined to not intersect at once. BVH can be used with deformable objects such as cloth or soft bodies, but the volume hierarchy has to be adjusted as the shape deforms. For deformable objects we also need to be concerned about self-collisions or self-intersections, and BVH can be used for that end as well. Collision between two objects is computed by first testing the bounding volumes of the roots of the two trees for intersection; where they intersect, the traversal descends into the sub-trees that intersect. Exact collisions between the actual objects, or their parts (often triangles of a triangle mesh), need to be computed only between intersecting leaves. The same approach works for pairwise collision and self-collisions.
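The traversal against a single query volume can be sketched as follows (Python; axis-aligned boxes as the bounding volume and primitives stored only at leaves are assumptions of the example):

```python
class BVHNode:
    """A node of a bounding volume hierarchy. 'volume' is an axis-aligned box
    (min_corner, max_corner); leaves carry the actual primitives."""
    def __init__(self, volume, children=None, primitives=None):
        self.volume = volume
        self.children = children or []
        self.primitives = primitives or []

def boxes_overlap(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(len(amin)))

def query(node, volume, hits):
    """Collect primitives whose leaf volume overlaps 'volume'.
    Subtrees whose bounding volume does not overlap are culled wholesale."""
    if not boxes_overlap(node.volume, volume):
        return
    if node.primitives:                 # leaf
        hits.extend(node.primitives)
    for child in node.children:
        query(child, volume, hits)
```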
Exploiting temporal coherence
During the broad-phase, when the objects in the world move or deform, the data-structures used to cull collisions have to be updated. In cases where the changes between two frames or time-steps are small and the objects can be approximated well with axis-aligned bounding boxes, the sweep and prune algorithm can be a suitable approach.
Several key observations make the implementation efficient: two bounding boxes intersect if, and only if, there is overlap along all three axes; overlap can be determined, for each axis separately, by sorting the intervals for all the boxes; and lastly, between two frames updates are typically small (making sorting algorithms optimized for almost-sorted lists suitable for this application). The algorithm keeps track of currently intersecting boxes, and as objects move, re-sorting the intervals helps keep track of the status.
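A single-axis sketch of the idea follows (Python; a real implementation maintains per-axis endpoint lists that are re-sorted incrementally between frames and reports a pair only when it overlaps on all axes):

```python
def sweep_and_prune_1d(intervals):
    """intervals: dict id -> (lo, hi) along one axis.
    Returns pairs whose intervals overlap on this axis."""
    events = []
    for obj_id, (lo, hi) in intervals.items():
        events.append((lo, 0, obj_id))   # 0 = start (sorts before an end at the same coordinate)
        events.append((hi, 1, obj_id))   # 1 = end
    events.sort()
    active, pairs = set(), set()
    for _, kind, obj_id in events:
        if kind == 0:
            # Every currently active interval overlaps the one that just started.
            for other in active:
                pairs.add((min(obj_id, other), max(obj_id, other)))
            active.add(obj_id)
        else:
            active.discard(obj_id)
    return pairs
```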
Pairwise pruning
Once we've selected a pair of physical bodies for further investigation, we need to check for collisions more carefully. However, in many applications, individual objects (if they are not too deformable) are described by a set of smaller primitives, mainly triangles. So now, we have two sets of triangles, and (for simplicity, we will assume that each set has the same number of triangles.)
The obvious thing to do is to check all triangles against all triangles for collisions, but this involves comparisons, which is highly inefficient. If possible, it is desirable to use a pruning algorithm to reduce the number of pairs of triangles we need to check.
The most widely used family of algorithms is known as the hierarchical bounding volumes method. As a preprocessing step, for each object (in our example, and ) we will calculate a hierarchy of bounding volumes. Then, at each time step, when we need to check for collisions between and , the hierarchical bounding volumes are used to reduce the number of pairs of triangles under consideration. For simplicity, we will give an example using bounding spheres, although it has been noted that spheres are undesirable in many cases.
For a set of triangles, we can pre-calculate a bounding sphere. There are many ways of choosing the bounding sphere; we only assume that it is a sphere that completely contains the set and is as small as possible.
Ahead of time, we can compute the bounding sphere of each of the two objects. Clearly, if these two spheres do not intersect (and that is very easy to test), then neither do the objects themselves. This is not much better than an n-body pruning algorithm, however.
A set of triangles can be split into two halves; we can do this for both objects and calculate (ahead of time) the bounding sphere of each half. The hope here is that these bounding spheres are much smaller than the sphere of the whole set. And if, for instance, a half of the first object and a half of the second have non-intersecting bounding spheres, then there is no sense in checking any triangle of the one half against any triangle of the other.
As a precomputation, we can take each physical body (represented by a set of triangles) and recursively decompose it into a binary tree, where each node represents a set of triangles and its two children represent the two halves of that set. At each node in the tree, we can pre-compute the bounding sphere of the node's triangles.
When the time comes for testing a pair of objects for collision, their bounding sphere tree can be used to eliminate many pairs of triangles.
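A minimal sketch of that precomputation and of the sphere-overlap test follows. The mean-centred enclosing sphere and the median split along the widest axis are simple choices used only for illustration; they do not produce the smallest possible spheres the text assumes.

```python
import math
from statistics import mean

def bounding_sphere(triangles):
    """A (non-minimal) enclosing sphere: centred at the mean vertex, radius to the farthest vertex."""
    verts = [v for tri in triangles for v in tri]
    centre = tuple(mean(coord) for coord in zip(*verts))
    radius = max(math.dist(centre, v) for v in verts)
    return centre, radius

def build_sphere_tree(triangles):
    """Recursively split the triangle set in half and store a bounding sphere at each node."""
    node = {"sphere": bounding_sphere(triangles), "tris": triangles, "children": []}
    if len(triangles) > 1:
        # Split along the axis with the largest spread of triangle centroids.
        centroids = [tuple(sum(c) / 3.0 for c in zip(*tri)) for tri in triangles]
        axis = max(range(3), key=lambda k: max(c[k] for c in centroids) - min(c[k] for c in centroids))
        order = sorted(range(len(triangles)), key=lambda i: centroids[i][axis])
        half = len(order) // 2
        node["children"] = [build_sphere_tree([triangles[i] for i in order[:half]]),
                            build_sphere_tree([triangles[i] for i in order[half:]])]
    return node

def spheres_overlap(sphere_a, sphere_b):
    (ca, ra), (cb, rb) = sphere_a, sphere_b
    return math.dist(ca, cb) <= ra + rb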
Many variants of the algorithms are obtained by choosing something other than a sphere for the bounding volume. If one chooses axis-aligned bounding boxes, one gets AABB trees. Oriented bounding box trees are called OBB trees. Some trees are easier to update if the underlying object changes. Some trees can accommodate higher-order primitives such as splines instead of simple triangles.
Narrow phase
Objects that cannot be definitively separated in the broad phase are passed to the narrow phase. In this phase, the objects under consideration are relatively close to each other. Still, attempts to quickly rule out the need for a full intersection test are employed first; this step is sometimes referred to as the mid-phase. Once these tests pass (i.e., the pair of objects may be colliding), more precise algorithms determine whether the objects actually intersect. If they do, the narrow phase often calculates the exact time and location of the intersection.
Bounding volumes
A quick way to potentially avoid a needlessly expensive computation is to check whether the bounding volumes enclosing the two objects intersect. If they don't, there is no need to check the actual objects. However, if the bounding volumes intersect, the more expensive computation has to be performed. In order for the bounding-volume test to add value, two properties need to be balanced: a) the cost of intersecting the bounding volumes needs to be low and b) the bounding volumes need to be tight enough so that the number of 'false positive' intersections is low. A false positive intersection in this case means that the bounding volumes intersect but the actual objects do not. Different bounding volume types offer different trade-offs for these properties.
Axis-Aligned Bounding Boxes (AABB) and cuboids are popular due to their simplicity and quick intersection tests. Bounding volumes such as Oriented Bounding Boxes (OBB), K-DOPs and convex hulls offer a tighter approximation of the enclosed shape at the expense of a more elaborate intersection test.
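The AABB test, for example, reduces to per-axis interval overlap; a minimal sketch:

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Axis-aligned boxes (given as min/max corner tuples) overlap iff their intervals overlap on every axis."""
    return all(a_min[k] <= b_max[k] and b_min[k] <= a_max[k] for k in range(len(a_min)))
```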
Bounding volumes are typically used in the early (pruning) stage of collision detection, so that only objects with overlapping bounding volumes need be compared in detail. Computing collision or overlap between bounding volumes involves additional computation; therefore, for it to be beneficial, the bounding volumes need to be relatively tight and the overhead of the extra tests needs to be low.
Exact pairwise collision detection
Objects for which pruning approaches could not rule out the possibility of a collision have to undergo an exact collision detection computation.
Collision detection between convex objects
According to the separating planes theorem, for any two disjoint convex objects, there exists a plane so that one object lies completely on one side of that plane, and the other object lies on the opposite side of that plane. This property allows the development of efficient collision detection algorithms between convex objects. Several algorithms are available for finding the closest points on the surface of two convex polyhedral objects and determining collision. Early work by Ming C. Lin, which used a variation on the simplex algorithm from linear programming, and the Gilbert–Johnson–Keerthi distance algorithm are two such examples. These algorithms approach constant time when applied repeatedly to pairs of stationary or slow-moving objects, with every step initialized from the previous collision check.
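GJK and Lin's algorithm are too long to sketch here, but the separating-plane idea itself can be illustrated for 2D convex polygons: if none of the candidate axes (the edge normals of either polygon) separates the projected vertices, the polygons intersect. A minimal sketch:

```python
def convex_polygons_intersect(poly_a, poly_b):
    """Separating-axis test for convex 2D polygons given as vertex lists [(x, y), ...]."""
    def axes(poly):
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            yield (y1 - y2, x2 - x1)          # a normal of the edge; its orientation does not matter here

    for ax, ay in list(axes(poly_a)) + list(axes(poly_b)):
        proj_a = [x * ax + y * ay for x, y in poly_a]
        proj_b = [x * ax + y * ay for x, y in poly_b]
        if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
            return False                      # found a separating plane (a line, in 2D)
    return True
```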
The result of all this algorithmic work is that collision detection can be done efficiently for thousands of moving objects in real time on typical personal computers and game consoles.
A priori pruning
Where most of the objects involved are fixed, as is typical of video games, a priori methods using precomputation can be used to speed up execution.
Pruning is also desirable here, both n-body pruning and pairwise pruning, but the algorithms must take time and the types of motions used in the underlying physical system into consideration.
When it comes to the exact pairwise collision detection, this is highly trajectory dependent, and one almost has to use a numerical root-finding algorithm to compute the instant of impact.
As an example, consider two triangles moving in time. At any point in time, the two triangles can be checked for intersection using the twenty planes previously mentioned. However, we can do better, since these twenty planes can all be tracked in time. Each of the twenty planes needs to be tracked against three vertices, which gives sixty values to track. Using a root finder on these sixty functions produces the exact collision times for the two given triangles and their trajectories. We note here that if the trajectories of the vertices are assumed to be linear polynomials in time, then the final sixty functions are in fact cubic polynomials, and in this exceptional case, it is possible to locate the exact collision time using the formula for the roots of the cubic. Some numerical analysts suggest that using the formula for the roots of the cubic is not as numerically stable as using a root finder for polynomials.
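A minimal sketch of the root-finding step for one such polynomial, using numpy's generic polynomial root finder; the coefficient vector is assumed to come from the tracked plane/vertex functions described above.

```python
import numpy as np

def earliest_impact_time(coeffs, t_max=1.0):
    """Smallest real root in [0, t_max] of a polynomial given by `coeffs`
    (highest power first, e.g. a cubic [a3, a2, a1, a0]); None if no such root exists."""
    roots = np.roots(coeffs)
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= t_max]
    return min(real) if real else None
```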
Triangle centroid segments
A triangle mesh object is commonly used in 3D body modeling. Normally the collision function is a triangle-to-triangle intersection test or a test against a bounding shape associated with the mesh. A triangle centroid is a center-of-mass location, such that the triangle would balance on a pencil tip placed there. The simulation need only add a centroid dimension to the physics parameters. Given centroid points in both object and target, it is possible to define the line segment connecting these two points.
The position vector of the centroid of a triangle is the average of the position vectors of its vertices. So if its vertices have Cartesian coordinates (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3), then the centroid is ((x1 + x2 + x3)/3, (y1 + y2 + y3)/3, (z1 + z2 + z3)/3).
The length of the line segment between two 3D points (x1, y1, z1) and (x2, y2, z2) is √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²).
The length of the segment is compared against an adjustable "hit" criterion. As the objects approach, the length decreases toward the threshold value, and a sphere around each triangle becomes the effective geometry test. A sphere centered at the centroid can be sized to encompass all the triangle's vertices.
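A minimal sketch combining the two formulas above into the described test; the hit-distance threshold is left to the caller.

```python
import math

def centroid(tri):
    """Centroid of a triangle given as three (x, y, z) vertices."""
    return tuple(sum(coord) / 3.0 for coord in zip(*tri))

def centroid_hit(tri_a, tri_b, hit_distance):
    """'Hit' when the segment joining the two centroids is no longer than the chosen threshold.
    Choosing the threshold as the sum of the two triangles' circumscribing-sphere radii
    gives the triangle-sphere test described above."""
    return math.dist(centroid(tri_a), centroid(tri_b)) <= hit_distance
```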
Usage
Collision detection in computer simulation
Physical simulators differ in the way they react to a collision. Some use the softness of the material to calculate a force, which resolves the collision over the following time steps as it would in reality. This is very CPU intensive for stiff (low-softness) materials. Some simulators estimate the time of collision by linear interpolation, roll back the simulation, and calculate the collision by the more abstract methods of conservation laws.
Some iterate the linear interpolation (Newton's method) to calculate the time of collision with a much higher precision than the rest of the simulation. Collision detection utilizes time coherence to allow even finer time steps without much increasing CPU demand, such as in air traffic control.
After an inelastic collision, special states of sliding and resting can occur and, for example, the Open Dynamics Engine uses constraints to simulate them. Constraints avoid inertia and thus instability. Implementation of rest by means of a scene graph avoids drift.
In other words, physical simulators usually function one of two ways: where the collision is detected a posteriori (after the collision occurs) or a priori (before the collision occurs). In addition to the a posteriori and a priori distinction, almost all modern collision detection algorithms are broken into a hierarchy of algorithms. Often the terms "discrete" and "continuous" are used rather than a posteriori and a priori.
A posteriori (discrete) versus a priori (continuous)
In the a posteriori case, the physical simulation is advanced by a small step, then checked to see if any objects are intersecting or visibly considered intersecting. At each simulation step, a list of all intersecting bodies is created, and the positions and trajectories of these objects are "fixed" to account for the collision. This method is called a posteriori because it typically misses the actual instant of collision, and only catches the collision after it has actually happened.
In the a priori methods, there is a collision detection algorithm which will be able to predict very precisely the trajectories of the physical bodies. The instants of collision are calculated with high precision, and the physical bodies never actually interpenetrate. This is called a priori because the collision detection algorithm calculates the instants of collision before it updates the configuration of the physical bodies.
The main benefits of the a posteriori methods are as follows. In this case, the collision detection algorithm need not be aware of the myriad of physical variables; a simple list of physical bodies is fed to the algorithm, and the program returns a list of intersecting bodies. The collision detection algorithm doesn't need to understand friction, elastic collisions, or worse, nonelastic collisions and deformable bodies. In addition, the a posteriori algorithms are in effect one dimension simpler than the a priori algorithms. An a priori algorithm must deal with the time variable, which is absent from the a posteriori problem.
On the other hand, a posteriori algorithms cause problems in the "fixing" step, where intersections (which aren't physically correct) need to be corrected. Moreover, if the discrete step is too large, the collision could go undetected, resulting in an object which passes through another if it is sufficiently fast or small.
The benefits of the a priori algorithms are increased fidelity and stability. It is difficult (but not completely impossible) to separate the physical simulation from the collision detection algorithm. However, in all but the simplest cases, the problem of determining ahead of time when two bodies will collide (given some initial data) has no closed form solution—a numerical root finder is usually involved.
Some objects are in resting contact, that is, in collision, but neither bouncing off, nor interpenetrating, such as a vase resting on a table. In all cases, resting contact requires special treatment: If two objects collide (a posteriori) or slide (a priori) and their relative motion is below a threshold, friction becomes stiction and both objects are arranged in the same branch of the scene graph.
Video games
Video games have to split their very limited computing time between several tasks. Despite this resource limit, and the use of relatively primitive collision detection algorithms, programmers have been able to create believable, if inexact, systems for use in games.
For a long time, video games had a very limited number of objects to treat, and so checking all pairs was not a problem. In two-dimensional games, in some cases, the hardware was able to efficiently detect and report overlapping pixels between sprites on the screen. In other cases, simply tiling the screen and binding each sprite into the tiles it overlaps provides sufficient pruning, and for pairwise checks, bounding rectangles or circles called hitboxes are used and deemed sufficiently accurate.
Three-dimensional games have used spatial partitioning methods for n-body pruning, and for a long time used one or a few spheres per actual 3D object for pairwise checks. Exact checks are very rare, except in games attempting to simulate reality closely. Even then, exact checks are not necessarily used in all cases.
Because games do not need to mimic actual physics, stability is not as much of an issue. Almost all games use a posteriori collision detection, and collisions are often resolved using very simple rules. For instance, if a character becomes embedded in a wall, they might be simply moved back to their last known good location. Some games will calculate the distance the character can move before getting embedded into a wall, and only allow them to move that far.
In many cases for video games, approximating the characters by a point is sufficient for the purpose of collision detection with the environment. In this case, binary space partitioning trees provide a viable, efficient and simple algorithm for checking if a point is embedded in the scenery or not. Such a data structure can also be used to handle "resting position" situation gracefully when a character is running along the ground. Collisions between characters, and collisions with projectiles and hazards, are treated separately.
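A minimal sketch of such a point query; the dictionary-based node layout (a splitting plane with front/back children, and solid/empty leaves) is an assumption of the sketch, not a standard format.

```python
def point_in_solid(node, point):
    """Walk a BSP tree to decide whether `point` lies inside the scenery.

    Internal nodes are assumed to hold a plane as (normal, offset) plus 'front'/'back' children;
    leaves hold 'leaf': True and a boolean 'solid' flag.
    """
    while not node.get("leaf", False):
        normal, offset = node["plane"]
        side = sum(n * p for n, p in zip(normal, point)) - offset
        node = node["front"] if side >= 0.0 else node["back"]
    return node["solid"]
```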
A robust simulator is one that will react to any input in a reasonable way. For instance, if we imagine a high speed racecar video game, from one simulation step to the next, it is conceivable that the cars would advance a substantial distance along the race track. If there is a shallow obstacle on the track (such as a brick wall), it is not entirely unlikely that the car will completely leap over it, and this is very undesirable. In other instances, the "fixing" that posteriori algorithms require isn't implemented correctly, resulting in bugs that can trap characters in walls or allow them to pass through them and fall into an endless void where there may or may not be a deadly bottomless pit, sometimes referred to as "black hell", "blue hell", or "green hell", depending on the predominant color. These are the hallmarks of a failing collision detection and physical simulation system. Big Rigs: Over the Road Racing is an infamous example of a game with a failing or possibly missing collision detection system.
Hitbox
A hitbox is an invisible shape commonly used in video games for real-time collision detection; it is a type of bounding box. It is often a rectangle (in 2D games) or cuboid (in 3D) that is attached to and follows a point on a visible object (such as a model or a sprite). Circular or spheroidal shapes are also common, though they are still most often called "boxes". It is common for animated objects to have hitboxes attached to each moving part to ensure accuracy during motion.
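For circular hitboxes, for instance, the overlap test is a single squared-distance comparison; a minimal sketch:

```python
def circle_hitboxes_overlap(ax, ay, ar, bx, by, br):
    """Two circular hitboxes overlap when the distance between centres is at most the sum of radii.
    Compared in squared form to avoid a square root."""
    dx, dy = bx - ax, by - ay
    return dx * dx + dy * dy <= (ar + br) ** 2
```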
Hitboxes are used to detect "one-way" collisions such as a character being hit by a punch or a bullet. They are unsuitable for the detection of collisions with feedback (e.g. bumping into a wall) due to the difficulty experienced by both humans and AI in managing a hitbox's ever-changing locations; these sorts of collisions are typically handled with much simpler axis-aligned bounding boxes instead. Players may use the term "hitbox" to refer to these types of interactions regardless.
A hurtbox is a hitbox used to detect incoming sources of damage. In this context, the term hitbox is typically reserved for those which deal damage. For example, an attack may only land if the hitbox around an attacker's punch connects with one of the opponent's hurtboxes on their body, while opposing hitboxes colliding may result in the players trading or cancelling blows, and opposing hurtboxes do not interact with each other. The term is not standardized across the industry; some games reverse their definitions of hitbox and hurtbox, while others only use "hitbox" for both sides.
See also
Collision response
Hit-testing
Bounding volume
Game physics
Gilbert–Johnson–Keerthi distance algorithm
Minkowski Portal Refinement
Physics engine
Lubachevsky–Stillinger algorithm
Ragdoll physics
References
External links
University of North Carolina at Chapel Hill collision detection research website
Prof. Steven Cameron (Oxford University) web site on collision detection
How to Avoid a Collision by George Beck, Wolfram Demonstrations Project.
Bounding boxes and their usage
Separating Axis Theorem
Unity 3D Collision
Godot Physics Collision
Computational geometry
Computer graphics
Video game development
Computer physics engines
Robotics engineering | Collision detection | [
"Mathematics",
"Technology",
"Engineering"
] | 4,792 | [
"Computational geometry",
"Computational mathematics",
"Computer engineering",
"Robotics engineering"
] |
171,560 | https://en.wikipedia.org/wiki/Metropolis%20light%20transport | Metropolis light transport (MLT) is a global illumination application of a Monte Carlo method called the Metropolis–Hastings algorithm to the rendering equation for generating images from detailed physical descriptions of three-dimensional scenes.
The procedure constructs paths from the eye to a light source using bidirectional path tracing, then constructs slight modifications to the path. Some careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. In short, the algorithm generates a path and stores the path's 'nodes' in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new 'nodes' to add and whether or not these new nodes will actually create a new path.
Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing.
Energy Redistribution Path Tracing (ERPT) uses Metropolis sampling-like mutation strategies instead of an intermediate probability distribution step.
See also
Nicholas Metropolis – The physicist after whom the algorithm is named
Renderers using MLT:
Arion – A commercial unbiased renderer based on path tracing and providing an MLT sampler
Nvidia Iray (external link) – An unbiased renderer that has an option for MLT
Kerkythea – A free unbiased 3D renderer that uses MLT
LuxCoreRender – An open source unbiased renderer that uses MLT
Mitsuba Renderer (web site) A research-oriented renderer which implements several MLT variants
Octane Render – A commercial unbiased renderer that uses MLT
Indigo Renderer (web site) – An unbiased, photorealistic GPU and CPU renderer that supports MLT and is aimed at ultimate image quality, by accurately simulating the physics of light.
References
External links
Metropolis project at Stanford
Homepage of the Mitsuba renderer
LuxRender - an open source render engine that supports MLT
Kerkythea 2008 - a freeware rendering system that uses MLT
A Practical Introduction to Metropolis Light Transport
Unbiased physically based rendering on the GPU
Monte Carlo methods
Global illumination algorithms | Metropolis light transport | [
"Physics",
"Technology"
] | 533 | [
"Monte Carlo methods",
"Computing stubs",
"Computational physics"
] |
171,728 | https://en.wikipedia.org/wiki/Drag%20equation | In fluid dynamics, the drag equation is a formula used to calculate the force of drag experienced by an object due to movement through a fully enclosing fluid. The equation is:
where
is the drag force, which is by definition the force component in the direction of the flow velocity,
is the mass density of the fluid,
is the flow velocity relative to the object,
is the reference area, and
is the drag coefficient – a dimensionless coefficient related to the object's geometry and taking into account both skin friction and form drag. If the fluid is a liquid, depends on the Reynolds number; if the fluid is a gas, depends on both the Reynolds number and the Mach number.
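A minimal sketch of evaluating the formula in SI units; the cyclist-like numbers in the comment are purely illustrative.

```python
def drag_force(rho, u, c_d, area):
    """F_D = 0.5 * rho * u**2 * C_D * A  (SI units: kg/m^3, m/s, dimensionless, m^2 -> N)."""
    return 0.5 * rho * u**2 * c_d * area

# Illustrative numbers only: rho ~ 1.2 kg/m^3, u = 10 m/s, C_D ~ 0.3, A ~ 1.0 m^2
# gives roughly 0.5 * 1.2 * 100 * 0.3 = 18 N.
print(drag_force(1.2, 10.0, 0.3, 1.0))
```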
The equation is attributed to Lord Rayleigh, who originally used L² in place of A (with L being some linear dimension).
The reference area A is typically defined as the area of the orthographic projection of the object on a plane perpendicular to the direction of motion. For non-hollow objects with simple shape, such as a sphere, this is exactly the same as the maximal cross sectional area. For other objects (for instance, a rolling tube or the body of a cyclist), A may be significantly larger than the area of any cross section along any plane perpendicular to the direction of motion. Airfoils use the square of the chord length as the reference area; since airfoil chords are usually defined with a length of 1, the reference area is also 1. Aircraft use the wing area (or rotor-blade area) as the reference area, which makes for an easy comparison to lift. Airships and bodies of revolution use the volumetric coefficient of drag, in which the reference area is the square of the cube root of the airship's volume. Sometimes different reference areas are given for the same object in which case a drag coefficient corresponding to each of these different areas must be given.
For sharp-cornered bluff bodies, like square cylinders and plates held transverse to the flow direction, this equation is applicable with the drag coefficient as a constant value when the Reynolds number is greater than 1000. For smooth bodies, like a cylinder, the drag coefficient may vary significantly until Reynolds numbers up to 107 (ten million).
Discussion
The equation is more easily understood for the idealized situation where all of the fluid impinges on the reference area and comes to a complete stop, building up stagnation pressure over the whole area. No real object exactly corresponds to this behavior. C_D is the ratio of drag for any real object to that of the ideal object. In practice a rough un-streamlined body (a bluff body) will have a C_D around 1, more or less. Smoother objects can have much lower values of C_D. The equation is precise – it simply provides the definition of C_D (the drag coefficient), which varies with the Reynolds number and is found by experiment.
Of particular importance is the u² dependence on flow velocity, meaning that fluid drag increases with the square of flow velocity. When flow velocity is doubled, for example, not only does the fluid strike with twice the flow velocity, but twice the mass of fluid strikes per second. Therefore, the change of momentum per time, i.e. the force experienced, is multiplied by four. This is in contrast with solid-on-solid dynamic friction, which generally has very little velocity dependence.
Relation with dynamic pressure
The drag force can also be specified as
F_D = P_D A
where P_D is the pressure exerted by the fluid on area A. Here the pressure P_D is referred to as dynamic pressure due to the kinetic energy of the fluid experiencing relative flow velocity u. This is defined in similar form as the kinetic energy equation:
P_D = (1/2) ρ u²
Derivation
The drag equation may be derived to within a multiplicative constant by the method of dimensional analysis. If a moving fluid meets an object, it exerts a force on the object. Suppose that the fluid is a liquid, and the variables involved – under some conditions – are the:
speed u,
fluid density ρ,
kinematic viscosity ν of the fluid,
size of the body, expressed in terms of its wetted area A, and
drag force Fd.
Using the algorithm of the Buckingham π theorem, these five variables can be reduced to two dimensionless groups:
drag coefficient cd and
Reynolds number Re.
That this is so becomes apparent when the drag force Fd is expressed as part of a function of the other variables in the problem:
This rather odd form of expression is used because it does not assume a one-to-one relationship. Here, fa is some (as-yet-unknown) function that takes five arguments. Now the right-hand side is zero in any system of units; so it should be possible to express the relationship described by fa in terms of only dimensionless groups.
There are many ways of combining the five arguments of fa to form dimensionless groups, but the Buckingham π theorem states that there will be two such groups. The most appropriate are the Reynolds number, given by
and the drag coefficient, given by
Thus the function of five variables may be replaced by another function of only two variables:
where fb is some function of two arguments.
The original law is then reduced to a law involving only these two numbers.
Because the only unknown in the above equation is the drag force Fd, it is possible to express it as
Thus the force is simply ρ A u² times some (as-yet-unknown) function fc of the Reynolds number Re – a considerably simpler system than the original five-argument function given above.
Dimensional analysis thus makes a very complex problem (trying to determine the behavior of a function of five variables) a much simpler one: the determination of the drag as a function of only one variable, the Reynolds number.
If the fluid is a gas, certain properties of the gas influence the drag and those properties must also be taken into account. Those properties are conventionally considered to be the absolute temperature of the gas, and the ratio of its specific heats. These two properties determine the speed of sound in the gas at its given temperature. The Buckingham pi theorem then leads to a third dimensionless group, the ratio of the relative velocity to the speed of sound, which is known as the Mach number. Consequently when a body is moving relative to a gas, the drag coefficient varies with the Mach number and the Reynolds number.
The analysis also gives other information for free, so to speak. The analysis shows that, other things being equal, the drag force will be proportional to the density of the fluid. This kind of information often proves to be extremely valuable, especially in the early stages of a research project.
Air viscosity in a rotating sphere
The effect of air viscosity on a rotating sphere is described by a coefficient similar to the drag coefficient in the drag equation.
Experimental methods
To empirically determine the Reynolds number dependence, instead of experimenting on a large body with fast-flowing fluids (such as real-size airplanes in wind tunnels), one may just as well experiment using a small model in a flow of higher velocity because these two systems deliver similitude by having the same Reynolds number. If the same Reynolds number and Mach number cannot be achieved just by using a flow of higher velocity it may be advantageous to use a fluid of greater density or lower viscosity.
See also
Aerodynamic drag
Angle of attack
Morison equation
Newton's sine-square law of air resistance
Stall (flight)
Terminal velocity
References
External links
Drag (physics)
Equations of fluid dynamics
Aircraft wing design | Drag equation | [
"Physics",
"Chemistry"
] | 1,506 | [
"Drag (physics)",
"Equations of fluid dynamics",
"Equations of physics",
"Fluid dynamics"
] |
171,816 | https://en.wikipedia.org/wiki/The%20Extended%20Phenotype | The Extended Phenotype is a 1982 book by the evolutionary biologist Richard Dawkins, in which the author introduced a biological concept of the same name. The book's main idea is that phenotype should not be limited to biological processes such as protein biosynthesis or tissue growth, but extended to include all effects that a gene has on its environment, inside or outside the body of the individual organism.
Dawkins considers The Extended Phenotype to be a sequel to The Selfish Gene (1976) aimed at professional biologists, and as his principal contribution to evolutionary theory.
Summary
Genes as the unit of selection in evolution
The central thesis of The Extended Phenotype, and of its predecessor by the same author, The Selfish Gene, is that individual organisms are not the true units of natural selection. Instead, the gene — or the 'active, germ-line replicator' — is the unit upon which the forces of evolutionary selection and adaptation act. It is genes that succeed or fail in evolution, meaning that they either succeed or fail in replicating themselves across multiple generations.
These replicators are not subject to natural selection directly, but indirectly through their "phenotypical effects". These effects are all the effects that the gene (or replicator) has on the world at large, not just in the body of the organism in which it is contained. In taking as its starting point the gene as the unit of selection, The Extended Phenotype is a direct extension of Dawkins' first book, The Selfish Gene.
Genes synthesise only proteins
Dawkins argues that the only thing that genes control directly is the synthesis of proteins; restricting the idea of the phenotype to apply only to the phenotypic expression of an organism's genes in its own body is an arbitrary limitation that ignores the effect a gene may have on an organism's environment through that organism's behaviour.
Genes may affect more than the organism's body
Dawkins proposes there are three forms of extended phenotype. The first is the capacity of animals to modify their environment using architectural constructions, for which Dawkins provides as examples caddis houses and beaver dams.
The second form is manipulation of other organisms: The morphology of a living organism, and possibly of that organism's behaviour, may influence not just the fitness of the organism itself, but that of other living organisms as well. One example of this is parasite manipulation. This refers to the capacity, found in some parasite-host interactions, for the parasite to modify the behaviour of the host in a way that enhances the parasite's own fitness. One well-known example of this second type of extended phenotype is the suicidal drowning of crickets infected by hairworm, a behaviour that is essential to the parasite's reproductive cycle. Another example is seen in female mosquitoes carrying malaria parasites. The mosquitoes infected with the parasites whose preferred hosts are humans have been shown in a field experiment to be significantly more attracted to human breath and odours than uninfected mosquitoes when the parasites are at a point in their life cycle where they can infect a human target.
The third form of extended phenotype is action at a distance of the parasite on its host. A common example is the manipulation of host behaviour by cuckoo chicks, which elicit intensive feeding by the host birds. Here the cuckoo does not interact directly with the host (which could be meadow pipits, dunnocks or reed warblers). The relevant adaptation lies in the cuckoo producing eggs and chicks that resemble sufficiently those of the host species so that they are not immediately ejected from the nest. These behavioural modifications are not physically associated with individuals of the host species but influence the expression of its behavioural phenotype.
Dawkins summarizes these ideas in what he terms the Central Theorem of the Extended Phenotype:
Gene-centred view of life
In developing this argument, Dawkins aims to strengthen the case for a gene-centric view of the evolution of life forms, to the point where it is recognized that the organism itself needs to be explained. This is the challenge which he takes up in the final chapter entitled "Rediscovering the Organism". The concept of extended phenotype has been generalized in an organism-centered view of evolution with the concept of niche construction, in the case where natural selection pressures can be modified by the organisms during the evolutionary process.
Reception
A technical review of The Extended Phenotype in the Quarterly Review of Biology states that it is an "interesting and thought provoking book, once one gets to the last five chapters." In the reviewer's opinion, the book poses interesting questions, such as "What is the survival value of packaging life into discrete units called 'organisms' even though the units of selection appear to be individual 'replicators'?" The reviewer states that no "satisfactory answer is given" to this question in the book, though Dawkins suggests that replicators that "interact favorably to create 'vehicles' (organisms) may be at an advantage over those that do not (Chapter 14)." The reviewer takes issue with the first nine chapters as being essentially a defense of Dawkins' first book, The Selfish Gene.
Another review in American Scientist praises the book for convincingly promoting the idea of replication as being central to the evolutionary process. However, in the reviewer's opinion, "its main theme - that the gene is the only unit of selection - results from incorrectly interpreting the constraints on organismal adaptation and from too narrow an interpretation of replication, a process of more general relevance than the author is willing to allow."
Uses and limitations
The concept of extended phenotype has provided a useful frame for subsequent scientific work. For example, research into the relationship between "the bacterial flora of the gut and their mammalian hosts" which "has become a hot topic of late" makes use of this concept.
Subsequent proponents expand the theory and posit that many organisms within an ecosystem can alter the selective pressures on all of them by modifying their environment in various ways. Dawkins himself asserted, "Extended phenotypes are worthy of the name only if they are candidate adaptations for the benefit of alleles responsible for variations in them". As an illustration, one might ask: could an architect's buildings be considered part of his or her extended phenotype, much as a beaver's dam is part of its extended phenotype? Dawkins' answer is No: in humans, an "architect's specific alleles are neither more nor less likely to be selected based on the design of his or her latest building."
See also
Group selection
Inclusive fitness
Kin selection
References
External links
The Tactless Meme - by Jon Seger, New Scientist
Book profile - from The World of Richard Dawkins
1982 non-fiction books
Books about evolution
Books by Richard Dawkins
English-language non-fiction books
English non-fiction books
Evolutionary biology concepts
Modern synthesis (20th century)
Oxford University Press books
Sequel books | The Extended Phenotype | [
"Biology"
] | 1,438 | [
"Evolutionary biology concepts"
] |
8,808,087 | https://en.wikipedia.org/wiki/Fatty%20acid%20synthase | Fatty acid synthase (FAS) is an enzyme that in humans is encoded by the FASN gene.
Fatty acid synthase is a multi-enzyme protein that catalyzes fatty acid synthesis. It is not a single enzyme but a whole enzymatic system composed of two identical 272 kDa multifunctional polypeptides, in which substrates are handed from one functional domain to the next.
Its main function is to catalyze the synthesis of palmitate (C16:0, a long-chain saturated fatty acid) from acetyl-CoA and malonyl-CoA, in the presence of NADPH.
The fatty acids are synthesized by a series of decarboxylative Claisen condensation reactions from acetyl-CoA and malonyl-CoA. Following each round of elongation the beta keto group is reduced to the fully saturated carbon chain by the sequential action of a ketoreductase (KR), dehydratase (DH), and enoyl reductase (ER). The growing fatty acid chain is carried between these active sites while attached covalently to the phosphopantetheine prosthetic group of an acyl carrier protein (ACP), and is released by the action of a thioesterase (TE) upon reaching a carbon chain length of 16 (palmitic acid).
Classes
There are two principal classes of fatty acid synthases.
Type I systems utilise a single large, multifunctional polypeptide and are common to both animals and fungi (although the structural arrangement of fungal and animal syntheses differ). A Type I fatty acid synthase system is also found in the CMN group of bacteria (corynebacteria, mycobacteria, and nocardia). In these bacteria, the FAS I system produces palmitic acid, and cooperates with the FAS II system to produce a greater diversity of lipid products.
Type II is found in archaea, bacteria and plant plastids, and is characterized by the use of discrete, monofunctional enzymes for fatty acid synthesis. Inhibitors of this pathway (FASII) are being investigated as possible antibiotics.
The mechanism of FAS I and FAS II elongation and reduction is the same, as the domains of the FAS II enzymes are largely homologous to their domain counterparts in FAS I multienzyme polypeptides. However, the differences in the organization of the enzymes - integrated in FAS I, discrete in FAS II - gives rise to many important biochemical differences.
The evolutionary history of fatty acid synthases are very much intertwined with that of polyketide synthases (PKS). Polyketide synthases use a similar mechanism and homologous domains to produce secondary metabolite lipids. Furthermore, polyketide synthases also exhibit a Type I and Type II organization. FAS I in animals is thought to have arisen through modification of PKS I in fungi, whereas FAS I in fungi and the CMN group of bacteria seem to have arisen separately through the fusion of FAS II genes.
Structure
Mammalian FAS consists of a homodimer of two identical protein subunits, in which three catalytic domains in the N-terminal section (β-ketoacyl synthase (KS), malonyl/acetyltransferase (MAT), and dehydrase (DH)) are separated by a core region (known as the interdomain) of 600 residues from four C-terminal domains (enoyl reductase (ER), β-ketoacyl reductase (KR), acyl carrier protein (ACP) and thioesterase (TE)). The interdomain region allows the two monomeric domains to form a dimer.
The conventional model for organization of FAS (see the 'head-to-tail' model on the right) is largely based on the observations that the bifunctional reagent 1,3-dibromopropanone (DBP) is able to crosslink the active site cysteine thiol of the KS domain in one FAS monomer with the phosphopantetheine prosthetic group of the ACP domain in the other monomer. Complementation analysis of FAS dimers carrying different mutations on each monomer has established that the KS and MAT domains can cooperate with the ACP of either monomer, and a reinvestigation of the DBP crosslinking experiments revealed that the KS active site Cys161 thiol could be crosslinked to the ACP 4'-phosphopantetheine thiol of either monomer. In addition, it has been recently reported that a heterodimeric FAS containing only one competent monomer is capable of palmitate synthesis.
The above observations seemed incompatible with the classical 'head-to-tail' model for FAS organization, and an alternative model has been proposed, predicting that the KS and MAT domains of both monomers lie closer to the center of the FAS dimer, where they can access the ACP of either subunit (see figure on the top right).
A low resolution X-ray crystallography structure of both pig (homodimer) and yeast FAS (heterododecamer) along with a ~6 Å resolution electron cryo-microscopy (cryo-EM) yeast FAS structure have been solved.
Substrate shuttling mechanism
The solved structures of yeast FAS and mammalian FAS show two distinct organizations of highly conserved catalytic domains/enzymes in this multi-enzyme cellular machine. Yeast FAS has a highly efficient rigid barrel-like structure with 6 reaction chambers which synthesize fatty acids independently, while the mammalian FAS has an open flexible structure with only two reaction chambers. However, in both cases the conserved ACP acts as the mobile domain responsible for shuttling the intermediate fatty acid substrates to various catalytic sites. A first direct structural insight into this substrate shuttling mechanism was obtained by cryo-EM analysis, where ACP is observed bound to the various catalytic domains in the barrel-shaped yeast fatty acid synthase. The cryo-EM results suggest that the binding of ACP to various sites is asymmetric and stochastic, as also indicated by computer-simulation studies.
Regulation
Metabolism and homeostasis of fatty acid synthase is transcriptionally regulated by Upstream Stimulatory Factors (USF1 and USF2) and sterol regulatory element binding protein-1c (SREBP-1c) in response to feeding/insulin in living animals.
Although liver X receptors (LXRs) modulate the expression of sterol regulatory element binding protein-1c (SREBP-1c) in feeding, regulation of FAS by SREBP-1c is USF-dependent.
Acylphloroglucinols isolated from the fern Dryopteris crassirhizoma show a fatty acid synthase inhibitory activity.
Clinical significance
The FASN gene has been investigated as a possible oncogene. FAS is upregulated in breast and gastric cancers, as well as being an indicator of poor prognosis, and so may be worthwhile as a chemotherapeutic target. FAS inhibitors are therefore an active area of drug discovery research.
FAS may also be involved in the production of an endogenous ligand for the nuclear receptor PPARalpha, the target of the fibrate drugs for hyperlipidemia, and is being investigated as a possible drug target for treating the metabolic syndrome. Orlistat which is a gastrointestinal lipase inhibitor also inhibits FAS and has a potential as a medicine for cancer.
In some cancer cell lines, this protein has been found to be fused with estrogen receptor alpha (ER-alpha), in which the N-terminus of FAS is fused in-frame with the C-terminus of ER-alpha.
An association with uterine leiomyomata has been reported.
See also
Discovery and development of gastrointestinal lipase inhibitors
Fatty acid synthesis
Fatty acid metabolism
Fatty acid degradation
Enoyl-acyl carrier protein reductase
List of fatty acid metabolism disorders
References
Further reading
External links
Fatty Acid Synthesis: Rensselaer Polytechnic Institute
Fatty Acid Synthase: RCSB PDB Molecule of the Month
3D electron microscopy structures of fatty acid synthase from the EM Data Bank(EMDB)
PDBe-KB provides an overview of all the structure information available in the PDB for Human Fatty acid synthase
Transferases
EC 2.3.1
Metabolism
Fatty acids
NADPH-dependent enzymes
Enzymes of known structure | Fatty acid synthase | [
"Chemistry",
"Biology"
] | 1,816 | [
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
8,810,330 | https://en.wikipedia.org/wiki/Zonal%20and%20meridional%20flow | Zonal and meridional flow are directions and regions of fluid flow on a globe.
Zonal flow follows a pattern along latitudinal lines, latitudinal circles or in the west–east direction.
Meridional flow follows a pattern from north to south, or from south to north, along the Earth's longitude lines, longitudinal circles (meridian) or in the north–south direction.
These terms are often used in the atmospheric and earth sciences to describe global phenomena, such as "meridional wind", or "zonal average temperature".
In the context of physics, zonal flow connotes a tendency of flux to conform to a pattern parallel to the equator of a sphere. In meteorological terms regarding atmospheric circulation, zonal flow brings a temperature contrast along the Earth's longitude. Extratropical cyclones in zonal flows tend to be weaker, moving faster and producing relatively little impact on local weather.
Extratropical cyclones in meridional flows tend to be stronger and move slower. This pattern is responsible for most instances of extreme weather, as not only are storms stronger in this type of flow regime, but temperatures can reach extremes as well, producing heat waves and cold waves depending on the equator-ward or poleward direction of the flow.
For vector fields (such as wind velocity), the zonal component (or x-coordinate) is denoted as u, while the meridional component (or y-coordinate) is denoted as v.
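A small sketch of that decomposition for wind reports, assuming the common meteorological convention that wind direction is the direction the wind blows from, measured in degrees clockwise from north:

```python
import math

def wind_to_uv(speed, direction_deg):
    """Zonal (u, eastward) and meridional (v, northward) wind components.
    With the from-direction convention, a northerly wind (direction 360) gives u = 0, v = -speed,
    and a westerly wind (direction 270) gives u = +speed, v = 0."""
    rad = math.radians(direction_deg)
    u = -speed * math.sin(rad)
    v = -speed * math.cos(rad)
    return u, v
```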
In plasma physics, "zonal flow" means poloidal, which is the opposite from the meaning in planetary atmospheres and weather/climate studies.
See also
Meridione
Zonal flow (plasma)
Zonal/poloidal
Notes
Orientation (geometry) | Zonal and meridional flow | [
"Physics",
"Mathematics"
] | 353 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
8,812,149 | https://en.wikipedia.org/wiki/CXCL2 | Chemokine (C-X-C motif) ligand 2 (CXCL2) is a small cytokine belonging to the CXC chemokine family that is also called macrophage inflammatory protein 2-alpha (MIP2-alpha), Growth-regulated protein beta (Gro-beta) and Gro oncogene-2 (Gro-2). CXCL2 is 90% identical in amino acid sequence as a related chemokine, CXCL1. This chemokine is secreted by monocytes and macrophages and is chemotactic for polymorphonuclear leukocytes and hematopoietic stem cells. The gene for CXCL2 is located on human chromosome 4 in a cluster of other CXC chemokines. CXCL2 mobilizes cells by interacting with a cell surface chemokine receptor called CXCR2.
CXCL2, like related chemokines, is also a powerful neutrophil chemoattractant and is involved in many immune responses including wound healing, cancer metastasis, and angiogenesis. A study was published in 2013 testing the role of CXCL2, CXCL3, and CXCL1 in the migration of airway smooth muscle cells (ASMCs), which plays a significant role in asthma. The results of this study showed that CXCL2 and CXCL3 both help with the mediation of normal and asthmatic ASMC migration through different mechanisms.
Clinical development
CXCL2 in combination with the CXCR4 inhibitor plerixafor rapidly mobilizes hematopoietic stem cells into the peripheral blood.
This rapid peripheral blood stem cell mobilization regimen entered Phase 2 clinical trials in 2021 in development by Magenta Therapeutics as a new method to collect stem cells for bone marrow transplantation.
References
Cytokines | CXCL2 | [
"Chemistry"
] | 406 | [
"Cytokines",
"Signal transduction"
] |
8,812,794 | https://en.wikipedia.org/wiki/Molecular%20replacement | Molecular replacement (MR) is a method of solving the phase problem in X-ray crystallography. MR relies upon the existence of a previously solved protein structure which is similar to our unknown structure from which the diffraction data is derived. This could come from a homologous protein, or from the lower-resolution protein NMR structure of the same protein.
The first goal of the crystallographer is to obtain an electron density map, the density being related to the diffracted waves as follows:
ρ(x, y, z) = (1/V) Σ_hkl |F_hkl| exp(iΦ_hkl) exp(−2πi(hx + ky + lz))
With usual detectors the intensity, proportional to |F_hkl|², is what is measured, and all the information about the phase (Φ) is lost. Then, in the absence of phases (Φ), we are unable to complete the shown Fourier transform relating the experimental data from X-ray crystallography (in reciprocal space) to real-space electron density, into which the atomic model is built. MR tries to find, among known structures, the model which best fits the experimental intensities.
Principles of Patterson-based molecular replacement
We can derive a Patterson map for the intensities, which is an interatomic vector map created by squaring the structure factor amplitudes and setting all phases to zero. This vector map contains a peak for each atom related to every other atom, with a large peak at 0,0,0, where vectors relating atoms to themselves "pile up". Such a map is far too noisy to derive any high resolution structural information—however if we generate Patterson maps for the data derived from our unknown structure, and from the structure of a previously solved homologue, in the correct orientation and position within the unit cell, the two Patterson maps should be closely correlated. This principle lies at the heart of MR, and can allow us to infer information about the orientation and location of an unknown molecule with its unit cell.
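The Patterson map itself is cheap to compute once the intensities are on a reciprocal-space grid; a minimal numpy sketch (assuming the structure factors have already been placed on a 3D grid indexed by hkl):

```python
import numpy as np

def patterson_map(structure_factors):
    """Given a 3D grid of complex structure factors F(hkl), return the Patterson map:
    the Fourier transform of |F|^2, i.e. the map obtained by squaring amplitudes and
    setting all phases to zero."""
    intensities = np.abs(structure_factors) ** 2
    return np.real(np.fft.ifftn(intensities))
```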
Due to historic limitations in computing power, an MR search is typically divided into two steps: rotation and translation.
Rotation function
In the rotation function, our unknown Patterson map is compared to Patterson maps derived from our known homologue structure in different orientations. Historically r-factors and/or correlation coefficients were used to score the rotation function, however, modern programs use maximum likelihood-based algorithms. The highest correlation (and therefore scores) are obtained when the two structures (known and unknown) are in similar orientation(s)—these can then be output in Euler angles or spherical polar angles.
Translation function
In the translation function, the now correctly oriented known model can be correctly positioned by translating it to the correct co-ordinates within the asymmetric unit. This is accomplished by moving the model, calculating a new Patterson map, and comparing it to the unknown-derived Patterson map. This brute-force search is computationally expensive and fast translation functions are now more commonly used. Positions with high correlations are output in Cartesian coordinates.
Using de novo predicted structures in molecular replacement
With the improvement of de novo protein structure prediction, many protocols including MR-Rosetta, QUARK, AWSEM-Suite and I-TASSER-MR can generate a lot of native-like decoy structures that are useful to solve the phase problem by molecular replacement.
The next step
Following this, we should have correctly oriented and translated phasing models, from which we can derive phases which are (hopefully) accurate enough to derive electron density maps. These can be used to build and refine an atomic model of our unknown structure.
References
External links
Phaser – One of the most commonly used molecular replacement programmes.
Molrep – Molecular replacement package within CCP4
Phaser article at PDBe – A helpful public domain introduction to the topic.
X-ray crystallography | Molecular replacement | [
"Chemistry",
"Materials_science"
] | 743 | [
"X-ray crystallography",
"Crystallography"
] |
3,071,999 | https://en.wikipedia.org/wiki/Sunroom | A sunroom, also frequently called a solarium (and sometimes a "Florida room", "garden conservatory", "garden room", "patio room", "sun parlor", "sun porch", "three season room" or "winter garden"), is a room that permits abundant daylight and views of the landscape while sheltering from adverse weather. Sunroom and solarium have the same denotation: solarium is Latin for "place of sun[light]". Solaria of various forms have been erected throughout European history. Currently, the sunroom or solarium is popular in Europe, Canada, the United States, Australia, and New Zealand. Sunrooms may feature passive solar building design to heat and illuminate them.
In Great Britain, which has a long history of formal conservatories, a small conservatory is sometimes denominated a "sunroom". In gardening, a garden room is a secluded and partly enclosed outside space within a garden that creates a room-like effect.
Design
Attached sunrooms typically are constructed of transparent tempered glazing atop a brick or wood "knee wall" or framed entirely of wood, aluminum, or PVC, and glazed on all sides. Frosted glass or glass block may be used to add privacy. Screens are a fundamental aspect of a "Florida room", and jalousie windows are often featured. An integrated sunroom is specifically designed with many windows and climate controls.
A solarium is typically distinguished from a sunroom by the former being specifically and primarily designed to collect sunlight for warmth and light as opposed to being primarily designed to feature scenic views, and by being composed of walls, save one, and a roof that are entirely of framed glass. These typically are erected in higher latitude (low angle of sunlight) or cold (higher altitude) locations. In contrast, a sunroom sensu stricto has an opaque roof.
Technologies
During the 1960s, professional re-modelling companies developed affordable systems to enclose a patio or deck, offering design, installation, and full service warranties. Patio rooms featured lightweight, engineered roof panels, single pane glass, and aluminium construction.
As technology advanced, insulated glass, vinyl, and vinyl-wood composite framework appeared. More recently, specialized blinds and curtains have been developed, many electrically operated by remote control. Specialized flooring, including radiant heat, may be adapted to both attached and integrated sunrooms.
See also
Arizona room
Conservatory (greenhouse)
Observation car
Porch
Smart glass
Notes
References
External links
Glass architecture
Rooms
Room | Sunroom | [
"Materials_science",
"Engineering"
] | 518 | [
"Glass engineering and science",
"Solar design",
"Energy engineering",
"Rooms",
"Glass architecture",
"Architecture"
] |
3,073,029 | https://en.wikipedia.org/wiki/Agitator%20%28device%29 | An agitator is a device or mechanism to put something into motion by shaking or stirring. There are several types of agitation machines, including washing machine agitators (which rotate back and forth) and magnetic agitators (which contain a magnetic bar rotating in a magnetic field). Agitators can come in many sizes and varieties, depending on the application.
In general, agitators usually consist of an impeller and a shaft. An impeller is a rotor located within a tube or conduit attached to the shaft; it raises the pressure so that the fluid is driven to flow. Modern industrial agitators incorporate process control to maintain better control over the mixing process.
Washing machine agitator
In a top load washing machine the agitator projects from the bottom of the wash basket and creates the wash action by rotating back and forth, rolling garments from the top of the load, down to the bottom, then back up again.
There are several types of agitators with the most common being the "straight-vane" and "dual-action" agitators. The "straight-vane" is a one-part agitator with bottom and side fins that usually turns back and forth. The Dual-action is a two-part agitator that has bottom washer fins that move back and forth and a spiral top that rotates clockwise to help guide the clothes to the bottom washer fins.
The modern agitator, which is dual-action, was first made in Kenmore Appliances washing machines in the 1980s and remains in use to the present. These agitators are known by the company as dual-rollover and triple-rollover action agitators.
Magnetic agitator
This is a device formed by a metallic bar (called the agitation bar), which is normally covered by a plastic layer, and a sheet that has underneath it a rotating magnet or a series of electromagnets arranged in a circular form to create a rotating magnetic field. Commonly, the sheet has an arrangement of electric resistances that can heat some chemical solutions.
During the operation of a typical magnetic agitator, the agitator bar is moved inside a container such as to dissolve a substance in a liquid. The container must be placed on the sheet, so that the magnetic field influences the agitation bar and makes it rotate. This allows it to mix different substances at high speeds.
Agitation rack
An agitation rack is a special form of agitator used to store platelets. It is composed of a series of clasps attached to motorised bars, that rock the specimens of platelets gently back-and-forth. This prevents them from becoming activated and adhering to one another, which cannot be reversed by any current means.
See also
Impeller
Tedder
Mixing (disambiguation)
Mixing paddle
References
Mechanical engineering
Fluid dynamics
Laundry washing equipment | Agitator (device) | [
"Physics",
"Chemistry",
"Engineering"
] | 636 | [
"Applied and interdisciplinary physics",
"Chemical engineering",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
3,074,037 | https://en.wikipedia.org/wiki/ARGUS%20distribution | In physics, the ARGUS distribution, named after the particle physics experiment ARGUS, is the probability distribution of the reconstructed invariant mass of a decayed particle candidate in continuum background.
Definition
The probability density function (pdf) of the ARGUS distribution is:
f(x; χ, c) = (χ³ / (√(2π) Ψ(χ))) · (x / c²) · √(1 − x²/c²) · exp(−½ χ² (1 − x²/c²))
for 0 ≤ x < c. Here χ > 0 and c > 0 are parameters of the distribution and
Ψ(χ) = Φ(χ) − χ φ(χ) − 1/2,
where Φ and φ are the cumulative distribution and probability density functions of the standard normal distribution, respectively.
Cumulative distribution function
The cumulative distribution function (cdf) of the ARGUS distribution is

$$F(x)=1-\frac{\Psi\!\left(\chi\sqrt{1-x^2/c^2}\right)}{\Psi(\chi)}.$$
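As an illustrative check (not part of the original article), the pdf and cdf above can be evaluated with the argus distribution that ships with recent versions of SciPy, where the shape parameter is χ and the scale is c; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.stats import argus, norm

chi, c = 2.0, 5.0                  # curvature and cutoff (illustrative values)
rv = argus(chi, loc=0.0, scale=c)

x = np.linspace(0.0, c, 6)
print("pdf:", rv.pdf(x))           # SciPy evaluation of the ARGUS pdf
print("cdf:", rv.cdf(x))           # SciPy evaluation of the ARGUS cdf

# Direct evaluation of the closed-form pdf above, for comparison
def psi(u):
    return norm.cdf(u) - u * norm.pdf(u) - 0.5

y = x / c
pdf_manual = (chi**3 / (np.sqrt(2 * np.pi) * psi(chi))
              * (y / c) * np.sqrt(np.clip(1 - y**2, 0, None))
              * np.exp(-0.5 * chi**2 * (1 - y**2)))
print("manual pdf:", pdf_manual)
```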
Parameter estimation
Parameter c is assumed to be known (the kinematic limit of the invariant mass distribution), whereas χ can be estimated from the sample X1, …, Xn using the maximum likelihood approach. The estimator is a function of the sample second moment and is given as a solution to the non-linear equation

$$1-\frac{3}{\chi^2}+\frac{\chi\,\varphi(\chi)}{\Psi(\chi)}=\frac{1}{n}\sum_{i=1}^{n}\frac{x_i^2}{c^2}.$$
The solution exists and is unique, provided that the right-hand side is greater than 0.4; the resulting estimator is consistent and asymptotically normal.
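A minimal sketch of estimating χ from a sample when c is known, using SciPy's generic maximum-likelihood fit with location and scale frozen; the synthetic sample and the parameter values are illustrative assumptions, not data from the article.

```python
import numpy as np
from scipy.stats import argus

rng = np.random.default_rng(0)
c_known = 5.2          # kinematic limit, assumed known
chi_true = 1.7

# Draw a synthetic sample, then recover chi by maximum likelihood
sample = argus.rvs(chi_true, loc=0.0, scale=c_known, size=5000, random_state=rng)
chi_hat, loc_hat, scale_hat = argus.fit(sample, floc=0.0, fscale=c_known)
print(f"estimated chi = {chi_hat:.3f} (true value {chi_true})")
```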
Generalized ARGUS distribution
Sometimes a more general form is used to describe a more peaked distribution:

$$f(x;\chi,c,p)=\frac{2^{-p}\,\chi^{2(p+1)}}{\Gamma(p+1)-\Gamma\!\left(p+1,\tfrac{1}{2}\chi^2\right)}\cdot\frac{x}{c^2}\left(1-\frac{x^2}{c^2}\right)^{p}\exp\!\left\{-\frac{1}{2}\chi^2\left(1-\frac{x^2}{c^2}\right)\right\},$$

where Γ(·) is the gamma function and Γ(·,·) is the upper incomplete gamma function.
Here the parameters c, χ, and p represent the cutoff, curvature, and power, respectively.
The mode is

$$x_{\text{mode}}=\frac{c}{\sqrt{2}\,\chi}\sqrt{\left(\chi^2-2p-1\right)+\sqrt{\left(\chi^2-2p-1\right)^2+4\chi^2}}.$$
The mean can be written in closed form in terms of Kummer's confluent hypergeometric function M(·,·,·), and the variance also has a closed-form expression.
Setting p = 0.5 recovers the regular ARGUS distribution given above.
References
Further reading
Experimental particle physics
Continuous distributions | ARGUS distribution | [
"Physics"
] | 317 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
4,200,042 | https://en.wikipedia.org/wiki/Critical%20heat%20flux | In the study of heat transfer, critical heat flux (CHF) is the heat flux at which boiling ceases to be an effective form of transferring heat from a solid surface to a liquid.
Description
Boiling systems are those in which liquid coolant absorbs energy from a heated solid surface and undergoes a change in phase. In flow boiling systems, the saturated fluid progresses through a series of flow regimes as vapor quality is increased. In systems that utilize boiling, the heat transfer rate is significantly higher than if the fluid were a single phase (i.e. all liquid or all vapor). The more efficient heat transfer from the heated surface is due to heat of vaporization and sensible heat. Therefore, boiling heat transfer has played an important role in industrial heat transfer processes such as macroscopic heat transfer exchangers in nuclear and fossil power plants, and in microscopic heat transfer devices such as heat pipes and microchannels for cooling electronic chips.
The use of boiling as a means of heat removal is limited by a condition called critical heat flux (CHF). The most serious problem that can occur around CHF is that the temperature of the heated surface may increase dramatically due to significant reduction in heat transfer. In industrial applications such as electronics cooling or instrumentation in space, the sudden increase in temperature may possibly compromise the integrity of the device.
Two-phase heat transfer
The convective heat transfer between a uniformly heated wall and the working fluid is described by Newton's law of cooling:

$$q'' = h\,(T_w - T_f),$$

where $q''$ represents the heat flux, $h$ represents the proportionality constant called the heat transfer coefficient, $T_w$ represents the wall temperature and $T_f$ represents the fluid temperature. If $h$ decreases significantly due to the occurrence of the CHF condition, $T_w$ will increase for fixed $q''$ and $T_f$, while $q''$ will decrease for fixed $T_w$ and $T_f$.
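A small worked example of the relation above (the numbers are illustrative assumptions, not values from the article): with the heat flux held fixed, a collapse of the heat transfer coefficient at CHF forces the wall temperature to rise sharply.

```python
# Newton's law of cooling: q'' = h * (T_w - T_f)  =>  T_w = T_f + q''/h
q_flux = 1.0e6        # W/m^2, imposed heat flux (illustrative)
T_fluid = 373.0       # K, saturated water at roughly 1 atm

for h, regime in [(5.0e4, "nucleate boiling"), (5.0e2, "post-CHF film boiling")]:
    T_wall = T_fluid + q_flux / h
    print(f"{regime:>24}: h = {h:8.0f} W/m^2K -> T_wall = {T_wall:7.1f} K")
```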
Modes of CHF
The understanding of the CHF phenomenon and an accurate prediction of the CHF condition are important for the safe and economic design of many heat transfer units, including nuclear reactors, fossil fuel boilers, fusion reactors, and electronic chips. The phenomenon has therefore been investigated extensively around the world since Nukiyama first characterized it. In 1950 Kutateladze suggested the hydrodynamical theory of the burnout crisis. Much significant work has been done during the last decades with the development of water-cooled nuclear reactors. Many aspects of the phenomenon are now well understood, and several reliable prediction models are available for conditions of common interest.
The use of the term critical heat flux (CHF) is inconsistent among authors. The United States Nuclear Regulatory Commission has suggested using the term “critical boiling transition” (CBT) to indicate the phenomenon associated with a significant reduction in two-phase heat transfer. For a single species, the liquid phase generally has considerably better heat transfer properties than the vapor phase, namely thermal conductivity. So in general CBT is the result of some degree of liquid deficiency to a local position along a heated surface. The two mechanisms that result in reaching CBT are: departure from nucleate boiling (DNB) and liquid film dryout.
DNB
Departure from nucleate boiling (DNB) occurs in sub-cooled flows and bubbly flow regimes. DNB happens when many bubbles near the heated surface coalesce and impede the ability of local liquid to reach the surface. The mass of vapor between the heated surface and local liquid may be referred to as a vapor blanket.
Dryout
Dryout means the disappearance of liquid on the heat transfer surface, which results in the CBT. Dryout of the liquid film occurs in annular flow, which is characterized by a vapor core, a liquid film on the wall, and liquid droplets entrained within the core. Shear at the liquid-vapor interface drives the flow of the liquid film along the heated surface. In general, the two-phase heat transfer coefficient increases as the liquid-film thickness decreases. Dryout has been shown to occur as many individual dryout events, each spanning a finite duration and local to a position. The CBT occurs when the fraction of time a local position is subjected to dryout becomes significant. A single dryout event, or even several dryout events, may be followed by periods of sustained contact between the liquid film and the previously dry region. Many dryout events (hundreds or thousands) occurring in sequence are the mechanism for the significant reduction in heat transfer associated with dryout CBT.
Post-CHF
Post-CHF denotes the general heat transfer deterioration in a flow boiling process, where the liquid may take the form of a dispersed spray of droplets, a continuous liquid core, or a transition between the two. Post-dryout specifically denotes heat transfer deterioration when the liquid is present only as dispersed droplets; the other cases are denoted by the term post-DNB.
Correlations
The critical heat flux is an important point on the boiling curve and it may be desirable to operate a boiling process near this point. However, caution is needed when dissipating heat in excess of this amount. Zuber, through a hydrodynamic stability analysis of the problem, developed an expression that approximates this point:

$$q''_{\text{crit}} = C\,h_{fg}\,\rho_v\left[\frac{\sigma\,g\,(\rho_l-\rho_v)}{\rho_v^{2}}\right]^{1/4}$$

Units: critical flux: kW/m²; h_fg: kJ/kg; σ: N/m; ρ: kg/m³; g: m/s².
It is independent of the surface material and is weakly dependent upon the heated surface geometry described by the constant C. For large horizontal cylinders, spheres and large finite heated surfaces, the value of the Zuber constant is C ≈ 0.131. For large horizontal plates, a value of C ≈ 0.149 is more suitable.
The critical heat flux depends strongly on pressure. At low pressures (including atmospheric pressure), the pressure dependence is mainly through the change in vapor density, leading to an increase in the critical heat flux with pressure. As pressures approach the critical pressure, however, both the surface tension and the heat of vaporization converge to zero, and these become the dominant sources of the pressure dependence.
For water at 1 atm, the above equation gives a critical heat flux of approximately 1,000 kW/m².
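A short numerical sketch of the Zuber estimate for saturated water at 1 atm; the property values are standard textbook approximations, not figures taken from the article.

```python
# Zuber critical heat flux: q_crit = C * h_fg * rho_v * (sigma*g*(rho_l - rho_v)/rho_v**2)**0.25
C     = 0.131      # Zuber constant (large cylinders/spheres); ~0.149 for large flat plates
h_fg  = 2.257e6    # J/kg, latent heat of vaporization of water at 100 C (assumed)
rho_l = 958.0      # kg/m^3, saturated liquid density (assumed)
rho_v = 0.598      # kg/m^3, saturated vapor density (assumed)
sigma = 0.0589     # N/m, surface tension (assumed)
g     = 9.81       # m/s^2

q_crit = C * h_fg * rho_v * (sigma * g * (rho_l - rho_v) / rho_v**2) ** 0.25
print(f"q_crit ~ {q_crit/1e3:.0f} kW/m^2")   # on the order of 1,000 kW/m^2
```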
See also
Leidenfrost effect
Nucleate boiling
References
External links
Modeling of the boiling crisis
Film dryout near critical heat flux - video
Thermodynamics | Critical heat flux | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,248 | [
"Thermodynamics",
"Dynamical systems"
] |
4,204,003 | https://en.wikipedia.org/wiki/Franz%E2%80%93Keldysh%20effect | The Franz–Keldysh effect is a change in optical absorption by a semiconductor when an electric field is applied. The effect is named after the German physicist Walter Franz and Russian physicist Leonid Keldysh.
Karl W. Böer first observed the shift of the optical absorption edge with electric fields during the discovery of high-field domains and named it the Franz effect. A few months later, when the English translation of the Keldysh paper became available, he corrected the name to the Franz–Keldysh effect.
As originally conceived, the Franz–Keldysh effect is the result of wavefunctions "leaking" into the band gap. When an electric field is applied, the electron and hole wavefunctions become Airy functions rather than plane waves. The Airy function includes a "tail" which extends into the classically forbidden band gap. According to Fermi's golden rule, the more overlap there is between the wavefunctions of a free electron and a hole, the stronger the optical absorption will be. The Airy tails slightly overlap even if the electron and hole are at slightly different potentials (slightly different physical locations along the field). The absorption spectrum now includes a tail at energies below the band gap and some oscillations above it. This explanation does, however, omit the effects of excitons, which may dominate optical properties near the band gap.
The Franz–Keldysh effect occurs in uniform, bulk semiconductors, unlike the quantum-confined Stark effect, which requires a quantum well. Both are used for electro-absorption modulators. The Franz–Keldysh effect usually requires hundreds of volts, limiting its usefulness with conventional electronics – although this is not the case for commercially available Franz–Keldysh-effect electro-absorption modulators that use a waveguide geometry to guide the optical carrier.
Effect on modulation spectroscopy
The absorption coefficient is related to the dielectric function, in particular to its imaginary part ε₂. From Maxwell's equations one readily finds the relation

$$\alpha = \frac{\omega\,\varepsilon_2}{n_0 c},$$

where n₀ and k₀ are the real and imaginary parts of the complex refractive index of the material.
We will consider the direct transition of an electron from the valence band to the conduction band induced by the incident light in a perfect crystal, and take into account the change of the absorption coefficient for each Hamiltonian containing a relevant interaction, such as electron-photon, electron-hole, or an external field. The first aim is to establish the theoretical background of the Franz–Keldysh effect and of third-derivative modulation spectroscopy.
One electron Hamiltonian in an electro-magnetic field
where A is the vector potential and V(r) is a periodic potential.
(k_p and e are the wave vector and polarization unit vector of the electromagnetic field.)
Neglecting the term quadratic in A and using the Coulomb-gauge relation, we obtain
Then, using the Bloch functions (j = v, c, denoting the valence and conduction bands)
the transition probability can be obtained such that
The power dissipated by the electromagnetic wave per unit time and unit volume gives rise to the following equation:
From the relation between the electric field and the vector potential, we may put
Finally, we obtain the imaginary part of the dielectric constant, and hence the absorption coefficient.
Two-body (electron-hole) Hamiltonian with EM field
An electron in the valence band (wave vector k) is excited by photon absorption into the conduction band (with wave vector k_e), leaving a hole in the valence band (with wave vector k_h). In this case, we include the electron-hole interaction.
For the direct transition the treatment is almost the same, but here we assume that the slight momentum difference due to the photon absorption is not ignored, that the electron-hole bound state is very weak, and that the effective mass approximation is valid for the treatment. We can then set up the following procedure for the wave functions and wave vectors of the electron and hole
(i, j are the band indices, and re, rh, ke, kh are the coordinates and wave vectors of the electron and hole respectively)
And we can take the center of mass momentum Q such that
and define the Hamiltonian
Then, Bloch functions of the electron and hole can be constructed with the phase term
If V varies slowly over the range of the integral, the term can be treated as follows.
Here we assume that the conduction and valence bands are parabolic with scalar masses, with energies measured from the top of the valence band, i.e.
(here E_g denotes the energy gap)
Now, taking the Fourier transform and inserting it into the equation above, the effective mass equation for the exciton may be written as
The solution of this equation is given by
This solution is called the envelope function of the exciton. The ground state of the exciton is given in analogy with the hydrogen atom.
Then the dielectric function is
(a detailed calculation is given in the references).
Franz–Keldysh effect
The Franz–Keldysh effect means that an electron in the valence band can be excited into the conduction band by absorbing a photon whose energy lies below the band gap. We now consider the effective mass equation for the relative motion of the electron-hole pair when an external field is applied to the crystal, but the mutual Coulomb potential of the electron-hole pair is not included in the Hamiltonian.
When the Coulomb interaction is neglected, the effective mass equation is
The equation can also be expressed as
(where the mass appearing here is the component of the reduced effective-mass tensor along the corresponding principal axis)
Using change of variables:
then the solution is
where
For example, the solution is given by
The dielectric constant can be obtained by inserting this expression into the earlier equation and converting the summation over λ into an integral.
The resulting integral is given by the joint density of states for the two-dimensional band. (The joint density of states is simply the density of states of the electron and the hole counted together.)
where
Then we put
Now consider the case of a photon energy below the band gap, and use the asymptotic form of the Airy function in this limit.
Finally,
Therefore, the dielectric function is non-zero even for incident photon energies below the band gap. These results indicate that absorption occurs for incident photons whose energy lies below the gap.
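As a rough numerical illustration (not a derivation from the article), the standard effective-mass result expresses the field-dependent absorption as proportional to Ai′(β)² − βAi(β)², with β = (E_g − ħω)/ħθ and electro-optic energy ħθ = (e²F²ħ²/2μ)^(1/3); evaluating this with SciPy's Airy functions shows the sub-gap tail. The material parameters below (GaAs-like) are assumptions.

```python
import numpy as np
from scipy.special import airy

hbar = 1.054571817e-34    # J*s
e    = 1.602176634e-19    # C
m0   = 9.1093837015e-31   # kg

mu = 0.06 * m0            # reduced effective mass (illustrative, GaAs-like)
Eg = 1.42 * e             # band gap in J (illustrative)
F  = 2.0e7                # applied field, V/m (illustrative)

hbar_theta = (e**2 * F**2 * hbar**2 / (2.0 * mu)) ** (1.0 / 3.0)   # electro-optic energy

def fk_shape(E_photon):
    """Unnormalized Franz-Keldysh absorption shape ~ Ai'(b)^2 - b*Ai(b)^2."""
    b = (Eg - E_photon) / hbar_theta
    ai, aip, _, _ = airy(b)
    return np.sqrt(hbar_theta) * (aip**2 - b * ai**2)

for E_eV in (1.38, 1.40, 1.42, 1.44):
    print(f"hw = {E_eV:.2f} eV -> relative absorption {fk_shape(E_eV * e):.3e}")
# Non-zero values below Eg = 1.42 eV show the sub-gap absorption tail;
# above the gap the oscillatory Franz-Keldysh behaviour appears.
```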
See also
Quantum-confined Stark effect
References
General references
W. Franz, Einfluß eines elektrischen Feldes auf eine optische Absorptionskante, Z. Naturforschung 13a (1958) 484–489.
L. V. Keldysh, Behaviour of Non-Metallic Crystals in Strong Electric Fields, J. Exptl. Theoret. Phys. (USSR) 33 (1957) 994–1003, translation: Soviet Physics JETP 6 (1958) 763–770.
L. V. Keldysh, Ionization in the Field of a Strong Electromagnetic Wave, J. Exptl. Theoret. Phys. (USSR) 47 (1964) 1945–1957, translation: Soviet Physics JETP 20 (1965) 1307–1314.
J. I. Pankove, Optical Processes in Semiconductors, Dover Publications Inc. New York (1971).
H. Haug and S. W. Koch, "Quantum Theory of the Optical and Electronic Properties of Semiconductors", World Scientific (1994).
C. Kittel, "Introduction to Solid State Physics", Wiley (1996).
Optoelectronics
Electronic engineering | Franz–Keldysh effect | [
"Technology",
"Engineering"
] | 1,566 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
4,205,059 | https://en.wikipedia.org/wiki/EuropaBio | EuropaBio ("The European Association for Bioindustries") is Europe's largest and most influential biotech industry group, whose members include leading large-size healthcare and industrial biotechnology companies. EuropaBio is located in Brussels, Belgium. The organisation was initiated in 1996 to represent the interests of the biotechnology industry at the European level, and therefore influence legislation that serves the interests of biotechnology companies in Europe.
Activity and goals
EuropaBio is engaged in dialogue with the European Parliament, the European Commission, and the Council of Ministers to influence legislation on biotechnology.
EuropaBio represents two sectors of the biotech industry.
White or industrial biotechnology is the application of biotechnology for industrial purposes, including manufacturing, alternative energy (or "bioenergy") biofuels, and biomaterials.
Red or healthcare biotechnology is the application of biotechnology for the production of medicines and therapies.
EuropaBio's stated goals are:
promoting an innovative, coherent, and dynamic biotechnology-based industry in Europe;
advocating free and open markets and the removal of barriers to competitiveness with other areas of the world;
committing to an open, transparent, and informed dialogue with all stakeholders about the ethical, social, and economic aspects of biotechnology and its benefits;
championing the socially responsible use of biotechnology to ensure that its potential is fully used to the benefit of humans and their environment.
EuropaBio's primary focus is the European Union but because of the global character of the biotech business, it also represents its members in transatlantic and worldwide forums.
Organisation
EuropaBio has a board of management made up of representatives from among its industry members. Since 2023, Dr. Sarah Reisinger, representing dsm-firmenich, has chaired the board.
The board is assisted by sectoral councils representing the main segments of EuropaBio – healthcare (red biotech), and industrial (white biotech).
Additionally, National Associations are represented through the National Associations Council.
Experts from member companies and national associations participate in EuropaBio's working groups which cover a very wide range of issues and areas of concern of biotech enterprises.
Since November 2020, EuropaBio's Director General has been Dr. Claire Skentelbery.
Members
As of 2021, the association represented 79 corporate and associate members and BioRegions, as well as 17 national biotechnology associations, in turn representing over 1,800 biotech SMEs.
See also
CropLife International
European Federation of Biotechnology (EFB)
European Federation of Pharmaceutical Industries and Associations (EFPIA)
Genetically modified food controversies
Regulation of the release of genetically modified organisms
Citations
References
Transforming Europe’s position on GM food - ambassadors programme executive summary The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
Biotech group bids to recruit high-profile GM 'ambassadors' John Vidal and Hanna Gersmann, The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
Draft letter from EuropaBio to potential GM ambassadors The Guardian, Thursday 20 October 2011, Guardian News and Media Limited.
External links
EuropaBio
Biotechnology in the EU
Biotech Informa
BIO
GMO Compass
Lobbying organizations in Europe
Pan-European biotechnology organisations
Organizations established in 1996
Organisations based in Brussels
1996 establishments in Belgium | EuropaBio | [
"Engineering",
"Biology"
] | 633 | [
"Biotechnology organizations",
"Pan-European biotechnology organisations"
] |
1,581,131 | https://en.wikipedia.org/wiki/Electrostatic%20motor | An electrostatic motor or capacitor motor is a type of electric motor based on the attraction and repulsion of electric charge.
An alternative type of electrostatic motor is the spacecraft electrostatic ion drive thruster where forces and motion are created by electrostatically accelerating ions.
Overview
An electrostatic motor is based on the attraction and repulsion of electric charge. Usually, electrostatic motors are the dual of conventional coil-based motors. They typically require a high voltage power supply, although very small motors employ lower voltages. Conventional electric motors instead employ magnetic attraction and repulsion, and require high current at low voltages. In the 1740s and 1750s, the first electrostatic motors were developed by Andrew Gordon and by Benjamin Franklin. Today the electrostatic motor finds frequent use in micro-mechanical (MEMS) systems where their drive voltages are below 100 volts, and where moving, charged plates are far easier to fabricate than coils and iron cores.
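To illustrate why charge-based motors favour high voltages or very small gaps (a back-of-the-envelope sketch, not from the article), the attractive pressure between two parallel electrodes is P = ½ε₀(V/d)²; the voltages and gap sizes below are assumed values.

```python
# Electrostatic pressure between parallel plates: P = 0.5 * eps0 * (V/d)^2
eps0 = 8.8541878128e-12   # F/m, vacuum permittivity

for V, d in [(100.0, 2e-6), (10_000.0, 1e-3)]:   # (MEMS-scale gap, macroscopic gap)
    E = V / d                                     # electric field, V/m
    pressure = 0.5 * eps0 * E**2                  # N/m^2
    print(f"V = {V:8.0f} V, gap = {d*1e6:8.1f} um -> {pressure:10.1f} N/m^2")
```

The small-gap case shows why MEMS devices obtain useful forces below 100 volts, while macroscopic electrostatic machines need much higher voltages.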
Corona-discharge motor
The corona-discharge motor, also known as corona motor, has been known for centuries.
Nanotube nanomotor
In 2004, researchers at University of California, Berkeley, developed rotational bearings based upon multiwall carbon nanotubes. By attaching a gold plate (with dimensions of the order of 100 nm) to the outer shell of a suspended multiwall carbon nanotube (like nested carbon cylinders), they are able to electrostatically rotate the outer shell relative to the inner core. These bearings are very robust; devices have been oscillated thousands of times with no indication of wear. These nanoelectromechanical systems (NEMS) represent a promising direction in miniaturization and may find their way into commercial applications in the future.
Electrostatic ion drive
Electric motors, in general, produce motion when powered by electric currents. The common type of spacecraft ion thruster uses electrostatic forces to accelerate ions to generate forces to create motion, and thus can be considered as unconventional electric motors.
Gridded electrostatic ion thrusters commonly utilize xenon gas. This gas has no charge and is ionized by bombarding it with energetic electrons. These electrons can be provided from a hot-filament cathode and accelerated in the electrical field of the cathode fall to the anode (Kaufman type ion thruster). Alternatively, the electrons can be accelerated by the oscillating electric field induced by an alternating magnetic field of a coil, which results in a self-sustaining discharge and omits any cathode (radiofrequency ion thruster).
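As a hedged illustration of electrostatic ion acceleration (the grid potentials are assumed values, not from the article), a singly charged xenon ion accelerated through a potential V leaves with speed v = √(2qV/m).

```python
import math

q = 1.602176634e-19                   # C, charge of a singly ionized xenon atom
m_xe = 131.293 * 1.66053906660e-27    # kg, xenon atomic mass

for V_grid in (1_000.0, 2_000.0, 5_000.0):   # accelerating potentials in volts (assumed)
    v = math.sqrt(2.0 * q * V_grid / m_xe)
    print(f"V = {V_grid:6.0f} V -> exhaust speed ~ {v/1000:5.1f} km/s")
```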
Patents
The prime classifications of electrostatic motors by the USPTO are:
Class 310 ELECTRICAL GENERATOR OR MOTOR STRUCTURE
300 NON-DYNAMOELECTRIC
308 Charge accumulating
309 Electrostatic
-- J. Gallegos -- "Static electric Machine"
-- E. Thomson -- "Electrostatic motor"
-- Harold B. Smith -- "Apparatus for transforming electrical energy into mechanical energy"
-- W. G. Cady -- "Electromechanical System"
-- T. T. Brown -- "Electrostatic motor" (1934-09-25)
-- B. Bollee -- "Electrostatic Motor" (ed. Electrostatics from Atmospheric Electricity)
-- B. Bollee -- "Electrostatic Motor"
-- MITSUBISHI CHEM CORP -- "Electrostatic actuator"
-- Robert, et al. -- "Electrostatic Motor"
See also
Electrostatic generator
Nanomotor
Oxford Electric Bell
References
External articles and further reading
de Queiroz, Antonio Carlos M., "An Electrostatic Linear Motor". 24 January 2002.
William J. Beaty, "Simple Electrostatic Motor".
"ElectrostaticMotor", tm.net.
Fast and Flexible Electrostatic Motors at Univ. Tokyo.
Heavy Lifting Electrostatic Motors at Univ. Tokyo.
E. Sarajlic et al., 3-Phase Electrostatic Stepper Micromotors
Electrostatics
Electric motors | Electrostatic motor | [
"Technology",
"Engineering"
] | 829 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
1,581,163 | https://en.wikipedia.org/wiki/Piezoelectric%20motor | A piezoelectric motor or piezo motor is a type of electric motor based on the change in shape of a piezoelectric material when an electric field is applied, as a consequence of the converse piezoelectric effect. An electrical circuit makes acoustic or ultrasonic vibrations in the piezoelectric material, most often lead zirconate titanate and occasionally lithium niobate or other single-crystal materials, which can produce linear or rotary motion depending on their mechanism. Examples of types of piezoelectric motors include inchworm motors, stepper and slip-stick motors as well as ultrasonic motors which can be further categorized into standing wave and travelling wave motors. Piezoelectric motors typically use a cyclic stepping motion, which allows the oscillation of the crystals to produce an arbitrarily large motion, as opposed to most other piezoelectric actuators where the range of motion is limited by the static strain that may be induced in the piezoelectric element.
The growth and forming of piezoelectric crystals is a well-developed industry, yielding very uniform and consistent distortion for a given applied potential difference. This, combined with the minute scale of the distortions, gives the piezoelectric motor the ability to make very fine steps. Manufacturers claim precision to the nanometer scale. High response rate and fast distortion of the crystals also let the steps happen at very high frequencies—upwards of 5 MHz. This provides a maximum linear speed of approximately 800 mm per second, or nearly 2.9 km/h.
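A quick arithmetic sketch of how the quoted figures relate; the per-cycle step size is an assumed value chosen to reproduce the stated speed, since linear speed is simply step size times stepping frequency.

```python
step_size = 160e-9     # m per cycle (assumed, for illustration)
frequency = 5.0e6      # Hz, upper end of the quoted drive rate

speed = step_size * frequency                               # m/s
print(f"{speed*1000:.0f} mm/s = {speed*3.6:.1f} km/h")      # ~800 mm/s, ~2.9 km/h
```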
A unique capability of piezoelectric motors is their ability to operate in strong magnetic fields. This extends their usefulness to applications that cannot use traditional electromagnetic motors—such as inside nuclear magnetic resonance antennas. The maximum operating temperature is limited by the Curie temperature of the used piezoelectric ceramic and can exceed +250 °C.
The main benefits of piezoelectric motors are the high positioning precision, stability of position while unpowered, and the ability to be fabricated at very small sizes or in unusual shapes such as thin rings. Common applications of piezoelectric motors include focusing systems in camera lenses as well as precision motion control in specialised applications such as microscopy.
Resonant motor types
Ultrasonic motor
Ultrasonic motors differ from other piezoelectric motors in several ways, though both typically use some form of piezoelectric material. The most obvious difference is the use of resonance to amplify the vibration of the stator in contact with the rotor in ultrasonic motors.
Two different ways are generally available to control the friction along the stator-rotor contact interface, traveling-wave vibration and standing-wave vibration. Some of the earliest versions of practical motors in the 1970s, by Sashida, for example, used standing-wave vibration in combination with fins placed at an angle to the contact surface to form a motor, albeit one that rotated in a single direction. Later designs by Sashida and researchers at Matsushita, ALPS, Xeryon and Canon made use of traveling-wave vibration to obtain bi-directional motion, and found that this arrangement offered better efficiency and less contact interface wear. An exceptionally high-torque 'hybrid transducer' ultrasonic motor uses circumferentially-poled and axially-poled piezoelectric elements together to combine axial and torsional vibration along the contact interface, representing a driving technique that lies somewhere between the standing and traveling-wave driving methods.
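A small numerical sketch (not from the article) of why traveling-wave driving differs from standing-wave driving: two standing-wave modes excited 90 degrees out of phase in both space and time superpose into a single traveling wave, which is what carries the rotor continuously in one direction.

```python
import numpy as np

k, omega, A = 2*np.pi, 2*np.pi*40e3, 1.0   # wavenumber, 40 kHz drive, amplitude (illustrative)
x = np.linspace(0.0, 1.0, 5)               # positions along the stator (normalized)

def displacement(t):
    standing_1 = A * np.cos(k*x) * np.cos(omega*t)   # first standing mode
    standing_2 = A * np.sin(k*x) * np.sin(omega*t)   # second mode, 90 deg shifted in space and time
    return standing_1 + standing_2                   # equals A*cos(k*x - omega*t), a traveling wave

for t in (0.0, 0.25/40e3, 0.5/40e3):
    u = displacement(t)
    print(np.allclose(u, A*np.cos(k*x - omega*t)), np.round(u, 3))
```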
Non-resonant motor types
Inchworm motor
The inchworm motor uses piezoelectric ceramics to push a stator using a walking-type motion. These piezoelectric motors use three groups of crystals—two 'locking', and one 'motive' that permanently connects to either the motor's casing or stator (not both). The motive group, sandwiched between the other two, provides the motion.
The non-powered behaviour of this piezoelectric motor is one of two options: 'normally locked' or 'normally free'. A normally free type allows free movement when unpowered but can still be locked by applying a voltage.
Inchworm motors can achieve nanometre-scale positioning by varying the voltage applied to the motive crystal while one set of locking crystals is engaged.
Stepping actions
The actuation process of the inchworm motor is a multistep cyclical process (a minimal code sketch of the cycle follows the list):
First, one group of 'locking' crystals is activated to lock one side and unlock other side of the 'sandwich' of piezo crystals.
Next, the 'motive' crystal group is triggered and held. The expansion of this group moves the unlocked 'locking' group along the motor path. This is the only stage where the motor moves.
Then the 'locking' group triggered in stage one releases (in 'normally locking' motors, in the other it triggers).
Then the 'motive' group releases, retracting the 'trailing locking' group.
Finally, both 'locking' groups return to their default states.
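A minimal state-machine sketch of this cycle; the class, step size, and boolean locking states are hypothetical simplifications of the analogue drive electronics.

```python
class InchwormMotor:
    """Toy model of the five-stage inchworm cycle described above ('normally free' type)."""

    def __init__(self, step_nm=20.0):
        self.step_nm = step_nm      # travel produced by the motive group per cycle (assumed)
        self.position_nm = 0.0
        self.lock_rear = False      # locking group on the trailing side
        self.lock_front = False     # locking group on the leading side

    def cycle(self):
        self.lock_rear = True                    # stage 1: lock one side, leave the other free
        self.position_nm += self.step_nm         # stage 2: motive group expands, free side advances
        self.lock_front = True                   # stage 3: the group from stage 1 changes state ...
        self.lock_rear = False                   #          ... the leading side now holds position
        # stage 4: motive group releases, retracting the trailing locking group
        # stage 5: both locking groups return to their default states
        self.lock_front = False
        self.lock_rear = False

motor = InchwormMotor()
for _ in range(1000):
    motor.cycle()
print(f"position after 1000 cycles: {motor.position_nm / 1000:.0f} um")
```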
Stepper or walk-drive motor
Not to be confused with the similarly named electromagnetic stepper motor, these motors are similar to the inchworm motor; however, the piezoelectric elements can be bimorph actuators which bend to feed the slider rather than using a separate expanding and contracting element.
Slip-stick motor
The mechanism of slip-stick motors relies on inertia in combination with the difference between static and dynamic friction. The stepping action consists of a slow extension phase, during which static friction is not overcome, followed by a rapid contraction phase in which static friction is overcome and the point of contact between the motor and the moving part changes.
Direct drive motors
The direct drive piezoelectric motor creates movement through continuous ultrasonic vibration. Its control circuit applies a two-channel sinusoidal or square wave to the piezoelectric elements that matches the bending resonant frequency of the threaded tube—typically an ultrasonic frequency of 40 kHz to 200 kHz. This creates orbital motion that drives the screw.
A second drive type, the squiggle motor, uses piezoelectric elements bonded orthogonally to a nut. Their ultrasonic vibrations rotate a central lead screw.
Single action
Very simple single-action stepping motors can be made with piezoelectric crystals. For example, a series of angled piezoelectric transducers can be arranged around a hard and rigid rotor-spindle coated with a thin layer of a softer material (such as polyurethane rubber). When the control circuit triggers one group of transducers, they push the rotor one step. This design cannot make steps as small or precise as more complex designs, but can reach higher speeds and is cheaper to manufacture.
Patents
The first U.S. patent to disclose a vibrationally-driven motor may be "Method and Apparatus for Delivering Vibratory Energy" (U.S. Pat. No. 3,184,842, Maropis, 1965). The Maropis patent describes a "vibratory apparatus wherein longitudinal vibrations in a resonant coupling element are converted to torsional vibrations in a toroid type resonant terminal element." The first practical piezomotors were designed and produced by V. Lavrinenko, starting in 1964, at the Piezoelectronic Laboratory of the Kyiv Polytechnic Institute, USSR. Other important patents in the early development of this technology include:
"Electrical motor", V. Lavrinenko, M. Nekrasov, Patent USSR # 217509, priority May 10, 1965.
"Piezoelectric motor structures" (U.S. Pat. No. 4,019,073, Vishnevsky, et al., 1977)
"Piezoelectrically driven torsional vibration motor" (U.S. Pat. No. 4,210,837, Vasiliev, et al., 1980)
See also
Ultrasonic motor
Ultrasonic Motor Drive as used in the Canon EF Mount
Ultrasonic homogenizer
References
Electric motors | Piezoelectric motor | [
"Technology",
"Engineering"
] | 1,653 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
1,581,694 | https://en.wikipedia.org/wiki/Methyl%20jasmonate | Methyl jasmonate (abbreviated MeJA) is a volatile organic compound used in plant defense and many diverse developmental pathways such as seed germination, root growth, flowering, fruit ripening, and senescence. Methyl jasmonate is derived from jasmonic acid and the reaction is catalyzed by S-adenosyl-L-methionine:jasmonic acid carboxyl methyltransferase.
Description
Plants produce jasmonic acid and methyl jasmonate in response to many biotic and abiotic stresses (in particular, herbivory and wounding), which build up in the damaged parts of the plant. The methyl jasmonate can be used to signal the original plant's defense systems or it can be spread by physical contact or through the air to produce a defensive reaction in unharmed plants. The unharmed plants absorb the airborne MeJA through either the stomata or diffusion through the leaf cell cytoplasm. An herbivorous attack on a plant causes it to produce MeJA both for internal defense and for a signaling compound to other plants.
Defense chemicals
MeJA can induce the plant to produce multiple different types of defense chemicals such as phytoalexins (antimicrobial), nicotine or protease inhibitors. The protease inhibitors interfere with the insect digestive process and discourage the insect from eating the plant again.
MeJA has been used to stimulate traumatic resin duct production in Norway spruce trees. This can be used as a defense against many insect attackers as a type of vaccine.
Experiments
External application of methyl jasmonate has been shown to induce plant defensive responses against both biotic and abiotic stressors. When treatments of methyl jasmonate were applied to Picea abies (Norway spruce), the accumulation of monoterpene and sesquiterpene compounds doubled in the spruce needle tissues, a response that normally is only triggered when the tissue is damaged.
In an experiment testing the effect of methyl jasmonate treatments on drought tolerance, strawberry plants were shown to alter their metabolism and were better able to withstand water stress and drought conditions by lowering the amount of transpiration, and membrane-lipid peroxidation.
External application of methyl jasmonate has also shown a propensity for inducing an increased resistance to insect herbivory in some agricultural crops, such as brassicas and tobacco. Plants treated with methyl jasmonate and exposed to insect herbivores had significantly lower levels of herbivory, and the insect herbivores had slower development, when compared to untreated plants.
In recent experiments, methyl jasmonate has been shown to be effective at preventing bacterial growth in plants when applied in a spray to the leaves. The antibacterial effect is thought to be because of methyl jasmonate inducing resistance.
MeJA is also a plant hormone involved in tendril coiling, flowering, and seed and fruit maturation. An increase of the hormone affects flowering time, flower morphology, and the number of open flowers. MeJA induces ethylene-forming enzyme activity, which increases the amount of ethylene to the level necessary for fruit maturation.
Increased amounts of methyl jasmonate in plant roots have shown to inhibit their growth. It is predicted that the higher amounts of MeJA activate previously unexpressed genes within the roots to cause the growth inhibition.
Cancer cells
Methyl jasmonate induces cytochrome C release in the mitochondria of cancer cells, leading to cell death, but does not harm normal cells. Specifically, it can cause cell death in B-cell chronic lymphocytic leukemia cells taken from human patients with this disease and then treated in tissue culture with methyl jasmonate. Treatment of isolated normal human blood lymphocytes did not result in cell death.
See also
Jasmonate
Methyl dihydrojasmonate
References
External links
General information about methyl jasmonate
Jasmonate: pharmaceutical composition for treatment of cancer. US Patent Issued on October 22, 2002
Plant stress hormones suppress the proliferation and induce apoptosis in human cancer cells, Leukemia, Nature, April 2002, Volume 16, Number 4, Pages 608–616
Jasmonates induce nonapoptotic death in high-resistance mutant p53-expressing B-lymphoma cells, British Journal of Pharmacology (2005) 146, 800–808. ; published online 19 September 2005
Acetate esters
Plant hormones
Ketones
Alkene derivatives
Methyl esters | Methyl jasmonate | [
"Chemistry"
] | 924 | [
"Ketones",
"Functional groups"
] |