id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
11,345,456 | https://en.wikipedia.org/wiki/Prognostic%20equation | Prognostic equation - in the context of physical (and especially geophysical) simulation, a prognostic equation predicts the value of variables for some time in the future on the basis of the values at the current or previous times.
For instance, the well-known Navier-Stokes equations that describe the time evolution of a fluid are prognostic equations that predict the future distribution of velocities in that fluid on the basis of current fields such as the pressure gradient.
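As a minimal illustration of the idea, the sketch below steps a simple prognostic equation forward in time with the forward Euler method; the equation (Newtonian relaxation of temperature, dT/dt = −k(T − T_env)) and all names and values are illustrative assumptions, not anything from the article:

```python
# Forward-Euler integration of a simple prognostic equation.
# dT/dt = -k * (T - T_env): a tendency computed from the current state
# predicts the value at a future time. All values are illustrative.

def step_forward(T, k=0.1, T_env=280.0, dt=1.0):
    """One forward-Euler step of dT/dt = -k * (T - T_env)."""
    dT_dt = -k * (T - T_env)      # tendency from the current state
    return T + dT_dt * dt         # predicted value one time step ahead

T = 300.0                         # initial condition (kelvin)
for _ in range(10):
    T = step_forward(T)
print(round(T, 2))                # predicted temperature after 10 steps
```

By contrast, a diagnostic equation would relate variables at the same instant, with no time derivative to step forward.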
See also
diagnostic equation
References
James R. Holton (2004). An Introduction to Dynamic Meteorology, Academic Press, International Geophysics Series Volume 88, Fourth Edition, 535 p.
External links
http://glossary.ametsoc.org/wiki/Prognostic_equation
Atmospheric dynamics | Prognostic equation | [
"Chemistry"
] | 165 | [
"Atmospheric dynamics",
"Fluid dynamics"
] |
11,350,161 | https://en.wikipedia.org/wiki/Human-rating%20certification | Human-rating certification, also known as man-rating or crew-rating, is the certification of a spacecraft or launch vehicle as capable of safely transporting humans. There is no one particular standard for human-rating a spacecraft or launch vehicle, and the various entities that launch or plan to launch such spacecraft specify requirements for their particular systems to be human-rated.
NASA
One entity that applies human rating is the US government civilian space agency, NASA. NASA's human-rating requires not just that a system be designed to be tolerant of failure and to protect the crew even if an unrecoverable failure occurs, but also that astronauts aboard a human-rated spacecraft have some control over it. This set of technical requirements and the associated certification process for crewed space systems are in addition to the standards and requirements for all of NASA's space flight programs.
The development of the Space Shuttle and the International Space Station pre-dated later NASA human-rating requirements. After the Challenger and Columbia accidents, the criteria used by NASA for human-rating spacecraft were made more stringent.
Commercial Crew Program (CCP)
The NASA CCP human-rating standards require that the probability of a loss of crew on ascent not exceed 1 in 500, and that the probability of a loss of crew on descent not exceed 1 in 500. The overall mission loss risk, which includes vehicle risk from micrometeoroids and orbital debris while in orbit for up to 210 days, is required to be no more than 1 in 270. Maximum sustained acceleration is limited to 3 g.
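Because the section quotes both per-phase and overall risk caps, a short arithmetic sketch may help show how phase risks combine under an independence assumption; the in-orbit figure below is a hypothetical placeholder, not a NASA number:

```python
# Combining per-phase loss-of-crew probabilities into an overall mission
# risk, assuming the phases are independent. The 1-in-500 figures are the
# caps quoted above; the in-orbit risk is a made-up illustrative value.

p_ascent = 1 / 500
p_orbit = 1 / 1000     # hypothetical in-orbit (MMOD) risk, for illustration
p_descent = 1 / 500

p_loss = 1 - (1 - p_ascent) * (1 - p_orbit) * (1 - p_descent)
print(f"overall loss probability: {p_loss:.5f} (about 1 in {1 / p_loss:.0f})")
```

Note that the two per-phase caps alone combine to roughly 1/500 + 1/500 = 1/250, a higher risk than the 1-in-270 overall cap, so a compliant vehicle must in practice perform better than the per-phase limits.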
The United Launch Alliance (ULA) published a paper submitted to AIAA detailing the modifications to its Delta IV and Atlas V launch vehicles that would be needed to conform to NASA Standard 8705.2B. ULA has since been awarded $6.7 million under NASA's Commercial Crew Development (CCDev) program for development of an Emergency Detection System, one of the final pieces that would be needed to make these launchers suitable for human spaceflight.
SpaceX is using Dragon 2, launched on a Falcon 9 Block 5 rocket, to deliver crew to the ISS. Dragon 2 made its first uncrewed test flight in March 2019 and has been conducting crewed flights since Demo-2 in May 2020.
Boeing's Starliner spacecraft is also part of the Commercial Crew Program; it made its first crewed flight, the Boeing Crew Flight Test (CFT), in June 2024.
CMSA
The China Manned Space Agency (CMSA) operates and oversees crewed spaceflight activities launched from China, including the Shenzhou spacecraft and Tiangong space station.
Roscosmos
Roscosmos, a Russian state corporation, conducts and oversees human spaceflights launched from Russia. This includes Soyuz spacecraft and the Russian Orbital Segment of the International Space Station.
ISRO
The space agency of India, ISRO, oversees planned human spaceflights launched from India.
On 13 February 2024 the CE-20 engine, after a series of ground qualification tests, was certified for crewed Gaganyaan spaceflight missions. The CE-20 will power the upper stage of the human-rated version of the LVM3 (formerly known as GSLV Mk III) launch vehicle.
Private spaceflight companies
Each private spaceflight system builder typically sets up their own specific criteria to be met before carrying humans on a space transport system.
See also
FAA
List of human spaceflight programs
References
Astronautics
Aerospace engineering
Human spaceflight
Spaceflight concepts
Space policy
Transport safety | Human-rating certification | [
"Physics",
"Engineering"
] | 694 | [
"Physical systems",
"Transport",
"Transport safety",
"Aerospace engineering"
] |
11,351,776 | https://en.wikipedia.org/wiki/Volunteer%27s%20dilemma | The volunteer's dilemma is a game that models a situation in which each player can either make a small sacrifice that benefits everybody, or instead wait in hope of benefiting from someone else's sacrifice.
One example is a scenario in which the electricity supply has failed for an entire neighborhood. All inhabitants know that the electricity company will fix the problem as long as at least one person calls to notify them, at some cost. If no one volunteers, the worst possible outcome is obtained for all participants. If any one person elects to volunteer, the rest benefit by not doing so.
A public good is only produced if at least one person volunteers to pay an arbitrary cost. In this game, bystanders decide independently on whether to sacrifice themselves for the benefit of the group. Because the volunteer gains no additional benefit relative to those who abstain, there is a greater incentive for freeriding than for sacrificing oneself for the group. If no one volunteers, everyone loses. The social phenomena of the bystander effect and diffusion of responsibility heavily relate to the volunteer's dilemma.
Payoff matrix
The payoff matrix for the game is shown below. In a standard formulation, each player receives a benefit b if at least one person volunteers, and each volunteer bears a cost c, with 0 < c < b:

| | At least one other player volunteers | No other player volunteers |
|---|---|---|
| Volunteer | b − c | b − c |
| Don't volunteer | b | 0 |
When the volunteer's dilemma takes place between only two players, the game takes on the character of the game of "chicken". As the payoff matrix shows, there is no dominant strategy in the volunteer's dilemma. In a mixed-strategy Nash equilibrium, an increase in the number of players N decreases the likelihood that at least one person volunteers, which is consistent with the bystander effect.
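A short sketch can make the bystander-effect claim concrete. Assuming the standard payoffs from the matrix above (benefit b to every player if anyone volunteers, cost c to each volunteer, 0 < c < b), the indifference condition b − c = b(1 − (1 − p)^(N−1)) yields each player's equilibrium volunteering probability:

```python
# Symmetric mixed-strategy Nash equilibrium of the volunteer's dilemma,
# under the standard payoffs assumed above. b and c are illustrative.

def equilibrium(n, b=1.0, c=0.5):
    # Indifference: b - c = b * (1 - (1-p)**(n-1))  =>  (1-p)**(n-1) = c/b
    p = 1 - (c / b) ** (1 / (n - 1))        # each player's volunteering probability
    p_any = 1 - (c / b) ** (n / (n - 1))    # P(at least one player volunteers)
    return p, p_any

for n in (2, 5, 20, 100):
    p, p_any = equilibrium(n)
    print(f"N={n:3d}  p={p:.3f}  P(someone volunteers)={p_any:.3f}")
```

As N grows, the probability that anyone volunteers falls toward 1 − c/b, which is the decline the paragraph above attributes to the bystander effect.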
Examples in real life
The murder of Kitty Genovese
The story of Kitty Genovese is often cited as an example of the volunteer's dilemma. Genovese was stabbed to death outside her apartment building in Queens, New York, in 1964. According to a highly influential New York Times account, dozens of people witnessed the assault but did not get involved because they thought others would contact the police anyway and did not want to incur the personal cost of getting involved. Subsequent investigations have shown the original account to have been unfounded, and although it inspired sound scientific research, its use as a simplistic parable in psychology textbooks has been criticized.
The meerkat
The meerkat exhibits the volunteer's dilemma in nature. One or more meerkats act as sentries while the others forage for food. If a predator approaches, the sentry meerkat lets out a warning call so the others can burrow to safety. However, the altruism of this meerkat puts it at risk of being discovered by the predator.
Quantum volunteer's dilemma
One significant volunteer's dilemma variant was introduced by Weesie and Franzen in 1998 and involves cost-sharing among volunteers. In this variant, if there is no volunteer, all players receive a payoff of 0. If there is at least one volunteer, the reward of b units is distributed to all players, while the total cost of c units incurred by volunteering is divided equally among the volunteers. It is shown that for the classical mixed-strategy setting there is a unique symmetric Nash equilibrium, obtained by setting the probability of volunteering for each player to the unique root in the open interval (0, 1) of a degree-n polynomial; from the indifference condition between volunteering and abstaining, this polynomial can be written as c(1 − (1 − p)^n) − bnp(1 − p)^(n−1) = 0.
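A minimal numeric sketch, assuming the indifference polynomial as reconstructed above, locates this root by bisection; the values b = 2 and c = 1 match the parameters of the quantum variant discussed below:

```python
# Bisection search for the unique root in (0, 1) of
# f(p) = c*(1 - (1-p)**n) - b*n*p*(1-p)**(n-1),
# the indifference condition of the cost-sharing variant (reconstruction
# above). f < 0 near p = 0 and f = c > 0 near p = 1, so a root exists.

def f(p, n, b=2.0, c=1.0):
    return c * (1 - (1 - p) ** n) - b * n * p * (1 - p) ** (n - 1)

def volunteering_probability(n, b=2.0, c=1.0, tol=1e-12):
    lo, hi = 1e-9, 1 - 1e-9              # bracket strictly inside (0, 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo, n, b, c) * f(mid, n, b, c) <= 0:
            hi = mid                     # root lies in [lo, mid]
        else:
            lo = mid                     # root lies in [mid, hi]
    return (lo + hi) / 2

print(volunteering_probability(5))       # equilibrium probability for n = 5
```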
In 2024, a quantum variant of the classical volunteer's dilemma was introduced with b = 2 and c = 1. This generalizes the classical setting by allowing players to utilize quantum strategies, achieved by employing the Eisert–Wilkens–Lewenstein quantization framework. In this setting, the players receive an entangled n-qubit state, with each player controlling one qubit; the decision of each player can be viewed as determining two angles. Symmetric Nash equilibria at which every player volunteers are exhibited, and these Nash equilibria are Pareto optimal. It is shown that the payoff at Nash equilibria in the quantum setting is higher than the payoff of Nash equilibria in the classical setting.
See also
Bystander effect
Civil courage
Death of Cristina and Violetta Djeordsevic (Italy)
Death of Wang Yue (China)
Mamihlapinatapai
Prisoner's dilemma
Social loafing
Tragedy of the Commons
References
Non-cooperative games
Dilemmas | Volunteer's dilemma | [
"Mathematics"
] | 893 | [
"Game theory",
"Non-cooperative games"
] |
1,110,017 | https://en.wikipedia.org/wiki/Dump%20truck | A dump truck, known also as a dumping truck, dump trailer, dumper trailer, dump lorry or dumper lorry or a dumper for short, is used for transporting materials (such as dirt, gravel, or demolition waste) for construction as well as coal. A typical dump truck is equipped with an open-box bed, which is hinged at the rear and equipped with hydraulic rams to lift the front, allowing the material in the bed to be deposited ("dumped") on the ground behind the truck at the site of delivery. In the UK, Australia, South Africa and India the term applies to off-road construction plants only and the road vehicle is known as a tip lorry, tipper lorry (UK, India), tipper truck, tip truck, tip trailer or tipper trailer or simply a tipper (Australia, New Zealand, South Africa).
History
The dump truck is thought to have been first conceived in the farms of late 19th century western Europe. Thornycroft developed a steam dust-cart in 1896 with a tipper mechanism. The first motorized dump trucks in the United States were developed by small equipment companies such as The Fruehauf Trailer Corporation, Galion Buggy Co. and Lauth-Juergens among many others around 1910. Hydraulic dump beds were introduced by Wood Hoist Co. shortly after. Such companies flourished during World War I due to massive wartime demand. August Fruehauf had obtained military contracts for his semi-trailer, invented in 1914 and later created the partner vehicle, the semi-truck for use in World War I. After the war, Fruehauf introduced hydraulics in his trailers. They offered hydraulic lift gates, hydraulic winches and a dump trailer for sales in the early 1920s. Fruehauf became the premier supplier of dump trailers and their famed "bathtub dump" was considered to be the best by heavy haulers, road and mining construction firms.
Companies like Galion Buggy Co. continued to grow after the war by manufacturing a number of express bodies and some smaller dump bodies that could be easily installed on either stock or converted (heavy-duty suspension and drivetrain) Model T chassis prior to 1920. Galion and Wood Mfg. Co. built all of the dump bodies offered by Ford on their heavy-duty AA and BB chassis during the 1930s. Galion (now Galion Godwin Truck Body Co.) is the oldest known truck body manufacturer still in operation today.
The first known Canadian dump truck was developed in Saint John, New Brunswick, when Robert T. Mawhinney attached a dump box to a flatbed truck in 1920. The lifting device was a winch attached to a cable that fed over sheave (pulley) mounted on a mast behind the cab. The cable was connected to the lower front end of the wooden dump box which was attached by a pivot at the back of the truck frame. The operator turned a crank to raise and lower the box.
From the 1930s Euclid, International-Harvester and Mack contributed to ongoing development. Mack modified its existing trucks with varying success. In 1934 Euclid became the first manufacturer in the world to successfully produce a dedicated off-highway truck.
Types
Today, virtually all dump trucks operate by hydraulics and they come in a variety of configurations each designed to accomplish a specific task in the construction material supply chain.
Standard dump truck
A standard dump truck is a truck chassis with a dump body mounted to the frame. The bed is raised by a vertical hydraulic ram mounted under the front of the body (known as a front post hoist configuration), or a horizontal hydraulic ram and lever arrangement between the frame rails (known as an underbody hoist configuration), and the back of the bed is hinged at the back of the truck. The tailgate (sometimes referred to as an end gate) can be configured to swing up on top hinges (and sometimes also to fold down on lower hinges) or it can be configured in the "High Lift Tailgate" format wherein pneumatic or hydraulic rams lift the gate open and up above the dump body. Some bodies, typically for hauling grain, have swing-out doors for entering the box and a metering gate/chute in the center for a more controlled dumping.
In the United States most standard dump trucks have one front steering axle and one (4x2 4-wheeler) or two (6x4 6-wheeler) rear axles which typically have dual wheels on each side. Tandem rear axles are almost always powered, front steering axles are also sometimes powered (4x4, 6x6). Unpowered axles are sometimes used to support extra weight. Most unpowered rear axles can be raised off the ground to minimize wear when the truck is empty or lightly loaded, and are commonly called "lift axles".
European Union heavy trucks often have two steering axles. Dump truck configurations have two, three, or four axles. The four-axle eight-wheeler has two steering axles at the front and two powered axles at the rear, and is subject to a gross weight limit in most EU countries. The largest of the standard European dump trucks is commonly called a "centipede" and has seven axles: the front axle is the steering axle, the rear two axles are powered, and the remaining four are lift axles.
The shorter wheelbase of a standard dump truck often makes it more maneuverable than the higher capacity semi-trailer dump trucks.
Semi trailer end dump truck
A semi end dump is a tractor-trailer combination wherein the trailer itself contains the hydraulic hoist. In the US a typical semi end dump has a 3-axle tractor pulling a 2-axle trailer with dual tires; in the EU, trailers often have 3 axles and single tires. The key advantage of a semi end dump is a large payload. A key disadvantage is that they are very unstable when raised in the dumping position, limiting their use in many applications where the dumping location is uneven or off level. Some end dumps use an articulated arm (known as a stabilizer) below the box, between the chassis rails, to stabilize the load in the raised position.
Frame and Frameless end dump truck
Depending on their structure, semi trailer end dump trucks can also be divided into frame and frameless trailers.
The main difference between them is structural. A frame dump trailer has a large beam running along the bottom of the trailer to support it. A frameless dump trailer has no frame under the trailer; instead, ribs around the body provide support and the trailer's top rail carries load in the manner of a suspension bridge.
The difference in structure also brings a difference in weight: frame dump trailers are heavier. For the same length, a frame dump trailer weighs around 5 tons more than a frameless one.
Transfer dump truck
A transfer dump truck is a standard dump truck pulling a separate trailer with a movable cargo container, which can also be loaded with construction aggregate, gravel, sand, asphalt, clinker, snow, wood chips, triple mix, etc.
The second aggregate container on the trailer ("B" box), is powered by an electric motor, a pneumatic motor or a hydraulic line. It rolls on small wheels, riding on rails from the trailer's frame into the empty main dump container ("A" box). This maximizes payload capacity without sacrificing the maneuverability of the standard dump truck. Transfer dump trucks are typically seen in the western United States due to the peculiar weight restrictions on highways there.
Another configuration is called a triple transfer train, consisting of "B" and "C" boxes. These are common on Nevada and Utah highways, but not in California. Depending on the axle arrangement, a triple transfer can haul heavier loads under a special permit in certain American states. A triple transfer costs a contractor about $105 an hour, while an A/B configuration costs about $85 per hour.
Transfer dump trucks typically haul a large tonnage of aggregate per load, and each truck is generally capable of 3–5 loads per day.
Truck and pup
A truck and pup is very similar to a transfer dump. It consists of a standard dump truck pulling a dump trailer. The pup trailer, unlike the transfer, has its own hydraulic ram and is capable of self-unloading.
Superdump truck
A super dump is a straight dump truck equipped with a trailing axle: a liftable, load-bearing axle rated to carry a substantial share of the load. Trailing behind the rear tandem, the trailing axle stretches the outer "bridge" measurement (the distance between the first and last axles) to the maximum overall length allowed. This increases the gross weight allowed under the federal bridge formula, which sets standards for truck size and weight (see the sketch below). Depending on the vehicle length and axle configuration, Superdumps can be rated for a considerably higher gross vehicle weight and payload than a conventional dump truck. When the truck is empty or ready to offload, the trailing axle toggles up off the road surface on two hydraulic arms to clear the rear of the vehicle. Truck owners call their trailing-axle-equipped trucks Superdumps because they far exceed the payload, productivity, and return on investment of a conventional dump truck. The Superdump and trailing axle concept were developed by Strong Industries of Houston, Texas.
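The federal bridge formula mentioned above has the standard form W = 500(LN/(N − 1) + 12N + 36), where W is the allowable gross weight in pounds on an axle group, L is the spacing in feet between the group's outer axles, and N is the number of axles. The sketch below applies it with illustrative axle counts and spacings only, to show why stretching the outer bridge raises the limit:

```python
# US federal bridge gross weight formula: W = 500 * (L*N/(N-1) + 12*N + 36)
# W: allowable gross weight (lb), L: outer-axle spacing (ft), N: axle count.
# The example configurations below are illustrative, not real truck specs.

def bridge_formula_limit(L_feet, n_axles):
    return 500 * (L_feet * n_axles / (n_axles - 1) + 12 * n_axles + 36)

print(bridge_formula_limit(25, 4))   # compact 4-axle group
print(bridge_formula_limit(34, 5))   # longer 5-axle group with trailing axle
```

Adding an axle and stretching the outer bridge from 25 ft to 34 ft raises the formula limit from about 58,700 lb to about 69,250 lb in this example.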
Semi trailer bottom dump truck
A semi bottom dump, bottom hopper, or belly dump is a (commonly) 3-axle tractor pulling a 2-axle trailer with a clam shell type dump gate in the belly of the trailer. The key advantage of a semi bottom dump is its ability to lay material in a windrow, a linear heap. In addition, a semi bottom dump is maneuverable in reverse, unlike the double and triple trailer configurations described below. These trailers may be either of the windrow type or of the cross spread type, with the gate opening front to rear instead of left and right. The cross spread type gate spreads cereal grains quite evenly across the width of the trailer; by comparison, the windrow-type gate leaves a pile in the middle. The cross spread type gate, on the other hand, tends to jam and may not work well with coarse materials.
Double and triple trailer bottom dump truck
Double and triple bottom dumps consist of a 2-axle tractor pulling one single-axle semi-trailer and an additional full trailer (or two full trailers in the case of triples). These dump trucks allow the driver to lay material in windrows without leaving the cab or stopping the truck. The main disadvantage is the difficulty in backing double and triple units.
The specific type of dump truck used in any country is likely to be closely keyed to the weight and axle limitations of that jurisdiction. Rock, dirt, and other materials commonly hauled in trucks of this type are quite heavy, and almost any style of truck can be easily overloaded. Because of that, this type of truck is frequently configured to take advantage of local weight limitations to maximize the cargo. For example, within the United States, the federal maximum gross weight limit is 80,000 pounds throughout the country, except for specific bridges with lower limits. Individual states, in some instances, are allowed to authorize heavier trucks. Most states that do so require that the trucks be very long, to spread the weight over more distance. It is in this context that double and triple bottoms are found within the United States.
Bumper Pull Dump Trailer
Bumper pull personal and commercial dump trailers come in a variety of sizes, from smaller 6x10 models rated at 7,000 lb GVWR to larger 7x16 high-side models rated at 14,000 lb GVWR.
Dump trailers come with a range of options and features, such as tarp kits, high-side options, dump/spread/swing gates, remote controls, scissor, telescopic, dual- or single-cylinder lifts, and metal locking toolboxes. They are used in a variety of applications, including roofing, rock and mulch delivery, general contracting, skid steer grading, trash-out, and recycling.
Side dump truck
A side dump truck (SDT) consists of a 3-axle tractor pulling a 2-axle semi-trailer. It has hydraulic rams that tilt the dump body onto its side, spilling the material to either the left or right side of the trailer. The key advantages of the side dump are that it allows rapid unloading and can carry more weight in the western United States. In addition, it is almost immune to upset (tipping over) while dumping, unlike the semi end dumps which are very prone to tipping over. It is, however, highly likely that a side dump trailer will tip over if dumping is stopped prematurely. Also, when dumping loose materials or cobble sized stone, the side dump can become stuck if the pile becomes wide enough to cover too much of the trailer's wheels. Trailers that dump at the appropriate angle (50° for example) avoid the problem of the dumped load fouling the path of the trailer wheels by dumping their loads further to the side of the truck, in some cases leaving sufficient clearance to walk between the dumped load and the trailer.
Winter service vehicles
Many winter service vehicles are based on dump trucks, to allow the placement of ballast to weigh the truck down or to hold sodium or calcium chloride salts for spreading on snow and ice-covered surfaces. Plowing is severe service and needs heavy-duty trucks.
Roll-off trucks
A roll-off has a hoist and subframe but no body; it carries removable containers. The container is loaded on the ground, then pulled onto the back of the truck with a winch and cable. The truck goes to the dump site; after the container has been dumped, the empty container is taken away and placed to be loaded or stored. To drop a container, the hoist is raised and the container slides down the subframe so that its rear is on the ground. The container has rollers on the rear and can be moved forward or back until its front is lowered onto the ground. The containers are usually open-topped boxes used for rubble and building debris, but rubbish compactor containers are also carried. A newer hook-lift system ("roller container" in the UK) does the same job, but lifts, lowers, and dumps the container with a boom arrangement instead of a cable and hoist.
Off-highway dump trucks
Off-highway dump trucks are heavy construction equipment and share little resemblance to highway dump trucks. Bigger off-highway dump trucks are used strictly off-road for mining and heavy dirt hauling jobs. There are two primary forms: rigid frame and articulating frame.
The term "dump" truck is not generally used by the mining industry, or by the manufacturers that build these machines. The more appropriate U.S. term for this strictly off-road vehicle is "haul truck" and the equivalent European term is "dumper".
Haul truck
Haul trucks are used in large surface mines and quarries. They have a rigid frame and conventional steering, with drive at the rear wheels. As of late 2013, the largest production haul truck ever built is the 450-metric-ton BelAZ 75710, followed by the Liebherr T 282B, the Bucyrus MT6300AC, and the Caterpillar 797F, each with payload capacities of up to about 400 short tons. The previous record holder was the Canadian-built Terex 33-19 "Titan", which held the record for over 25 years. Most large haul trucks employ diesel-electric powertrains, using the diesel engine to drive an AC alternator or DC generator that sends electric power to electric motors at each rear wheel. The Caterpillar 797 is unique for its size, as it employs a diesel engine to power a mechanical powertrain, typical of most road-going vehicles and intermediate-size haul trucks.
Other major manufacturers of haul trucks include SANY, XCMG, Hitachi, Komatsu, DAC, Terex, and BelAZ.
Articulated hauler
An articulated dumper is an all-wheel-drive, off-road dump truck. It has a hinge between the cab and the dump box but is distinct from a semi-trailer truck in that the power unit is a permanent fixture, not a separable vehicle. Steering is accomplished via hydraulic cylinders that pivot the entire tractor in relation to the trailer, rather than rack and pinion steering on the front axle as in a conventional dump truck. By this way of steering, the trailer's wheels follow the same path as the front wheels. Together with all-wheel drive and low center of gravity, it is highly adaptable to rough terrain. Major manufacturers include Volvo CE, Terex, John Deere, and Caterpillar.
U-shaped dump truck
U-shaped dump trucks, also known as tub-body trucks, are used to transport construction waste. The body is bent directly from high-strength, highly wear-resistant special steel plate, making it resistant to impact, alternating stress, and corrosion. Claimed advantages include:
1. Cleaner unloading. The U-shaped cargo box has no dead corners, so material is less likely to stick to the box and unloading is cleaner.
2. Light weight. The U-shaped cargo box reduces its own weight through structural optimization. The most common U-shaped bodies now use high-strength plate; while maintaining body strength, plate thickness is reduced by about 20% and the vehicle's own weight by about 1 ton, which improves the usable share of the load rating.
3. Strong carrying capacity. High-strength steel plate with a high yield strength gives better impact and fatigue resistance. For ore transportation, it reduces the damage the ore does to the container.
4. Low center of gravity. The U-shaped structure has a lower center of gravity, which makes the ride more stable, especially when cornering, and helps avoid spilling cargo.
5. Reduced tire wear. The U-shaped cargo box keeps the cargo centered, so the tires on both sides are loaded more evenly, which extends tire life.
Dangers
Collisions
Dump trucks are normally built for some amount of off-road or construction site driving; as the driver is protected by the chassis and the height of the driver's seat, bumpers are either placed high or omitted for added ground clearance. The disadvantage is that in a collision with a standard car, the entire engine compartment or luggage compartment of the car goes under the truck, so the car's occupants can be more severely injured than would be common in a collision with another car. Several countries therefore require new trucks to carry bumpers (underrun protection) mounted low enough to engage a car's structure and protect other drivers. There are also rules about how far the load or body of the truck may extend beyond the rear bumper, to prevent cars that rear-end the truck from going under it.
Tipping
Another safety consideration is the leveling of the truck before unloading. If the truck is not parked on relatively horizontal ground, the sudden change of weight and balance due to lifting of the body and dumping of the material can cause the truck to slide, or even to tip over. The live bottom trailer is an approach to eliminate this danger.
Back-up accidents
Because of their size and the difficulty of maintaining visual contact with on-foot workers, dump trucks can be a threat, especially when backing up. Mirrors and back-up alarms provide some level of protection, and having a spotter working with the driver also decreases back-up injuries and fatalities.
Manufacturers
Ashok Leyland
Asia MotorWorks
Astra Veicoli Industriali
BelAZ
BEML
Case CE
Caterpillar Inc.
DAC
Daewoo
Dart (commercial vehicle)
Eicher Motors
Euclid Trucks
FAP
HEPCO
Hitachi Construction Machinery
Hitachi Construction Machinery (Europe)
Iveco
John Deere
Kamaz
Kenworth
Kioleides
Komatsu
KrAZ
Leader Trucks
Liebherr Group
Mack Trucks
Mahindra Trucks & Buses Ltd.
MAN SE
Mercedes-Benz
Navistar International
New Holland
Peterbilt
SANY
Scania AB
ST Kinetics
Tata
Tatra (company)
Terex Corporation
Volvo Construction Equipment
Volvo Trucks
XCMG
See also
Cement mixer truck
Road roller
Combine harvester
Tractor
Crane construction (truck)
Bulldozer
Forklift
Dumper
Garbage truck
Live bottom trailer
Rear-eject haul truck bodies
Notes
References
Canadian inventions
Engineering vehicles
Trailers | Dump truck | [
"Engineering"
] | 4,188 | [
"Engineering vehicles",
"Dump trucks"
] |
1,110,270 | https://en.wikipedia.org/wiki/Plastoquinone | Plastoquinone (PQ) is a terpenoid-quinone (meroterpenoid) molecule involved in the electron transport chain in the light-dependent reactions of photosynthesis. The most common form of plastoquinone, known as PQ-A or PQ-9, is a 2,3-dimethyl-1,4-benzoquinone molecule with a side chain of nine isoprenyl units. There are other forms of plastoquinone, such as ones with shorter side chains like PQ-3 (which has 3 isoprenyl side units instead of 9) as well as analogs such as PQ-B, PQ-C, and PQ-D, which differ in their side chains. The benzoquinone and isoprenyl units are both nonpolar, anchoring the molecule within the inner section of a lipid bilayer, where the hydrophobic tails are usually found.
Plastoquinones are very structurally similar to ubiquinone, or coenzyme Q10, differing by the length of the isoprenyl side chain, replacement of the methoxy groups with methyl groups, and removal of the methyl group in the 2 position on the quinone. Like ubiquinone, it can come in several oxidation states: plastoquinone, plastosemiquinone (unstable), and plastoquinol, which differs from plastoquinone by having two hydroxyl groups instead of two carbonyl groups.
Plastoquinol, the reduced form, also functions as an antioxidant by reducing reactive oxygen species, some produced from the photosynthetic reactions, that could harm the cell membrane. One example of how it does this is by reacting with superoxides to form hydrogen peroxide and plastosemiquinone.
The prefix plasto- means either plastid or chloroplast, alluding to its location within the cell.
Role in photosynthesis
The role that plastoquinone plays in photosynthesis, more specifically in the light-dependent reactions of photosynthesis, is that of a mobile electron carrier through the membrane of the thylakoid.
Plastoquinone is reduced when it accepts two electrons from photosystem II and two hydrogen cations (H+) from the stroma of the chloroplast, thereby forming plastoquinol (PQH2). It transfers the electrons further down the electron transport chain to plastocyanin, a mobile, water-soluble electron carrier, through the cytochrome b6f protein complex. The cytochrome b6f protein complex catalyzes the electron transfer between plastoquinone and plastocyanin, but also transports the two protons into the lumen of thylakoid discs. This proton transfer forms an electrochemical gradient, which is used by ATP synthase at the end of the light dependent reactions in order to form ATP from ADP and Pi.
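To make the electrochemical-gradient step concrete, here is a small illustrative calculation of the proton-motive force across the thylakoid membrane, using the standard relation pmf = Δψ + (2.303RT/F)·(pH_out − pH_in). The Δψ and ΔpH values are assumed, textbook-scale numbers, not measurements from this article:

```python
# Proton-motive force across the thylakoid membrane (illustrative).
# pmf = dPsi + (2.303*R*T/F) * (pH_stroma - pH_lumen); in illuminated
# chloroplasts the gradient is dominated by the pH term.

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol
T = 298.0      # temperature, K

def proton_motive_force(delta_psi_mV, delta_pH):
    """pmf in millivolts; delta_pH = pH(stroma) - pH(lumen)."""
    return delta_psi_mV + (2.303 * R * T / F) * 1000 * delta_pH

# Assumed values: small membrane potential, ~3 pH units across the membrane
print(round(proton_motive_force(delta_psi_mV=10.0, delta_pH=3.0), 1))  # ~187 mV
```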
Within photosystem II
Plastoquinone is found within photosystem II in two specific binding sites, known as QA and QB. The plastoquinone at QA, the primary binding site, is very tightly bound, compared to the plastoquinone at QB, the secondary binding site, which is much more easily removed. QA is only transferred a single electron, so it has to transfer an electron to QB twice before QB is able to pick up two protons from the stroma and be replaced by another plastoquinone molecule. The protonated QB then joins a pool of free plastoquinone molecules in the membrane of the thylakoid. The free plastoquinone molecules eventually transfer electrons to the water-soluble plastocyanin so as to continue the light-dependent reactions. There are additional plastoquinone binding sites within photosystem II (QC and possibly QD), but their function and/or existence have not been fully elucidated.
Biosynthesis
The p-hydroxyphenylpyruvate is synthesized from tyrosine, while the solanesyl diphosphate is synthesized through the MEP/DOXP pathway. Homogentisate is formed from p-hydroxyphenylpyruvate and is then combined with solanesyl diphosphate through a condensation reaction. The resulting intermediate, 2-methyl-6-solanesyl-1,4-benzoquinol, is then methylated to form the final product, plastoquinol-9. This pathway is used in most photosynthetic organisms, such as algae and plants. However, cyanobacteria appear not to use homogentisate for synthesizing plastoquinol, and so presumably use a different pathway.
Derivatives
Some derivatives that were designed to penetrate mitochondrial cell membranes (SkQ1 (plastoquinonyl-decyl-triphenylphosphonium), SkQR1 (the rhodamine-containing analog of SkQ1), SkQ3) have anti-oxidant and protonophore activity. SkQ1 has been proposed as an anti-aging treatment, with the possible reduction of age-related vision issues due to its antioxidant ability. This antioxidant ability results from both its antioxidant ability to reduce reactive oxygen species (derived from the part of the molecule containing plastoquinonol), which are often formed within mitochondria, as well as its ability to increase ion exchange across membranes (derived from the part of the molecule containing cations that can dissolve within membranes). Specifically, like plastoquinol, SkQ1 has been shown to scavenge superoxides both within cells (in vivo) and outside of cells (in vitro). SkQR1 and SkQ1 have also been proposed as a possible way to treat brain issues like Alzheimer's due to their ability to potentially fix damages caused by amyloid beta. Additionally, SkQR1 has been shown as a way to reduce the issues caused by brain trauma through its antioxidant abilities, which help prevent cell death signals by reducing the amounts of reactive oxygen species coming from mitochondria.
References
External links
Plastoquinones – history, absorption spectra, and analogs.
Photosynthesis
Light reactions
1,4-Benzoquinones
Meroterpenoids | Plastoquinone | [
"Chemistry",
"Biology"
] | 1,367 | [
"Biochemistry",
"Light reactions",
"Photosynthesis",
"Biochemical reactions"
] |
1,110,600 | https://en.wikipedia.org/wiki/Oxaloacetic%20acid | Oxaloacetic acid (also known as oxalacetic acid or OAA) is a crystalline organic compound with the chemical formula HO2CC(O)CH2CO2H. Oxaloacetic acid, in the form of its conjugate base oxaloacetate, is a metabolic intermediate in many processes that occur in animals. It takes part in gluconeogenesis, the urea cycle, the glyoxylate cycle, amino acid synthesis, fatty acid synthesis and the citric acid cycle.
Properties
Oxaloacetic acid undergoes successive deprotonations to give the dianion:
HO2CC(O)CH2CO2H ⇌ −O2CC(O)CH2CO2H + H+, pKa = 2.22
−O2CC(O)CH2CO2H ⇌ −O2CC(O)CH2CO2− + H+, pKa = 3.89
At high pH, the enolizable proton is ionized:
−O2CC(O)CH2CO2− ⇌ −O2CC(O−)CHCO2− + H+, pKa = 13.03
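Given the three pKa values above, a short speciation sketch shows which protonation state dominates at a given pH, using the standard successive-dissociation formulas and treating the enolizable proton as a third acidic proton:

```python
# Fractions of the four protonation states of oxaloacetic acid
# (neutral acid, monoanion, dianion, trianion) as a function of pH,
# computed from the pKa values quoted above.

pKas = [2.22, 3.89, 13.03]

def species_fractions(pH):
    Ka = [10.0 ** -pk for pk in pKas]
    h = 10.0 ** -pH
    # Unnormalized abundances: neutral acid, monoanion, dianion, trianion
    terms = [h ** 3,
             h ** 2 * Ka[0],
             h * Ka[0] * Ka[1],
             Ka[0] * Ka[1] * Ka[2]]
    total = sum(terms)
    return [t / total for t in terms]

for pH in (1.0, 3.0, 7.0, 14.0):
    fracs = ", ".join(f"{frac:.3f}" for frac in species_fractions(pH))
    print(f"pH {pH:>4}: {fracs}")
```

At physiological pH the dianion (oxaloacetate) dominates, which is why the text refers to the conjugate base throughout the metabolic sections.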
The enol forms of oxaloacetic acid are particularly stable. Keto-enol tautomerization is catalyzed by the enzyme oxaloacetate tautomerase. trans-Enol-oxaloacetate also appears when tartrate is the substrate for fumarase.
Biosynthesis
Oxaloacetate forms in several ways in nature. A principal route is upon oxidation of L-malate, catalyzed by malate dehydrogenase, in the citric acid cycle. Malate is also oxidized by succinate dehydrogenase in a slow reaction with the initial product being enol-oxaloacetate.
It also arises from the condensation of pyruvate with carbonic acid, driven by the hydrolysis of ATP:
CH3C(O)CO2− + HCO3− + ATP → −O2CCH2C(O)CO2− + ADP + Pi
Occurring in the mesophyll of plants, this process proceeds via phosphoenolpyruvate, catalysed by phosphoenolpyruvate carboxylase. Oxaloacetate can also arise from transamination or deamination of aspartic acid.
Biochemical functions
Oxaloacetate is an intermediate of the citric acid cycle, where it reacts with acetyl-CoA to form citrate, catalyzed by citrate synthase. It is also involved in gluconeogenesis, the urea cycle, the glyoxylate cycle, amino acid synthesis, and fatty acid synthesis. Oxaloacetate is also a potent inhibitor of complex II.
Gluconeogenesis
Gluconeogenesis is a metabolic pathway consisting of a series of eleven enzyme-catalyzed reactions, resulting in the generation of glucose from non-carbohydrate substrates. The process begins in the mitochondrial matrix, where pyruvate molecules are found. A pyruvate molecule is carboxylated by pyruvate carboxylase, consuming one molecule each of ATP and water. This reaction results in the formation of oxaloacetate. NADH reduces oxaloacetate to malate; this transformation is needed to transport the molecule out of the mitochondria. Once in the cytosol, malate is oxidized back to oxaloacetate using NAD+. Oxaloacetate then remains in the cytosol, where the remaining reactions take place. It is decarboxylated and phosphorylated by phosphoenolpyruvate carboxykinase to become 2-phosphoenolpyruvate, using guanosine triphosphate (GTP) as the phosphate source. Glucose is obtained after further downstream processing.
Urea cycle
The urea cycle is a metabolic pathway that results in the formation of urea using one ammonium molecule from degraded amino acids, another ammonium group from aspartate and one bicarbonate molecule. This route commonly occurs in hepatocytes. The reactions related to the urea cycle produce NADH, and NADH can be produced in two different ways. One of these uses oxaloacetate. In the cytosol there are fumarate molecules. Fumarate can be transformed into malate by the actions of the enzyme fumarase. Malate is acted on by malate dehydrogenase to become oxaloacetate, producing a molecule of NADH. After that, oxaloacetate will be recycled to aspartate, as transaminases prefer these keto acids over the others. This recycling maintains the flow of nitrogen into the cell.
Glyoxylate cycle
The glyoxylate cycle is a variant of the citric acid cycle. It is an anabolic pathway occurring in plants and bacteria utilizing the enzymes isocitrate lyase and malate synthase. Some intermediate steps of the cycle are slightly different from the citric acid cycle; nevertheless oxaloacetate has the same function in both processes. This means that oxaloacetate in this cycle also acts as the primary reactant and final product. In fact the oxaloacetate is a net product of the glyoxylate cycle because its loop of the cycle incorporates two molecules of acetyl-CoA.
Fatty acid synthesis
In previous stages acetyl-CoA is transferred from the mitochondria to the cytoplasm where fatty acid synthase resides. The acetyl-CoA is transported as a citrate, which has been previously formed in the mitochondrial matrix from acetyl-CoA and oxaloacetate. This reaction usually initiates the citric acid cycle, but when there is no need of energy it is transported to the cytoplasm where it is broken down to cytoplasmic acetyl-CoA and oxaloacetate.
Another part of the cycle requires NADPH for the synthesis of fatty acids. Part of this reducing power is generated when the cytosolic oxaloacetate is returned to the mitochondria, since the inner mitochondrial membrane is impermeable to oxaloacetate. First the oxaloacetate is reduced to malate using NADH. Then the malate is decarboxylated to pyruvate. This pyruvate can easily enter the mitochondria, where it is carboxylated again to oxaloacetate by pyruvate carboxylase. In this way, the transfer of acetyl-CoA from the mitochondria into the cytoplasm produces a molecule of NADPH. The overall reaction, which is spontaneous, may be summarized as:
HCO3– + ATP + acetyl-CoA → ADP + Pi + malonyl-CoA
Amino acid synthesis
Six essential amino acids and three nonessential ones are synthesized from oxaloacetate and pyruvate. Aspartate and alanine are formed from oxaloacetate and pyruvate, respectively, by transamination from glutamate. Asparagine is synthesized by amidation of aspartate, with glutamine donating the NH4+.
These are nonessential amino acids, and their simple biosynthetic pathways occur in all organisms. Methionine, threonine, lysine, isoleucine, valine, and leucine are essential amino acids in humans and most vertebrates. Their biosynthetic pathways in bacteria are complex and interconnected.
Oxalate biosynthesis
Oxaloacetate produces oxalate by hydrolysis.
oxaloacetate + H2O → oxalate + acetate
This process is catalyzed by the enzyme oxaloacetase. This enzyme is seen in plants, but is not known in the animal kingdom.
See also
Dioxosuccinic acid
Glycolysis
Oxidative phosphorylation
Citric acid cycle
References
Citric acid cycle compounds
Dicarboxylic acids
Alpha-keto acids
Beta-keto acids
Metabolic intermediates
Biomolecules | Oxaloacetic acid | [
"Chemistry",
"Biology"
] | 1,744 | [
"Natural products",
"Biochemistry",
"Organic compounds",
"Citric acid cycle compounds",
"Metabolic intermediates",
"Biomolecules",
"Molecular biology",
"Structural biology",
"Metabolism"
] |
1,110,611 | https://en.wikipedia.org/wiki/Muscle%20contraction | Muscle contraction is the activation of tension-generating sites within muscle cells. In physiology, muscle contraction does not necessarily mean muscle shortening because muscle tension can be produced without changes in muscle length, such as when holding something heavy in the same position. The termination of muscle contraction is followed by muscle relaxation, which is a return of the muscle fibers to their low tension-generating state.
For the contractions to happen, the muscle cells must rely on the change in action of two types of filaments: thin and thick filaments.
The major constituent of thin filaments is a chain formed by helical coiling of two strands of actin, and thick filaments dominantly consist of chains of the motor-protein myosin. Together, these two filaments form myofibrils - the basic functional organelles in the skeletal muscle system.
In vertebrates, skeletal muscle contractions are neurogenic as they require synaptic input from motor neurons. A single motor neuron is able to innervate multiple muscle fibers, thereby causing the fibers to contract at the same time. Once innervated, the protein filaments within each skeletal muscle fiber slide past each other to produce a contraction, which is explained by the sliding filament theory. The contraction produced can be described as a twitch, summation, or tetanus, depending on the frequency of action potentials. In skeletal muscles, muscle tension is at its greatest when the muscle is stretched to an intermediate length as described by the length-tension relationship.
Unlike skeletal muscle, the contractions of smooth and cardiac muscles are myogenic (meaning that they are initiated by the smooth or heart muscle cells themselves instead of being stimulated by an outside event such as nerve stimulation), although they can be modulated by stimuli from the autonomic nervous system. The mechanisms of contraction in these muscle tissues are similar to those in skeletal muscle tissues.
Muscle contraction can also be described in terms of two variables: length and tension. In natural movements that underlie locomotor activity, muscle contractions are multifaceted as they are able to produce changes in length and tension in a time-varying manner. Therefore, neither length nor tension is likely to remain the same in skeletal muscles that contract during locomotion. Contractions can be described as isometric if the muscle tension changes but the muscle length remains the same. In contrast, a muscle contraction is described as isotonic if muscle tension remains the same throughout the contraction. If the muscle length shortens, the contraction is concentric; if the muscle length lengthens, the contraction is eccentric.
Types
Muscle contractions can be described based on two variables: force and length. Force itself can be differentiated as either tension or load. Muscle tension is the force exerted by the muscle on an object whereas a load is the force exerted by an object on the muscle. When muscle tension changes without any corresponding changes in muscle length, the muscle contraction is described as isometric. If the muscle length changes while muscle tension remains the same, then the muscle contraction is isotonic. In an isotonic contraction, the muscle length can either shorten to produce a concentric contraction or lengthen to produce an eccentric contraction. In natural movements that underlie locomotor activity, muscle contractions are multifaceted as they are able to produce changes in length and tension in a time-varying manner. Therefore, neither length nor tension is likely to remain constant when the muscle is active during locomotor activity.
Isometric contraction
An isometric contraction of a muscle generates tension without changing length. An example can be found when the muscles of the hand and forearm grip an object; the joints of the hand do not move, but muscles generate sufficient force to prevent the object from being dropped.
Isotonic contraction
In isotonic contraction, the tension in the muscle remains constant despite a change in muscle length. This occurs when a muscle's force of contraction matches the total load on the muscle.
Concentric contraction
In concentric contraction, muscle tension is sufficient to overcome the load, and the muscle shortens as it contracts. This occurs when the force generated by the muscle exceeds the load opposing its contraction.
During a concentric contraction, a muscle is stimulated to contract according to the sliding filament theory. This occurs throughout the length of the muscle, generating a force at the origin and insertion, causing the muscle to shorten and changing the angle of the joint. In relation to the elbow, a concentric contraction of the biceps would cause the arm to bend at the elbow as the hand moved from the leg to the shoulder (a biceps curl). A concentric contraction of the triceps would change the angle of the joint in the opposite direction, straightening the arm and moving the hand towards the leg.
Eccentric contraction
In eccentric contraction, the tension generated while isometric is insufficient to overcome the external load on the muscle and the muscle fibers lengthen as they contract. Rather than working to pull a joint in the direction of the muscle contraction, the muscle acts to decelerate the joint at the end of a movement or otherwise control the repositioning of a load. This can occur involuntarily (e.g., when attempting to move a weight too heavy for the muscle to lift) or voluntarily (e.g., when the muscle is 'smoothing out' a movement or resisting gravity such as during downhill walking). Over the short-term, strength training involving both eccentric and concentric contractions appear to increase muscular strength more than training with concentric contractions alone. However, exercise-induced muscle damage is also greater during lengthening contractions.
During an eccentric contraction of the biceps muscle, the elbow starts the movement while bent and then straightens as the hand moves away from the shoulder. During an eccentric contraction of the triceps muscle, the elbow starts the movement straight and then bends as the hand moves towards the shoulder. Desmin, titin, and other z-line proteins are involved in eccentric contractions, but their mechanism is poorly understood in comparison to cross-bridge cycling in concentric contractions.
Though the muscle is doing a negative amount of mechanical work, (work is being done on the muscle), chemical energy (of fat or glucose, or temporarily stored in ATP) is nevertheless consumed, although less than would be consumed during a concentric contraction of the same force. For example, one expends more energy going up a flight of stairs than going down the same flight.
Muscles undergoing heavy eccentric loading suffer greater damage when overloaded (such as during muscle building or strength training exercise) as compared to concentric loading. When eccentric contractions are used in weight training, they are normally called negatives. During a concentric contraction, contractile muscle myofilaments of myosin and actin slide past each other, pulling the Z-lines together. During an eccentric contraction, the myofilaments slide past each other the opposite way, though the actual movement of the myosin heads during an eccentric contraction is not known. Exercise featuring a heavy eccentric load can actually support a greater weight (muscles are approximately 40% stronger during eccentric contractions than during concentric contractions) and also results in greater muscular damage and delayed onset muscle soreness one to two days after training. Exercise that incorporates both eccentric and concentric muscular contractions (i.e., involving a strong contraction and a controlled lowering of the weight) can produce greater gains in strength than concentric contractions alone. While unaccustomed heavy eccentric contractions can easily lead to overtraining, moderate training may confer protection against injury.
Eccentric contractions in movement
Eccentric contractions normally occur as a braking force in opposition to a concentric contraction to protect joints from damage. During virtually any routine movement, eccentric contractions assist in keeping motions smooth, but can also slow rapid movements such as a punch or throw. Part of training for rapid movements such as pitching during baseball involves reducing eccentric braking allowing a greater power to be developed throughout the movement.
Eccentric contractions are being researched for their ability to speed rehabilitation of weak or injured tendons. Achilles tendinitis and patellar tendonitis (also known as jumper's knee or patellar tendonosis) have been shown to benefit from high-load eccentric contractions.
Vertebrate
In vertebrate animals, there are three types of muscle tissues: skeletal, smooth, and cardiac. Skeletal muscle constitutes the majority of muscle mass in the body and is responsible for locomotor activity. Smooth muscle forms blood vessels, the gastrointestinal tract, and other areas in the body that produce sustained contractions. Cardiac muscle makes up the heart, which pumps blood. Skeletal and cardiac muscles are called striated muscle because of their striped appearance under a microscope, which is due to the highly organized alternating pattern of A bands and I bands.
Skeletal muscle
Excluding reflexes, all skeletal muscle contractions occur as a result of signals originating in the brain. The brain sends electrochemical signals through the nervous system to the motor neuron that innervates several muscle fibers. In the case of some reflexes, the signal to contract can originate in the spinal cord through a feedback loop with the grey matter. Other actions such as locomotion, breathing, and chewing have a reflex aspect to them: the contractions can be initiated either consciously or unconsciously.
Neuromuscular junction
A neuromuscular junction is a chemical synapse formed by the contact between a motor neuron and a muscle fiber. It is the site at which a motor neuron transmits a signal to a muscle fiber to initiate muscle contraction. The sequence of events that results in the depolarization of the muscle fiber at the neuromuscular junction begins when an action potential is initiated in the cell body of a motor neuron, which is then propagated by saltatory conduction along its axon toward the neuromuscular junction. Once it reaches the terminal bouton, the action potential causes a Ca2+ influx into the terminal by way of voltage-gated calcium channels. The Ca2+ influx causes synaptic vesicles containing the neurotransmitter acetylcholine to fuse with the plasma membrane, releasing acetylcholine into the synaptic cleft between the motor neuron terminal and the neuromuscular junction of the skeletal muscle fiber. Acetylcholine diffuses across the synapse and binds to and activates nicotinic acetylcholine receptors on the neuromuscular junction. Activation of the nicotinic receptor opens its intrinsic sodium/potassium channel, causing sodium to rush in and potassium to trickle out. As a result, the sarcolemma reverses polarity and its voltage quickly jumps from the resting membrane potential of −90 mV to as high as +75 mV as sodium enters. The membrane potential then becomes hyperpolarized as potassium exits and is subsequently adjusted back to the resting membrane potential. This rapid fluctuation is called the end-plate potential. The voltage-gated ion channels of the sarcolemma next to the end plate open in response to the end-plate potential. They are specific to either sodium or potassium and allow only one of the two through. This wave of ion movements creates the action potential that spreads from the motor end plate in all directions. If action potentials stop arriving, then acetylcholine ceases to be released from the terminal bouton. The remaining acetylcholine in the synaptic cleft is either degraded by active acetylcholinesterase or reabsorbed by the synaptic knob, and none is left to replace the degraded acetylcholine.
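The resting and peak voltages quoted above are bounded by the Nernst equilibrium potentials of potassium and sodium. The sketch below computes them from the Nernst equation, E = (RT/zF)·ln([out]/[in]), using typical textbook ion concentrations that are assumptions rather than values from this article:

```python
# Nernst equilibrium potentials for Na+ and K+ at body temperature.
# Concentrations (mM) are typical textbook values for skeletal muscle.

import math

R, F = 8.314, 96485.0   # J/(mol K), C/mol
T = 310.0               # body temperature, K

def nernst_mV(c_out, c_in, z=1):
    return (R * T / (z * F)) * math.log(c_out / c_in) * 1000

print(round(nernst_mV(145, 12), 1))   # Na+: ~ +67 mV, pulls the fiber positive
print(round(nernst_mV(4, 155), 1))    # K+:  ~ -98 mV, sets the resting potential
```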
Excitation–contraction coupling
Excitation–contraction coupling (ECC) is the process by which a muscular action potential in the muscle fiber causes myofibrils to contract. In skeletal muscles, excitation–contraction coupling relies on a direct coupling between two key proteins, the sarcoplasmic reticulum (SR) calcium release channel identified as the ryanodine receptor 1 (RYR1) and the voltage-gated L-type calcium channel identified as dihydropyridine receptors, (DHPRs). DHPRs are located on the sarcolemma (which includes the surface sarcolemma and the transverse tubules), while the RyRs reside across the SR membrane. The close apposition of a transverse tubule and two SR regions containing RyRs is described as a triad and is predominantly where excitation–contraction coupling takes place.
Excitation–contraction coupling (ECC) occurs when depolarization of skeletal muscle (usually through neural innervation) results in a muscle action potential. This action potential spreads across the muscle's surface and into the muscle fiber's network of T-tubules, depolarizing the inner portion of the muscle fiber. This activates dihydropyridine receptors in the terminal cisternae, which are in close proximity to ryanodine receptors in the adjacent sarcoplasmic reticulum. The activated dihydropyridine receptors physically interact with ryanodine receptors to activate them via foot processes (involving conformational changes that allosterically activate the ryanodine receptors). As the ryanodine receptors open, Ca2+ is released from the sarcoplasmic reticulum into the local junctional space and diffuses into the bulk cytoplasm to cause a calcium spark. The action potential creates a near-synchronous activation of thousands of calcium sparks and causes a cell-wide increase in calcium, giving rise to the upstroke of the calcium transient. The Ca2+ released into the cytosol binds to troponin C on the actin filaments. This binding allows the actin filaments to undergo cross-bridge cycling, producing force and, in some situations, motion.
When the desired motion is accomplished, relaxation can be achieved quickly through numerous pathways. Relaxation is quickly achieved through a Ca2+ buffer with various cytoplasmic proteins binding to Ca2+ with very high affinity. These cytoplasmic proteins allow for quick relaxation in fast twitch muscles. Although slower, the sarco/endoplasmic reticulum calcium-ATPase (SERCA) actively pumps Ca2+ back into the sarcoplasmic reticulum, resulting in a permanent relaxation until the next action potential arrives.
Mitochondria also participate in Ca2+ reuptake, ultimately delivering their gathered Ca2+ to SERCA for storage in the sarcoplasmic reticulum. A few of the relaxation mechanisms (NCX, Ca2+ pumps and Ca2+ leak channels) move Ca2+ completely out of the cells as well. As Ca2+ concentration declines to resting levels, Ca2+ releases from Troponin C, disallowing cross bridge-cycling, causing the force to decline and relaxation to occur. Once relaxation has fully occurred, the muscle is able to contract again, thus fully resetting the cycle.
Sliding filament theory
The sliding filament theory describes a process used by muscles to contract. It is a cycle of repetitive events that cause a thin filament to slide over a thick filament and generate tension in the muscle. It was independently developed by Andrew Huxley and Rolf Niedergerke and by Hugh Huxley and Jean Hanson in 1954. Physiologically, this contraction is not uniform across the sarcomere; the central position of the thick filaments becomes unstable and can shift during contraction but this is countered by the actions of the elastic myofilament of titin. This fine myofilament maintains uniform tension across the sarcomere by pulling the thick filament into a central position.
Cross-bridge cycle
Cross-bridge cycling is a sequence of molecular events that underlies the sliding filament theory. A cross-bridge is a myosin projection, consisting of two myosin heads, that extends from the thick filaments. Each myosin head has two binding sites: one for adenosine triphosphate (ATP) and another for actin. The binding of ATP to a myosin head detaches myosin from actin, thereby allowing myosin to bind to another actin molecule. Once attached, the ATP is hydrolyzed by myosin, which uses the released energy to move into the "cocked position" whereby it binds weakly to a part of the actin binding site. The remainder of the actin binding site is blocked by tropomyosin. With the ATP hydrolyzed, the cocked myosin head now contains adenosine diphosphate (ADP) + Pi. Two Ca2+ ions bind to troponin C on the actin filaments. The troponin-Ca2+ complex causes tropomyosin to slide over and unblock the remainder of the actin binding site. Unblocking the rest of the actin binding sites allows the two myosin heads to close and myosin to bind strongly to actin. The myosin head then releases the inorganic phosphate and initiates a power stroke, which generates a force of 2 pN. The power stroke moves the actin filament inwards, thereby shortening the sarcomere. At the end of the power stroke, ADP is released from the myosin head, leaving myosin attached to actin in a rigor state until another ATP binds to myosin. A lack of ATP would result in the rigor state characteristic of rigor mortis. Once another ATP binds to myosin, the myosin head will again detach from actin and another cross-bridge cycle occurs.
Cross-bridge cycling is able to continue as long as there are sufficient amounts of ATP and Ca2+ in the cytoplasm. Termination of cross-bridge cycling can occur when Ca2+ is actively pumped back into the sarcoplasmic reticulum. When Ca2+ is no longer present on the thin filament, the tropomyosin changes conformation back to its previous state so as to block the binding sites again. The myosin ceases binding to the thin filament, and the muscle relaxes. The active pumping of Ca2+ ions into the sarcoplasmic reticulum creates a Ca2+ deficiency in the fluid around the myofibrils, causing the removal of Ca2+ ions from the troponin. Thus, the tropomyosin-troponin complex again covers the binding sites on the actin filaments and contraction ceases.
Gradation of skeletal muscle contractions
The strength of skeletal muscle contractions can be broadly separated into twitch, summation, and tetanus. A twitch is a single contraction and relaxation cycle produced by an action potential within the muscle fiber itself. The time between a stimulus to the motor nerve and the subsequent contraction of the innervated muscle is called the latent period, which usually takes about 10 ms and is caused by the time taken for nerve action potential to propagate, the time for chemical transmission at the neuromuscular junction, then the subsequent steps in excitation-contraction coupling.
If another muscle action potential were to be produced before the complete relaxation of a muscle twitch, then the next twitch will simply sum onto the previous twitch, thereby producing a summation. Summation can be achieved in two ways: frequency summation and multiple fiber summation. In frequency summation, the force exerted by the skeletal muscle is controlled by varying the frequency at which action potentials are sent to muscle fibers. Action potentials do not arrive at muscles synchronously, and, during a contraction, some fraction of the fibers in the muscle will be firing at any given time. In a typical circumstance, when humans are exerting their muscles as hard as they are consciously able, roughly one-third of the fibers in each of those muscles will fire at once, though this ratio can be affected by various physiological and psychological factors (including Golgi tendon organs and Renshaw cells). This 'low' level of contraction is a protective mechanism to prevent avulsion of the tendon: the force generated by a 95% contraction of all fibers is sufficient to damage the body. In multiple fiber summation, if the central nervous system sends a weak signal to contract a muscle, the smaller motor units, being more excitable than the larger ones, are stimulated first. As the strength of the signal increases, progressively larger motor units are excited in addition to the smaller ones, with the largest motor units having as much as 50 times the contractile strength of the smaller ones. As more and larger motor units are activated, the force of muscle contraction becomes progressively stronger. This concept, known as the size principle, allows for a gradation of muscle force during weak contraction to occur in small steps, which then become progressively larger when greater amounts of force are required.
Finally, if the frequency of muscle action potentials increases such that the muscle contraction reaches its peak force and plateaus at this level, then the contraction is a tetanus.
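The recruitment scheme described above lends itself to a compact sketch. The thresholds and twitch forces below are hypothetical numbers chosen only to show force rising in progressively larger steps; they are not physiological measurements.

```python
# Motor-unit pool obeying the size principle: small, excitable units are
# recruited first; the largest unit here is 50x stronger than the smallest.
# (threshold, twitch force) pairs, both in arbitrary units:
MOTOR_UNITS = [(0.1, 1.0), (0.3, 5.0), (0.6, 20.0), (0.9, 50.0)]

def total_force(drive):
    """Sum the twitch force of every unit whose threshold the drive exceeds."""
    return sum(force for threshold, force in MOTOR_UNITS if drive >= threshold)

for drive in (0.0, 0.2, 0.5, 0.8, 1.0):
    print(f"drive {drive:.1f} -> force {total_force(drive):5.1f}")
```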
Length-tension relationship
The length–tension relationship relates the strength of an isometric contraction to the length of the muscle at which the contraction occurs. Muscles operate with greatest active tension when close to an ideal length (often their resting length). When stretched or shortened beyond this (whether due to the action of the muscle itself or by an outside force), the maximum active tension generated decreases. This decrease is minimal for small deviations, but the tension drops off rapidly as the length deviates further from the ideal. Due to the presence of elastic proteins within a muscle cell (such as titin) and extracellular matrix, as the muscle is stretched beyond a given length, there is an entirely passive tension, which opposes lengthening. Combined, there is a strong resistance to lengthening an active muscle far beyond the peak of active tension.
Force-velocity relationships
The force–velocity relationship relates the speed at which a muscle changes its length (usually regulated by external forces, such as load or other muscles) to the amount of force that it generates. Force declines in a hyperbolic fashion relative to the isometric force as the shortening velocity increases, eventually reaching zero at some maximum velocity. The reverse holds true when the muscle is stretched: force increases above the isometric maximum, until finally reaching an absolute maximum. This intrinsic property of active muscle tissue plays a role in the active damping of joints that are actuated by simultaneously active opposing muscles. In such cases, the force-velocity profile enhances the force produced by the lengthening muscle at the expense of the shortening muscle. This favoring of whichever muscle returns the joint to equilibrium effectively increases the damping of the joint. Moreover, the strength of the damping increases with muscle force. The motor system can thus actively control joint damping via the simultaneous contraction (co-contraction) of opposing muscle groups.
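The hyperbolic decline described here is classically captured by A. V. Hill's relation (F + a)(v + b) = (F0 + a)b for shortening muscle (see Hill's muscle model in the See also list). The sketch below evaluates that relation with illustrative, not measured, parameter values.

```python
# Hill's hyperbolic force-velocity relation for shortening muscle:
#   (F + a) * (v + b) = (F0 + a) * b
# F0 is the isometric force; a and b are fitted constants. The values
# below are illustrative only.
F0 = 1.0       # isometric force, normalized
a = 0.25 * F0  # Hill constant with units of force
b = 0.25       # Hill constant with units of velocity (muscle lengths/s)

def hill_force(v):
    """Force produced while shortening at velocity v (v >= 0)."""
    return (F0 + a) * b / (v + b) - a

v_max = b * F0 / a  # shortening velocity at which force falls to zero
for v in (0.0, 0.25 * v_max, 0.5 * v_max, v_max):
    print(f"v = {v:.2f} -> F = {hill_force(v):.3f}")
```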
Smooth muscle
Smooth muscles can be divided into two subgroups: single-unit and multiunit. Single-unit smooth muscle cells can be found in the gut and blood vessels. Because these cells are linked together by gap junctions, they are able to contract as a functional syncytium. Single-unit smooth muscle cells contract myogenically, which can be modulated by the autonomic nervous system.
Unlike single-unit smooth muscle cells, multiunit smooth muscle cells are found in the muscle of the eye and in the base of hair follicles. Multiunit smooth muscle cells contract by being separately stimulated by nerves of the autonomic nervous system. As such, they allow for fine control and gradual responses, much like motor unit recruitment in skeletal muscle.
Mechanisms of smooth muscle contraction
The contractile activity of smooth muscle cells can be tonic (sustained) or phasic (transient) and is influenced by multiple inputs such as spontaneous electrical activity, neural and hormonal inputs, local changes in chemical composition, and stretch. This is in contrast to the contractile activity of skeletal muscle cells, which relies on a single neural input. Some types of smooth muscle cells are able to generate their own action potentials spontaneously, which usually occur following a pacemaker potential or a slow wave potential. These action potentials are generated by the influx of extracellular Ca2+, and not Na+. Like skeletal muscles, cytosolic Ca2+ ions are also required for crossbridge cycling in smooth muscle cells.
The two sources for cytosolic Ca2+ in smooth muscle cells are the extracellular Ca2+ entering through calcium channels and the Ca2+ ions that are released from the sarcoplasmic reticulum. The elevation of cytosolic Ca2+ results in more Ca2+ binding to calmodulin, which then binds and activates myosin light-chain kinase. The calcium-calmodulin-myosin light-chain kinase complex phosphorylates myosin on the 20 kilodalton (kDa) myosin light chains on amino acid residue serine 19, enabling the molecular interaction of myosin and actin, initiating contraction and activating the myosin ATPase. Unlike skeletal muscle cells, smooth muscle cells lack troponin, even though they contain the thin filament protein tropomyosin and other notable proteins, caldesmon and calponin. Thus, smooth muscle contractions are initiated by the Ca2+-activated phosphorylation of myosin rather than by Ca2+ binding to the troponin complex that regulates myosin binding sites on actin as in skeletal and cardiac muscles.
Termination of crossbridge cycling (and leaving the muscle in latch-state) occurs when myosin light chain phosphatase removes the phosphate groups from the myosin heads. Phosphorylation of the 20 kDa myosin light chains correlates well with the shortening velocity of smooth muscle. During this period, there is a rapid burst of energy use as measured by oxygen consumption. Within a few minutes of initiation, the calcium level markedly decreases, the 20 kDa myosin light chains' phosphorylation decreases, and energy use decreases; however, force in tonic smooth muscle is maintained. During contraction of muscle, rapidly cycling crossbridges form between activated actin and phosphorylated myosin, generating force. It is hypothesized that the maintenance of force results from dephosphorylated "latch-bridges" that slowly cycle and maintain force. A number of kinases such as rho kinase, DAPK3, and protein kinase C are believed to participate in the sustained phase of contraction, and Ca2+ flux may be significant.
Neuromodulation
Although smooth muscle contractions are myogenic, the rate and strength of their contractions can be modulated by the autonomic nervous system. Postganglionic nerve fibers of parasympathetic nervous system release the neurotransmitter acetylcholine, which binds to muscarinic acetylcholine receptors (mAChRs) on smooth muscle cells. These receptors are metabotropic, or G-protein coupled receptors that initiate a second messenger cascade. Conversely, postganglionic nerve fibers of the sympathetic nervous system release the neurotransmitters epinephrine and norepinephrine, which bind to adrenergic receptors that are also metabotropic. The exact effects on the smooth muscle depend on the specific characteristics of the receptor activated—both parasympathetic input and sympathetic input can be either excitatory (contractile) or inhibitory (relaxing).
Cardiac muscle
There are two types of cardiac muscle cells: autorhythmic and contractile. Autorhythmic cells do not contract, but instead set the pace of contraction for other cardiac muscle cells, which can be modulated by the autonomic nervous system. In contrast, contractile muscle cells (cardiomyocytes) constitute the majority of the heart muscle and are able to contract.
Excitation-contraction coupling
In both skeletal and cardiac muscle excitation-contraction (E-C) coupling, depolarization conduction and Ca2+ release processes occur. However, though the proteins involved are similar, they are distinct in structure and regulation. The dihydropyridine receptors (DHPRs) are encoded by different genes, and the ryanodine receptors (RyRs) are distinct isoforms. Furthermore, the DHPR contacts RyR1 (the main RyR isoform in skeletal muscle) to regulate Ca2+ release in skeletal muscle, while the L-type calcium channel (the DHPR of cardiac myocytes) and RyR2 (the main RyR isoform in cardiac muscle) are not physically coupled in cardiac muscle, but instead face each other across a junctional coupling.
Unlike skeletal muscle, E-C coupling in cardiac muscle is thought to depend primarily on a mechanism called calcium-induced calcium release, which is based on the junctional structure between the T-tubule and the sarcoplasmic reticulum. Junctophilin-2 (JPH2) is essential to maintain this structure, as well as the integrity of the T-tubule. Another protein, receptor accessory protein 5 (REEP5), functions to keep the normal morphology of the junctional SR. Defects of junctional coupling can result from deficiencies of either of the two proteins. During the process of calcium-induced calcium release, RyR2s are activated by a calcium trigger, which is brought about by the flow of Ca2+ through the L-type calcium channels. Reflecting this junctional arrangement, cardiac muscle tends to exhibit diad structures, rather than triads.
Excitation-contraction coupling in cardiac muscle cells occurs when an action potential is initiated by pacemaker cells in the sinoatrial node or atrioventricular node and conducted to all cells in the heart via gap junctions. The action potential travels along the surface membrane into T-tubules (the latter are not seen in all cardiac cell types) and the depolarisation causes extracellular Ca2+ to enter the cell via L-type calcium channels and possibly the sodium-calcium exchanger (NCX) during the early part of the plateau phase. Although this Ca2+ influx accounts for only about 10% of the Ca2+ needed for activation, it is relatively larger than that of skeletal muscle. This influx causes a small local increase in intracellular Ca2+. The increase of intracellular Ca2+ is detected by RyR2 in the membrane of the sarcoplasmic reticulum, which releases Ca2+ in a positive feedback physiological response. This positive feedback is known as calcium-induced calcium release and gives rise to calcium sparks (Ca2+ sparks). The spatial and temporal summation of ~30,000 Ca2+ sparks gives a cell-wide increase in cytoplasmic calcium concentration. The increase in cytosolic calcium following the flow of calcium through the cell membrane and sarcoplasmic reticulum is moderated by calcium buffers, which bind a large proportion of intracellular calcium. As a result, a large increase in total calcium leads to a relatively small rise in free Ca2+.
The cytoplasmic calcium binds to Troponin C, moving the tropomyosin complex off the actin binding site allowing the myosin head to bind to the actin filament. From this point on, the contractile mechanism is essentially the same as for skeletal muscle (above). Briefly, using ATP hydrolysis, the myosin head pulls the actin filament toward the centre of the sarcomere.
Following systole, intracellular calcium is taken up by the sarco/endoplasmic reticulum ATPase (SERCA) pump back into the sarcoplasmic reticulum ready for the next cycle to begin. Calcium is also ejected from the cell mainly by the sodium-calcium exchanger (NCX) and, to a lesser extent, a plasma membrane calcium ATPase. Some calcium is also taken up by the mitochondria. A regulatory protein, phospholamban, serves as a brake for SERCA. At low heart rates, phospholamban is active and slows down the activity of the ATPase so that Ca2+ does not have to leave the cell entirely. At high heart rates, phospholamban is phosphorylated and deactivated, allowing SERCA to pump most Ca2+ from the cytoplasm back into the sarcoplasmic reticulum. Once again, calcium buffers moderate this fall in Ca2+ concentration, permitting a relatively small decrease in free Ca2+ concentration in response to a large change in total calcium. The falling Ca2+ concentration allows the troponin complex to dissociate from the actin filament, thereby ending contraction. The heart relaxes, allowing the ventricles to fill with blood and begin the cardiac cycle again.
Invertebrate
Circular and longitudinal muscles
In annelids such as earthworms and leeches, circular and longitudinal muscle cells form the body wall of these animals and are responsible for their movement. In an earthworm that is moving through soil, for example, contractions of circular and longitudinal muscles occur reciprocally while the coelomic fluid serves as a hydroskeleton by maintaining turgidity of the earthworm. When the circular muscles in the anterior segments contract, the anterior portion of the animal's body begins to constrict radially, which pushes the incompressible coelomic fluid forward and increases the length of the animal. As a result, the front end of the animal moves forward. As the front end of the earthworm becomes anchored and the circular muscles in the anterior segments become relaxed, a wave of longitudinal muscle contractions passes backwards, which pulls the rest of the animal's trailing body forward. These alternating waves of circular and longitudinal contractions are called peristalsis, which underlies the creeping movement of earthworms.
Obliquely striated muscles
Invertebrates such as annelids, mollusks, and nematodes, possess obliquely striated muscles, which contain bands of thick and thin filaments that are arranged helically rather than transversely, like in vertebrate skeletal or cardiac muscles. In bivalves, the obliquely striated muscles can maintain tension over long periods without using too much energy. Bivalves use these muscles to keep their shells closed.
Asynchronous muscles
Advanced insects such as wasps, flies, bees, and beetles possess asynchronous muscles that constitute the flight muscles in these animals. These flight muscles are often called fibrillar muscles because they contain myofibrils that are thick and conspicuous. A remarkable feature of these muscles is that they do not require stimulation for each muscle contraction. Hence, they are called asynchronous muscles because the number of contractions in these muscles does not correspond (or synchronize) with the number of action potentials. For example, a wing muscle of a tethered fly may receive action potentials at a frequency of 3 Hz but it is able to beat at a frequency of 120 Hz. The high frequency beating is made possible because the muscles are connected to a resonant system, which is driven to a natural frequency of vibration.
History
In 1780, Luigi Galvani discovered that the muscles of dead frogs' legs twitched when struck by an electrical spark. This was one of the first forays into the study of bioelectricity, a field that still studies the electrical patterns and signals in tissues such as nerves and muscles.
In 1952, the term excitation–contraction coupling was coined to describe the physiological process of converting an electrical stimulus to a mechanical response. This process is fundamental to muscle physiology, whereby the electrical stimulus is usually an action potential and the mechanical response is contraction. Excitation–contraction coupling can be dysregulated in many diseases. Though excitation–contraction coupling has been known for over half a century, it is still an active area of biomedical research. The general scheme is that an action potential arrives to depolarize the cell membrane. By mechanisms specific to the muscle type, this depolarization results in an increase in cytosolic calcium that is called a calcium transient. This increase in calcium activates calcium-sensitive contractile proteins that then use ATP to cause cell shortening.
The mechanism for muscle contraction evaded scientists for years and requires continued research and updating. The sliding filament theory was independently developed by Andrew F. Huxley and Rolf Niedergerke and by Hugh Huxley and Jean Hanson. Their findings were published as two consecutive papers published in the 22 May 1954 issue of Nature under the common theme "Structural Changes in Muscle During Contraction".
See also
Anatomical terms of motion
Calcium-induced calcium release
Cardiac action potential
Cramp
Dystonia
Exercise physiology
Fasciculation
Hill's muscle model
Hypnic jerk
In vitro muscle testing
Lombard's paradox
Myoclonus
Rigor mortis
Spasm
Uterine contraction
References
Further reading
Krans, J. L. (2010) The Sliding Filament Theory of Muscle Contraction. Nature Education 3(9):66
Saladin, Kenneth S., Stephen J. Sullivan, and Christina A. Gan. (2015). Anatomy & Physiology: The Unity of Form and Function. 7th ed. New York: McGraw-Hill Education.
External links
Animation: Myofilament Contraction
Sliding Filament Model of Muscle Contraction
Exercise physiology
Muscular system
Skeletal muscle
Musculoskeletal system
Neurology | Muscle contraction | [
"Biology"
] | 7,739 | [
"Organ systems",
"Musculoskeletal system"
] |
1,110,671 | https://en.wikipedia.org/wiki/Copper-clad%20steel | Copper-clad steel (CCS), also known as copper-covered steel or the trademarked name Copperweld is a bi-metallic product, mainly used in the wire industry that combines the high mechanical strength of steel with the conductivity and corrosion resistance of copper.
It is mainly used for grounding purposes, line tracing to locate underground utilities, drop wire of telephone cables, and inner conductor of coaxial cables, including thin hookup cables like RG-174 and CATV cable. It is also used in some antennas for RF conducting wires.
History
The first recorded attempt to make copper clad steel wire took place in the early 1860s. Although for over 100 years people had been suggesting various ways of uniting copper and steel, it was not until the period mentioned that Farmer and Milliken tried wrapping a strip of copper about a steel wire. American engineers in 1883 and again in the 1890s made attempts to produce a copper-steel wire, in one instance at least, by electroplating copper on steel.
The Duplex Metals Co. traces its beginning to John Ferreol Monnot between 1900 and 1905. He had been very interested in the work of Mr. Martin in Paris.
"After several years devoted to experimenting, [he] organized the Duplex Metals Company. Prior to his discovery of the process under which this company operates in producing its copper clad, probably almost every other possible way of welding copper and steel together had been tried by Mr. Monnot, but found useless for the purpose."
Uses
Copper-clad steel wire finds applications in grounding, connection of ground rods to metallic structures, ground grid meshes, substations, power installations, and lightning arresters.
This wire is also sometimes used for power transmission.
Copper coated welding wire has become common since wire welding equipment has become popular.
Copper-clad steel is occasionally used for making durable radio antennas, where its HF conductivity is nearly identical to a same-diameter solid copper conductor. It is most often used in antennas with long spans of unsupported wire, which need extra strength to withstand high tension which would cause solid copper or aluminum wire to break or stretch excessively.
Properties
The main properties of these conductors include:
Good corrosion resistance of copper
High tensile strength of steel
Resistance against material fatigue
Advantages
Since the outer conductor layer is low-resistivity copper and only the core is higher-resistivity steel, the skin effect confines high-frequency currents to the copper, giving RF transmission lines with heavy copper cladding a resistance at high frequencies nearly equivalent to that of a solid copper wire.
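Whether a given cladding thickness behaves like solid copper can be judged against the skin depth, delta = sqrt(rho / (pi f mu)). A minimal sketch, assuming room-temperature copper resistivity and treating the cladding as non-magnetic:

```python
# Skin depth in copper: delta = sqrt(rho / (pi * f * mu)). Where delta is
# much thinner than the cladding, essentially all RF current flows in the
# copper layer and the steel core carries almost none.
import math

RHO_CU = 1.68e-8           # copper resistivity at room temperature, ohm*m
MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth(freq_hz, rho=RHO_CU, mu=MU_0):
    """Skin depth in metres at frequency freq_hz."""
    return math.sqrt(rho / (math.pi * freq_hz * mu))

for f in (1e6, 10e6, 100e6, 1e9):  # 1 MHz to 1 GHz
    print(f"{f / 1e6:8.1f} MHz -> skin depth {skin_depth(f) * 1e6:7.2f} um")
```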
Tensile strength of copper-clad steel conductors is greater than that of ordinary copper conductors permitting greater span lengths than with copper.
Another advantage is that smaller diameter copper-clad steel conductors may be used in coaxial cables, permitting higher impedance and smaller cable diameter than with copper conductors of similar strength.
Due to the inseparable union of the two metals and the low amount of the more costly one, copper-clad steel deters theft: copper recovery is impractical, so the wire has very little scrap value.
Installations with copper-clad steel conductors are generally accepted as fulfilling the legal specifications for a good electrical ground. For this reason its use is preferred by industrial companies and utilities when cost is a concern.
See also
Copper conductor
Copper-clad aluminium wire
References
External links
Electrical wiring
Composite materials
Steel
Bimetal | Copper-clad steel | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 678 | [
"Electrical systems",
"Building engineering",
"Metallurgy",
"Composite materials",
"Bimetal",
"Physical systems",
"Materials",
"Electrical engineering",
"Electrical wiring",
"Matter"
] |
1,112,273 | https://en.wikipedia.org/wiki/Heat%20of%20combustion | The heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it.
The calorific value is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard conditions. The chemical reaction is typically a hydrocarbon or other organic molecule reacting with oxygen to form carbon dioxide and water and release heat. It may be expressed with the quantities:
energy/mole of fuel
energy/mass of fuel
energy/volume of the fuel
There are two kinds of enthalpy of combustion, called high(er) and low(er) heat(ing) value, depending on how much the products are allowed to cool and whether compounds like H2O are allowed to condense.
The high heat values are conventionally measured with a bomb calorimeter. Low heat values are calculated from high heat value test data. They may also be calculated as the difference between the heats of formation ΔH°f of the products and reactants (though this approach is somewhat artificial since most heats of formation are typically calculated from measured heats of combustion).
For a fuel of composition CcHhOoNn, the (higher) heat of combustion is 418 kJ/mol × (c + h/4 − o/2), usually to a good approximation (±3%), though it gives poor results for some compounds such as (gaseous) formaldehyde and carbon monoxide, and can be significantly off if o + n > c, as for glycerine dinitrate, C3H6N2O7.
By convention, the (higher) heat of combustion is defined to be the heat released for the complete combustion of a compound in its standard state to form stable products in their standard states: hydrogen is converted to water (in its liquid state), carbon is converted to carbon dioxide gas, and nitrogen is converted to nitrogen gas. That is, the heat of combustion, ΔH°comb, is the heat of reaction of the following process:
CcHhOoNn (std.) + (c + h/4 − o/2) O2 (g) → c CO2 (g) + (h/2) H2O (l) + (n/2) N2 (g)
Chlorine and sulfur are not quite standardized; they are usually assumed to convert to hydrogen chloride gas and SO2 or SO3 gas, respectively, or to dilute aqueous hydrochloric and sulfuric acids, respectively, when the combustion is conducted in a bomb calorimeter containing some quantity of water.
Ways of determination
Gross and net
Zwolinski and Wilhoit defined, in 1972, "gross" and "net" values for heats of combustion. In the gross definition the products are the most stable compounds, e.g. H2O (l), Br2 (l), I2 (s) and H2SO4 (l). In the net definition the products are the gases produced when the compound is burned in an open flame, e.g. H2O (g), Br2 (g), I2 (g) and SO2 (g). In both definitions the products for C, F, Cl and N are CO2 (g), HF (g), Cl2 (g) and N2 (g), respectively.
Dulong's Formula
The heating value of a fuel can be calculated with the results of ultimate analysis of fuel. From analysis, percentages of the combustibles in the fuel (carbon, hydrogen, sulfur) are known. Since the heat of combustion of these elements is known, the heating value can be calculated using Dulong's Formula:
HHV [kJ/g]= 33.87mC + 122.3(mH - mO ÷ 8) + 9.4mS
where mC, mH, mO, mN, and mS are the contents of carbon, hydrogen, oxygen, nitrogen, and sulfur on any (wet, dry or ash free) basis, respectively.
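A direct transcription of Dulong's formula; inputs are mass fractions from an ultimate analysis, and the example coal composition is illustrative, not a measured sample.

```python
# Dulong's formula transcribed directly; inputs are mass fractions from an
# ultimate analysis (wet, dry, or ash-free basis, per the text above).
def dulong_hhv(m_c, m_h, m_o, m_s=0.0):
    """Higher heating value in kJ/g (equivalently MJ/kg)."""
    return 33.87 * m_c + 122.3 * (m_h - m_o / 8.0) + 9.4 * m_s

# Illustrative composition for a bituminous coal: 75% C, 5% H, 8% O, 1% S.
hhv = dulong_hhv(m_c=0.75, m_h=0.05, m_o=0.08, m_s=0.01)
print(f"HHV ~ {hhv:.1f} kJ/g ({hhv:.1f} MJ/kg)")
```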
Higher heating value
The higher heating value (HHV; gross energy, upper heating value, gross calorific value GCV, or higher calorific value; HCV) indicates the upper limit of the available thermal energy produced by a complete combustion of fuel. It is measured as a unit of energy per unit mass or volume of substance. The HHV is determined by bringing all the products of combustion back to the original pre-combustion temperature, including condensing any vapor produced. Such measurements often use a standard temperature of 25 °C. This is the same as the thermodynamic heat of combustion since the enthalpy change for the reaction assumes a common temperature of the compounds before and after combustion, in which case the water produced by combustion is condensed to a liquid. The higher heating value takes into account the latent heat of vaporization of water in the combustion products, and is useful in calculating heating values for fuels where condensation of the reaction products is practical (e.g., in a gas-fired boiler used for space heat). In other words, HHV assumes all the water component is in liquid state at the end of combustion (in product of combustion) and that heat delivered at temperatures below 150 °C can be put to use.
Lower heating value
The lower heating value (LHV; net calorific value; NCV, or lower calorific value; LCV) is another measure of available thermal energy produced by a combustion of fuel, measured as a unit of energy per unit mass or volume of substance. In contrast to the HHV, the LHV considers energy losses such as the energy used to vaporize water, although its exact definition is not uniformly agreed upon. One definition is simply to subtract the heat of vaporization of the water from the higher heating value. This treats any H2O formed as a vapor that is released as a waste. The energy required to vaporize the water is therefore lost.
LHV calculations assume that the water component of a combustion process is in vapor state at the end of combustion, as opposed to the higher heating value (HHV) (a.k.a. gross calorific value or gross CV) which assumes that all of the water in a combustion process is in a liquid state after a combustion process.
Another definition of the LHV is the amount of heat released when the products are cooled to 150 °C. This means that the latent heat of vaporization of water and other reaction products is not recovered. It is useful in comparing fuels where condensation of the combustion products is impractical, or heat at a temperature below 150 °C cannot be put to use.
One definition of lower heating value, adopted by the American Petroleum Institute (API), uses a reference temperature of 60 °F (15.56 °C).
Another definition, used by Gas Processors Suppliers Association (GPSA) and originally used by API (data collected for API research project 44), is the enthalpy of all combustion products minus the enthalpy of the fuel at the reference temperature (API research project 44 used 25 °C. GPSA currently uses 60 °F), minus the enthalpy of the stoichiometric oxygen (O2) at the reference temperature, minus the heat of vaporization of the vapor content of the combustion products.
The definition in which the combustion products are all returned to the reference temperature is more easily calculated from the higher heating value than when using other definitions and will in fact give a slightly different answer.
Gross heating value
Gross heating value accounts for water in the exhaust leaving as vapor, as does LHV, but gross heating value also includes liquid water in the fuel prior to combustion. This value is important for fuels like wood or coal, which will usually contain some amount of water prior to burning.
Measuring heating values
The higher heating value is experimentally determined in a bomb calorimeter. The combustion of a stoichiometric mixture of fuel and oxidizer (e.g. two moles of hydrogen and one mole of oxygen) in a steel container at 25 °C is initiated by an ignition device and the reactions allowed to complete. When hydrogen and oxygen react during combustion, water vapor is produced. The vessel and its contents are then cooled to the original 25 °C and the higher heating value is determined as the heat released between identical initial and final temperatures.
When the lower heating value (LHV) is determined, cooling is stopped at 150 °C and the reaction heat is only partially recovered. The limit of 150 °C is based on acid gas dew-point.
Note: Higher heating value (HHV) is calculated with the product of water being in liquid form while lower heating value (LHV) is calculated with the product of water being in vapor form.
Relation between heating values
The difference between the two heating values depends on the chemical composition of the fuel. In the case of pure carbon or carbon monoxide, the two heating values are almost identical, the difference being the sensible heat content of carbon dioxide between 150 °C and 25 °C (sensible heat exchange causes a change of temperature, while latent heat is added or subtracted for phase transitions at constant temperature. Examples: heat of vaporization or heat of fusion). For hydrogen, the difference is much more significant as it includes the sensible heat of water vapor between 150 °C and 100 °C, the latent heat of condensation at 100 °C, and the sensible heat of the condensed water between 100 °C and 25 °C. In all, the higher heating value of hydrogen is 18.2% above its lower heating value (142MJ/kg vs. 120MJ/kg). For hydrocarbons, the difference depends on the hydrogen content of the fuel. For gasoline and diesel the higher heating value exceeds the lower heating value by about 10% and 7%, respectively, and for natural gas about 11%.
A common method of relating HHV to LHV is:
HHV = LHV + Hv × (nH2O,out / nfuel,in)
where Hv is the heat of vaporization of water, nH2O,out is the number of moles of water vaporized and nfuel,in is the number of moles of fuel combusted.
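A worked example of this relation for methane (CH4 + 2 O2 → CO2 + 2 H2O), using rounded textbook values for the molar HHV of methane and the heat of vaporization of water:

```python
# Per mole of fuel: HHV = LHV + Hv * (n_H2O,out / n_fuel,in).
# Methane burns as CH4 + 2 O2 -> CO2 + 2 H2O: two moles of water per mole
# of fuel. Hv and HHV below are rounded textbook figures.
HV_WATER = 44.0  # molar heat of vaporization of water at 25 C, kJ/mol
HHV_CH4 = 890.0  # higher heating value of methane, kJ/mol

n_h2o_per_fuel = 2.0
lhv_ch4 = HHV_CH4 - HV_WATER * n_h2o_per_fuel

print(f"LHV(CH4) ~ {lhv_ch4:.0f} kJ/mol "
      f"({100 * lhv_ch4 / HHV_CH4:.1f}% of HHV)")
```

With these figures the LHV comes out near 802 kJ/mol, about 90% of the HHV, consistent with the roughly 11% HHV-LHV gap quoted above for natural gas.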
Most applications that burn fuel produce water vapor, which is unused and thus wastes its heat content. In such applications, the lower heating value must be used to give a 'benchmark' for the process.
However, for true energy calculations in some specific cases, the higher heating value is correct. This is particularly relevant for natural gas, whose high hydrogen content produces much water, when it is burned in condensing boilers and power plants with flue-gas condensation that condense the water vapor produced by combustion, recovering heat which would otherwise be wasted.
Usage of terms
Engine manufacturers typically rate their engines fuel consumption by the lower heating values since the exhaust is never condensed in the engine, and doing this allows them to publish more attractive numbers than are used in conventional power plant terms. The conventional power industry had used HHV (high heat value) exclusively for decades, even though virtually all of these plants did not condense exhaust either. American consumers should be aware that the corresponding fuel-consumption figure based on the higher heating value will be somewhat higher.
The difference between HHV and LHV definitions causes endless confusion when quoters do not bother to state the convention being used, since there is typically a 10% difference between the two methods for a power plant burning natural gas. For simply benchmarking part of a reaction the LHV may be appropriate, but HHV should be used for overall energy efficiency calculations if only to avoid confusion, and in any case, the value or convention should be clearly stated.
Accounting for moisture
Both HHV and LHV can be expressed in terms of AR (all moisture counted), MF and MAF (only water from combustion of hydrogen). AR, MF, and MAF are commonly used for indicating the heating values of coal:
AR (as received) indicates that the fuel heating value has been measured with all moisture- and ash-forming minerals present.
MF (moisture-free) or dry indicates that the fuel heating value has been measured after the fuel has been dried of all inherent moisture but still retaining its ash-forming minerals.
MAF (moisture- and ash-free) or DAF (dry and ash-free) indicates that the fuel heating value has been measured in the absence of inherent moisture- and ash-forming minerals.
Heat of combustion tables
Note
There is no difference between the lower and higher heating values for the combustion of carbon, carbon monoxide and sulfur since no water is formed during the combustion of those substances.
BTU/lb values are calculated from MJ/kg (1 MJ/kg = 430 BTU/lb).
Higher heating values of natural gases from various sources
The International Energy Agency reports the following typical higher heating values per Standard cubic metre of gas:
Algeria: 39.57MJ/Sm3
Bangladesh: 36.00MJ/Sm3
Canada: 39.00MJ/Sm3
China: 38.93MJ/Sm3
Indonesia: 40.60MJ/Sm3
Iran: 39.36MJ/Sm3
Netherlands: 33.32MJ/Sm3
Norway: 39.24MJ/Sm3
Pakistan: 34.90MJ/Sm3
Qatar: 41.40MJ/Sm3
Russia: 38.23MJ/Sm3
Saudi Arabia: 38.00MJ/Sm3
Turkmenistan: 37.89MJ/Sm3
United Kingdom: 39.71MJ/Sm3
United States: 38.42MJ/Sm3
Uzbekistan: 37.89MJ/Sm3
The lower heating value of natural gas is normally about 90% of its higher heating value. This table is in Standard cubic metres (1atm, 15°C), to convert to values per Normal cubic metre (1atm, 0°C), multiply above table by 1.0549.
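The stated conversion factor is just the ratio of the two reference temperatures on the absolute scale; a short check, using the United States entry from the table above:

```python
# The 1.0549 factor is the ratio of the two reference temperatures on the
# absolute scale: the same gas sample occupies less volume at 0 C than at
# 15 C, so the per-volume heating value is higher per Nm3.
T_STANDARD = 288.15  # K, "Standard" reference (15 C)
T_NORMAL = 273.15    # K, "Normal" reference (0 C)

factor = T_STANDARD / T_NORMAL  # ~1.0549

hhv_us = 38.42  # MJ/Sm3, United States entry from the table above
print(f"factor = {factor:.4f}")
print(f"{hhv_us} MJ/Sm3 = {hhv_us * factor:.2f} MJ/Nm3")
```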
See also
Adiabatic flame temperature
Cost of electricity by source
Electrical efficiency
Energy content of fuel
Energy conversion efficiency
Energy density
Energy value of coal
Exothermic reaction
Figure of merit
Fire
Food energy
Internal energy
ISO 15971
Mechanical efficiency
Thermal efficiency
Wobbe index: heat density
References
Further reading
External links
NIST Chemistry WebBook
Engineering thermodynamics
Combustion
Fuels
Thermodynamic properties
Nuclear physics
Thermochemistry | Heat of combustion | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,887 | [
"Thermodynamic properties",
"Physical quantities",
"Thermochemistry",
"Chemical energy sources",
"Engineering thermodynamics",
"Quantity",
"Combustion",
"Thermodynamics",
"Fuels",
"Nuclear physics",
"Mechanical engineering"
] |
13,901,387 | https://en.wikipedia.org/wiki/Phosphogluconate%20dehydrogenase%20%28decarboxylating%29 | In enzymology, a phosphogluconate dehydrogenase (decarboxylating) () is an enzyme that catalyzes the chemical reaction
6-phospho-D-gluconate + NADP+ ⇌ D-ribulose 5-phosphate + CO2 + NADPH
Thus, the two substrates of this enzyme are 6-phospho-D-gluconate and NADP+, whereas its 3 products are D-ribulose 5-phosphate, CO2, and NADPH.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 6-phospho-D-gluconate:NADP+ 2-oxidoreductase (decarboxylating). Other names in common use include phosphogluconic acid dehydrogenase, 6-phosphogluconic dehydrogenase, 6-phosphogluconic carboxylase, 6-phosphogluconate dehydrogenase (decarboxylating), and 6-phospho-D-gluconate dehydrogenase. This enzyme participates in the pentose phosphate pathway. It employs one cofactor, manganese.
Enzyme Structure
The general structure, as well as several critical residues, on 6-phosphogluconate dehydrogenase appear to be well conserved over various species. The enzyme is a dimer, with each subunit containing three domains. The N-terminal coenzyme binding domain contains a Rossmann fold with additional α/β units. The second domain consists of a number of alpha helical structures, and the C-terminal domain consists of a short tail. The tails of the two subunits interact with each other to form a mobile lid on the enzyme's active site.
As of late 2007, 11 structures had been solved for this class of enzymes and deposited in the Protein Data Bank (PDB).
Enzyme Mechanism
The conversion of 6-phosphogluconate and NADP to ribulose 5-phosphate, carbon dioxide, and NADPH is believed to follow a sequential mechanism with ordered product release. 6-phosphogluconate is first oxidized to 3-keto-6-phosphogluconate and NADPH is formed and released. Then, the intermediate is decarboxylated, yielding a 1,2-enediol of ribulose 5-phosphate, which tautomerizes to form ribulose 5-phosphate. High levels of NADPH are believed to inhibit the enzyme, while 6-phosphogluconate acts to activate the enzyme.
Biological Function
6-phosphogluconate dehydrogenase is involved in the production of ribulose 5-phosphate, which is used in nucleotide synthesis, and functions in the pentose phosphate pathway as the main generator of cellular NADPH.
Disease Relevance
Since NADPH is required by both thioredoxin reductase and glutathione reductase to reduce oxidized thioredoxin and glutathione, 6-phosphogluconate dehydrogenase is believed to be involved in protecting cells from oxidative damage. Several studies have linked oxidative stress to diseases such as Alzheimer's disease, as well as cancer. These studies have found phosphogluconate dehydrogenase activity to be up-regulated, both in tumor cells and in relevant cortical regions of Alzheimer's patients' brains, most likely as a compensatory response to highly oxidative environments.
Recently, phosphogluconate dehydrogenase has been posited as a potential drug target for African sleeping sickness (trypanosomiasis). The pentose phosphate pathway protects the trypanosomes from oxidative stress via the generation of NADPH and provides carbohydrate intermediates used in nucleotide synthesis. Structural differences between mammalian and trypanosome 6-phosphogluconate dehydrogenase have allowed for the development of selective inhibitors of the enzyme. Phosphorylated carbohydrate substrate and transition state analogues, non-carbohydrate substrate analogues and triphenylmethane-based compounds are currently being explored.
References
EC 1.1.1
NADPH-dependent enzymes
Manganese enzymes
Enzymes of known structure
Pentose phosphate pathway | Phosphogluconate dehydrogenase (decarboxylating) | [
"Chemistry"
] | 975 | [
"Carbohydrate metabolism",
"Pentose phosphate pathway"
] |
13,901,570 | https://en.wikipedia.org/wiki/Testosterone%2017b-dehydrogenase%20%28NADP%2B%29 | Testosterone 17beta-dehydrogenase (NADP+) (, 17-ketoreductase, NADP-dependent testosterone-17beta-oxidoreductase, testosterone 17beta-dehydrogenase (NADP)) is an enzyme with systematic name 17beta-hydroxysteroid:NADP+ 17-oxidoreductase. This enzyme catalyses the following chemical reaction
testosterone + NADP+ ⇌ androstenedione + NADPH + H+
The enzyme also oxidizes 3-hydroxyhexobarbital to 3-oxohexobarbital.
References
External links
EC 1.1.1
Steroid hormone biosynthesis | Testosterone 17b-dehydrogenase (NADP+) | [
"Chemistry",
"Biology"
] | 149 | [
"Steroid hormone biosynthesis",
"Biosynthesis"
] |
13,903,462 | https://en.wikipedia.org/wiki/Condenser%20%28heat%20transfer%29 | In systems involving heat transfer, a condenser is a heat exchanger used to condense a gaseous substance into a liquid state through cooling. In doing so, the latent heat is released by the substance and transferred to the surrounding environment. Condensers are used for efficient heat rejection in many industrial systems. Condensers can be made according to numerous designs and come in many sizes ranging from rather small (hand-held) to very large (industrial-scale units used in plant processes). For example, a refrigerator uses a condenser to get rid of heat extracted from the interior of the unit to the outside air.
Condensers are used in air conditioning, industrial chemical processes such as distillation, steam power plants, and other heat-exchange systems. The use of cooling water or surrounding air as the coolant is common in many condensers.
History
The earliest laboratory condenser, a "Gegenstromkühler" (counter-flow condenser), was invented in 1771 by the Swedish-German chemist Christian Weigel. By the mid-19th century, German chemist Justus von Liebig would provide his own improvements on the preceding designs of Weigel and Johann Friedrich August Göttling, with the device becoming known as the Liebig condenser.
Principle of operation
A condenser is designed to transfer heat from a working fluid (e.g. water in a steam power plant) to a secondary fluid or the surrounding air. The condenser relies on the efficient heat transfer that occurs during phase changes, in this case during the condensation of a vapor into a liquid. The vapor typically enters the condenser at a temperature above that of the secondary fluid. As the vapor cools, it reaches the saturation temperature, condenses into liquid, and releases large quantities of latent heat. As this process occurs along the condenser, the quantity of vapor decreases and the quantity of liquid increases; at the outlet of the condenser, only liquid remains. Some condenser designs contain an additional length to subcool this condensed liquid below the saturation temperature.
Countless variations exist in condenser design, with design variables including the working fluid, the secondary fluid, the geometry, and the material. Common secondary fluids include water, air, refrigerants, or phase-change materials.
Condensers have two significant design advantages over other cooling technologies:
Heat transfer by latent heat is much more efficient than heat transfer by sensible heat only
The temperature of the working fluid stays relatively constant during condensation, which maximizes the temperature difference between the working and secondary fluid.
Examples of condensers
Surface condenser
A surface condenser is one in which condensing medium and vapors are physically separated and used when direct contact is not desired. It is a shell and tube heat exchanger installed at the outlet of every steam turbine in thermal power stations. Commonly, the cooling water flows through the tube side and the steam enters the shell side where the condensation occurs on the outside of the heat transfer tubes. The condensate drips down and collects at the bottom, often in a built-in pan called a hotwell. The shell side often operates at a vacuum or partial vacuum, produced by the difference in specific volume between the steam and condensate. Conversely, the vapor can be fed through the tubes with the coolant water or air flowing around the outside.
Chemistry
In chemistry, a condenser is the apparatus that cools hot vapors, causing them to condense into a liquid. Examples include the Liebig condenser, Graham condenser, and Allihn condenser. This is not to be confused with a condensation reaction which links two fragments into a single molecule by an addition reaction and an elimination reaction.
In laboratory distillation, reflux, and rotary evaporators, several types of condensers are commonly used. The Liebig condenser is simply a straight tube within a cooling water jacket and is the simplest (and relatively least expensive) form of condenser. The Graham condenser is a spiral tube within a water jacket, and the Allihn condenser has a series of large and small constrictions on the inside tube, each increasing the surface area upon which the vapor constituents may condense. Being more complex shapes to manufacture, these latter types are also more expensive to purchase. These three types of condensers are laboratory glassware items since they are typically made of glass. Commercially available condensers usually are fitted with ground glass joints and come in standard lengths of 100, 200, and 400 mm. Air-cooled condensers are unjacketed, while water-cooled condensers contain a jacket for the water.
Industrial distillation
Larger condensers are also used in industrial-scale distillation processes to cool distilled vapor into liquid distillate. Commonly, the coolant flows through the tube side and distilled vapor through the shell side with distillate collecting at or flowing out the bottom.
Air conditioning
A condenser unit used in central air conditioning systems typically has a heat exchanger section to cool down and condense incoming refrigerant vapor into liquid, a compressor to raise the pressure of the refrigerant and move it along, and a fan for blowing outside air through the heat exchanger section to cool the refrigerant inside. A typical configuration of such a condenser unit is as follows: The heat exchanger section wraps around the sides of the unit with the compressor inside. In this heat exchanger section, the refrigerant goes through multiple tube passes, which are surrounded by heat transfer fins through which cooling air can circulate from outside to inside the unit. This also increases the surface area. There is a motorized fan inside the condenser unit near the top, which is covered by some grating to keep any objects from accidentally falling inside on the fan. The fan is used to pull outside cooling air in through the heat exchanger section at the sides and blow it out the top through the grating. These condenser units are located on the outside of the building they are trying to cool, with tubing between the unit and building, one for vapor refrigerant entering and another for liquid refrigerant leaving the unit. Of course, an electric power supply is needed for the compressor and fan inside the unit.
Direct-contact
In a direct-contact condenser, hot vapor and cool liquid are introduced into a vessel and allowed to mix directly, rather than being separated by a barrier such as the wall of a heat exchanger tube. The vapor gives up its latent heat and condenses to a liquid, while the liquid absorbs this heat and undergoes a temperature rise. The entering vapor and liquid typically contain a single condensable substance, such as a water spray being used to cool air and adjust its humidity.
Equation
For an ideal single-pass condenser whose coolant has constant density, constant heat capacity, linear enthalpy over the temperature range, perfect cross-sectional heat transfer, and zero longitudinal heat transfer, and whose tubing has constant perimeter, constant thickness, and constant heat conductivity, and whose condensible fluid is perfectly mixed and at a constant temperature, the coolant temperature varies along its tube according to:
T(x) = T_hot − (T_hot − T(0)) e^(−NTU·x/L)
where:
x is the distance from the coolant inlet
T(x) is the coolant temperature, and T(0) the coolant temperature at its inlet
T_hot is the hot fluid's temperature
NTU = G/(ṁcp) is the number of transfer units
ṁ is the coolant's mass (or other) flow rate
cp is the coolant's heat capacity at constant pressure per unit mass (or other)
h is the heat transfer coefficient of the coolant tube
P is the perimeter of the coolant tube
G = hPL is the heat conductance of the coolant tube (often denoted UA)
L is the length of the coolant tube
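A numerical sketch of this profile, with illustrative (not design) values for a water-cooled tube condensing steam:

```python
# Coolant temperature along an ideal single-pass condenser tube:
#   T(x) = T_hot - (T_hot - T_in) * exp(-NTU * x / L),  NTU = h*P*L/(mdot*cp)
# All numbers below are illustrative, not taken from any real design.
import math

T_hot = 100.0  # condensing fluid temperature, degrees C (e.g. steam)
T_in = 20.0    # coolant inlet temperature, degrees C
h = 5000.0     # heat transfer coefficient, W/(m^2 K)
P = 0.06       # tube perimeter, m
L = 3.0        # tube length, m
mdot = 0.2     # coolant mass flow rate, kg/s
cp = 4186.0    # specific heat of water, J/(kg K)

NTU = h * P * L / (mdot * cp)  # number of transfer units, G/(mdot*cp)

def coolant_temperature(x):
    """Coolant temperature a distance x along the tube."""
    return T_hot - (T_hot - T_in) * math.exp(-NTU * x / L)

print(f"NTU = {NTU:.2f}")
for x in (0.0, L / 2, L):
    print(f"x = {x:.1f} m -> T = {coolant_temperature(x):.1f} C")
```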
See also
Condenser (laboratory)
Air well (condenser)
References
Heating, ventilation, and air conditioning
Gas technologies
Heat exchangers
Heat transfer
Laboratory glassware
Steam turbine technology | Condenser (heat transfer) | [
"Physics",
"Chemistry",
"Engineering"
] | 1,669 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Chemical equipment",
"Heat exchangers",
"Thermodynamics"
] |
13,907,939 | https://en.wikipedia.org/wiki/Z-Push | Z-Push (presumably Z is for Zarafa) is a FOSS implementation of the Microsoft Exchange ActiveSync protocol which is used to synchronize email, personal contacts and other items between a central server and a mobile device.
Z-Push enables any PHP and non-PHP based groupware package to become fully sync-able with any ActiveSync compliant device with an appropriate backend.
Currently, Z-Push includes several backends: the IMAP and Maildir backends for e-mail synchronization, the vCard and CardDAV backends for contact synchronization, CalDAV for calendar synchronization, a sticky-notes backend for notes synchronization, and one for the commercial Kopano and Zarafa packages, allowing full synchronization of e-mail, calendar, contacts, and tasks.
There is also a third-party project that implements a Zimbra backend, allowing Z-Push to be used with a ZCS server (including the open-source edition).
Since version 2.3.0, released in July 2016, significant performance improvements have been achieved, as well as significantly lower memory usage. Connecting to Outlook 2013 and 2016 via EAS is also officially supported. With the optional Kopano Outlook Extension (available only for paid subscribers of Zarafa/Kopano), additional Outlook features are enabled such as Out of Office replies and opening of shared and public folders.
The future sustainability of the Z-Push project was in question after Kopano (formerly Zarafa) announced that they would no longer be supporting the project. Z-Push now has a new maintainer from the community for future support of the project.
Technical background and architecture
The ActiveSync protocol is a binary XML (WBXML) protocol carried over HTTP. The protocol is specifically designed with efficient use from mobile devices in mind. As such, the protocol is optimized for low-bandwidth, high-latency connections. The protocol is also designed to minimize the number of whole request round trips. This means that the protocol can use many of the same techniques used to speed up access to websites. This is as opposed to IMAP or SMTP, which are two-way handshaking TCP protocols that can be both quite slow and costly in terms of battery consumption across high-latency connections.
Getting ActiveSync instant push notification on mobile devices – particularly iOS devices – may well be the primary reason some people wish to use ActiveSync. However this will introduce significant additional server loading, so it should be carefully considered and monitored whether this is actually a desirable feature on the server.
The client (i.e. phone) uses long polling HTTP with 30 minute timeouts on the HTTP requests. This means if no mail arrives there is no traffic on the TCP channel and the phone radios remain in low power receive only mode. However, server-side these HTTP requests are served as long polling web requests and processed in PHP. When the request comes in it consumes the resources of a web request - see further discussion below - and keeps this open for up to 30 minutes, or until a new message arrives, or the client device is switched off.
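Schematically, a long-polling client looks like the loop below. This is a generic sketch in Python (requiring the third-party requests library), not Z-Push or ActiveSync code: real clients issue the binary Ping command rather than a plain GET, and the URL here is hypothetical.

```python
# Generic long-polling loop of the kind described above.
import requests

POLL_URL = "https://mail.example.com/Microsoft-Server-ActiveSync"  # hypothetical

def handle_changes(body: bytes) -> None:
    """Placeholder for reacting to server-pushed changes."""
    print(f"server pushed {len(body)} bytes of changes")

def long_poll_forever() -> None:
    while True:
        try:
            # Park the request; the server holds it open (up to ~30 minutes
            # in the scheme described above) until something changes.
            response = requests.get(POLL_URL, timeout=30 * 60)
            if response.ok:
                handle_changes(response.content)
        except requests.Timeout:
            # Nothing happened within the window; immediately re-issue.
            continue
```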
References
External links
Z-Push project website
Download Old Z-Push 2.2.x
Install Z-Push from Repo
Zimbra Backend for Z-Push
Email
Wireless email
Data synchronization
Open standards
Groupware
Synchronization
Software using the GNU Affero General Public License | Z-Push | [
"Engineering"
] | 718 | [
"Telecommunications engineering",
"Synchronization"
] |
4,400,311 | https://en.wikipedia.org/wiki/Rydberg%20molecule | A Rydberg molecule is an electronically excited chemical species. Electronically excited molecular states are generally quite different in character from electronically excited atomic states. However, particularly for highly electronically excited molecular systems, the ionic core interaction with an excited electron can take on the general aspects of the interaction between the proton and the electron in the hydrogen atom. The spectroscopic assignment of these states follows the Rydberg formula, named after the Swedish physicist Johannes Rydberg, and they are called Rydberg states of molecules. Rydberg series are associated with partially removing an electron from the ionic core.
Each Rydberg series of energies converges on an ionization energy threshold associated with a particular ionic core configuration. These quantized Rydberg energy levels can be associated with the quasiclassical Bohr atomic picture. The closer the energy lies to the ionization threshold, the higher the principal quantum number and the smaller the energy difference between near-threshold Rydberg states. As the electron is promoted to higher energy levels in a Rydberg series, the spatial excursion of the electron from the ionic core increases and the system becomes more like the Bohr quasiclassical picture.
The Rydberg states of molecules with low principal quantum numbers can interact with the other excited electronic states of the molecule. This can cause shifts in energy. The assignment of molecular Rydberg states often involves following a Rydberg series from intermediate to high principal quantum numbers. The energy of Rydberg states can be refined by including a correction called the quantum defect in the Rydberg formula. The quantum defect correction can be associated with the presence of a distributed ionic core.
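In standard notation, the energy of the nth member of a series converging on an ionization threshold Ei is written with a series-dependent quantum defect (a textbook expression, shown here for reference):

```latex
% Rydberg series member of principal quantum number n and orbital angular
% momentum l, converging on ionization threshold E_i; delta_l is the
% quantum defect and R the Rydberg constant appropriate to the ionic core.
\[
  E_{n\ell} = E_{\mathrm{i}} - \frac{hcR}{\left(n - \delta_{\ell}\right)^{2}}
\]
```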
The experimental study of molecular Rydberg states has been conducted with traditional methods for generations. However, the development of laser-based techniques such as Resonance Ionization Spectroscopy has allowed relatively easy access to these Rydberg molecules as intermediates. This is particularly true for Resonance Enhanced Multiphoton Ionization (REMPI) spectroscopy, since multiphoton processes involve different selection rules from single photon processes. The study of high principal quantum number Rydberg states has spawned a number of spectroscopic techniques. These "near threshold Rydberg states" can have long lifetimes, particularly for the higher orbital angular momentum states that do not interact strongly with the ionic core.
Rydberg molecules can condense to form clusters of Rydberg matter which has an extended lifetime against de-excitation.
Dihelium (He2*) was the first known Rydberg molecule.
Other types
In 2009, a different kind of Rydberg molecule was finally created by researchers from the University of Stuttgart. There, the interaction between a Rydberg atom and a ground state atom leads to a novel bond type. Two rubidium atoms were used to create the molecule which survived for 18 microseconds.
In 2015, a 'trilobite' Rydberg molecule was observed by researchers from the University of Oklahoma. This molecule was theorized in 2000 and is characterized by an electron density distribution that resembles the shape of a trilobite when plotted in cylindrical coordinates. These molecules have lifetimes of tens of microseconds and electric dipole moments of up to 2000 debye.
In 2016, a butterfly Rydberg molecule was observed by a collaboration involving researchers from the Kaiserslautern University of Technology and Purdue University. A butterfly Rydberg molecule is a weak pairing of a Rydberg atom and a ground state atom that is enhanced by the presence of a shape resonance in the scattering between the Rydberg electron and the ground state atom. This new kind of atomic bond was theorized in 2002 and is characterized by an electron density distribution that resembles the shape of a butterfly. As a consequence of the unconventional binding mechanism, butterfly Rydberg molecules show peculiar properties such as multiple vibrational ground states at different bond lengths and giant dipole moments in excess of 500 debye.
See also
Rydberg atom
Rydberg matter
References
Further reading
Molecular Spectra and Molecular Structure, Vol. I, II and III Gerhard Herzberg, Krieger Pub. Co, revised ed. 1991.
Atoms and Molecules: An Introduction for Students of Physical Chemistry, Martin Karplus and Richard N. Porter, Benjamin & Company, Inc., 1970.
Atomic physics
Spectroscopy
Atomic, molecular, and optical physics | Rydberg molecule | [
"Physics",
"Chemistry"
] | 853 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Spectroscopy",
" and optical physics"
] |
4,402,244 | https://en.wikipedia.org/wiki/Haag%E2%80%93%C5%81opusza%C5%84ski%E2%80%93Sohnius%20theorem | In theoretical physics, the Haag–Łopuszański–Sohnius theorem states that if both commutating and anticommutating generators are considered, then the only way to nontrivially mix spacetime and internal symmetries is through supersymmetry. The anticommutating generators must be spin-1/2 spinors which can additionally admit their own internal symmetry known as R-symmetry. The theorem is a generalization of the Coleman–Mandula theorem to Lie superalgebras. It was proved in 1975 by Rudolf Haag, Jan Łopuszański, and Martin Sohnius as a response to the development of the first supersymmetric field theories by Julius Wess and Bruno Zumino in 1974.
History
During the 1960s, a set of theorems investigating how internal symmetries can be combined with spacetime symmetries were proved, with the most general being the Coleman–Mandula theorem. It showed that the Lie group symmetry of an interacting theory must necessarily be a direct product of the Poincaré group with some compact internal group. Unaware of this theorem, during the early 1970s a number of authors independently came up with supersymmetry, seemingly in contradiction with the theorem, since in supersymmetry some generators do transform non-trivially under spacetime transformations.
In 1974 Jan Łopuszański visited Karlsruhe from Wrocław shortly after Julius Wess and Bruno Zumino constructed the first supersymmetric quantum field theory, the Wess–Zumino model. Speaking to Wess, Łopuszański was interested in figuring out how these new theories managed to overcome the Coleman–Mandula theorem. While Wess was too busy to work with Łopuszański, his doctoral student Martin Sohnius was available. Over the next few weeks they devised a proof of their theorem after which Łopuszański went to CERN where he worked with Rudolf Haag to significantly refine the argument and also extend it to the massless case. Later, after Łopuszański went back to Wrocław, Sohnius went to CERN to finish the paper with Haag, which was published in 1975.
Theorem
The main assumptions of the Coleman–Mandula theorem are that the theory includes an S-matrix with analytic scattering amplitudes such that any two-particle state must undergo some reaction at almost all energies and scattering angles. Furthermore, there must only be a finite number of particle types below any mass, disqualifying massless particles. The theorem then restricts the Lie algebra of the theory to be a direct sum of the Poincaré algebra with some internal symmetry algebra.
The Haag–Łopuszański–Sohnius theorem is based on the same assumptions, except that it allows additional anticommuting generators, elevating the Lie algebra to a Lie superalgebra. In four dimensions, the theorem states that the only nontrivial anticommuting generators that can be added are a set of pairs of supercharges $Q^A_\alpha$ and $\bar Q^A_{\dot\alpha}$, indexed by $A = 1, \dots, \mathcal N$, which commute with the momentum generator and transform as left-handed and right-handed Weyl spinors. The undotted and dotted index notation, known as Van der Waerden notation, distinguishes left-handed and right-handed Weyl spinors from each other. Generators of other spin, such as spin-3/2 or higher, are disallowed by the theorem. In a basis where $\{Q^A_\alpha, \bar Q^B_{\dot\beta}\} = 2\sigma^\mu_{\alpha\dot\beta} P_\mu \delta^{AB}$, these supercharges satisfy

$$\{Q^A_\alpha, Q^B_\beta\} = \epsilon_{\alpha\beta} Z^{AB},$$
where $Z^{AB} = -Z^{BA}$ are known as central charges, which commute with all generators of the superalgebra. Together with the Poincaré algebra, this Lie superalgebra is known as the super-Poincaré algebra. Since four-dimensional Minkowski spacetime also admits Majorana spinors as fundamental spinor representations, the algebra can equivalently be written in terms of four-component Majorana spinor supercharges, with the algebra expressed in terms of gamma matrices and the charge conjugation operator rather than the Pauli matrices used for the two-component Weyl spinors.
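Collected in one place, the fermionic part of the \(\mathcal N\)-extended super-Poincaré algebra then reads as follows, in a common convention (sign and normalization conventions vary between references):

```latex
\{Q^A_\alpha, \bar Q^B_{\dot\beta}\} = 2\,\sigma^\mu_{\alpha\dot\beta}\,P_\mu\,\delta^{AB},
\qquad
\{Q^A_\alpha, Q^B_\beta\} = \epsilon_{\alpha\beta}\,Z^{AB},
\qquad
[P_\mu, Q^A_\alpha] = 0,
```

with antisymmetric central charges \(Z^{AB} = -Z^{BA}\).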
The supercharges can also admit an additional Lie algebra symmetry known as R-symmetry, whose generators $B_r$ satisfy

$$[Q^A_\alpha, B_r] = (S_r)^A{}_B \, Q^B_\alpha,$$

where $(S_r)^A{}_B$ are Hermitian representation matrices of the generators in the $\mathcal N$-dimensional representation of the R-symmetry group. For $\mathcal N = 1$ the central charge must vanish and the R-symmetry is given by a $U(1)$ group, while for extended supersymmetry the central charges need not vanish and the R-symmetry is a $U(\mathcal N)$ group.
If massless particles are allowed, then the algebra can additionally be extended using conformal generators: the dilatation generator $D$ and the special conformal transformations generator $K_\mu$. For $\mathcal N$ supercharges, there must also be the same number of superconformal generators $S^A_\alpha$ which satisfy

$$\{S^A_\alpha, \bar S^B_{\dot\beta}\} = 2\sigma^\mu_{\alpha\dot\beta} K_\mu \delta^{AB},$$
with both the supercharges and the superconformal generators being charged under an R-symmetry. This algebra is an example of a superconformal algebra, which in this four-dimensional case is denoted by $\mathfrak{su}(2,2|\mathcal N)$. Unlike for non-conformal supersymmetric algebras, R-symmetry is always present in superconformal algebras.
Limitations
The Haag–Łopuszański–Sohnius theorem was originally derived in four dimensions, however the result that supersymmetry is the only nontrivial extension to spacetime symmetries holds in all dimensions greater than two. The form of the supersymmetry algebra however changes. Depending on the dimension, the supercharges can be Weyl, Majorana, Weyl–Majorana, or symplectic Weyl–Majorana spinors. Furthermore, R-symmetry groups differ according to the dimensionality and the number of supercharges. This superalgebra also only applies in Minkowski spacetime, being modified in other spacetimes. For example, there exists an extension to anti-de Sitter space for one or more supercharges, while an extension to de Sitter space only works if multiple supercharges are present.
In two or fewer dimensions the theorem breaks down. The reason for this is that analyticity of the scattering amplitudes can no longer hold since for example in two dimensions the only scattering is forward and backward scattering. The theorem also does not apply to discrete symmetries or to spontaneously broken symmetries since these are not symmetries at the level of the S-matrix.
See also
Supergravity
S-matrix
References
Supersymmetry
Quantum field theory
Theorems in quantum mechanics
No-go theorems | Haag–Łopuszański–Sohnius theorem | [
"Physics",
"Mathematics"
] | 1,320 | [
"Theorems in quantum mechanics",
"Quantum field theory",
"No-go theorems",
"Equations of physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Supersymmetry",
"Physics beyond the Standard Model",
"Symmetry",
"Physics theorems"
] |
4,402,513 | https://en.wikipedia.org/wiki/Ammonium%20uranyl%20carbonate | Ammonium uranyl carbonate (UO2CO3·2(NH4)2CO3) is known in the uranium processing industry as AUC and is also called uranyl ammonium carbonate. This compound is important as a component in the conversion process of uranium hexafluoride (UF6) to uranium dioxide (UO2). The ammonium uranyl carbonate is combined with steam and hydrogen at 500–600 °C to yield UO2. In another process aqueous uranyl nitrate, known as uranyl nitrate liquor (UNL) is treated with ammonium bicarbonate to form ammonium uranyl carbonate as a solid precipitate. This is separated from the solution, dried with methanol and then calcinated with hydrogen directly to UO2 to obtain a sinterable grade powder. The ex-AUC uranium dioxide powder is free-flowing, relatively coarse (10 μ) and porous with specific surface area in the range of 5 m2/g and suitable for direct pelletisation, avoiding the granulation step. Conversion to UO2 is often performed as the first stage of nuclear fuel fabrication.
The AUC process is followed in South Korea and Argentina. In the AUC route, calcination, reduction and stabilization are simultaneously carried out in a vertical fluidized bed reactor. In most countries, sinterable grade UO2 powder for nuclear fuel is obtained by the ammonium diuranate (ADU) process, which requires several more steps.
Ammonium uranyl carbonate is also one of the many forms called yellowcake; in this case it is the product obtained by the heap leach process.
References
Further reading
Ammonium compounds
Uranyl compounds
Carbonates
Nuclear materials | Ammonium uranyl carbonate | [
"Physics",
"Chemistry"
] | 363 | [
"Salts",
"Materials",
"Nuclear materials",
"Ammonium compounds",
"Matter"
] |
4,402,572 | https://en.wikipedia.org/wiki/Supersonic%20business%20jet | A supersonic business jet (SSBJ) is a business jet travelling above the speed of sound: a supersonic aircraft. Some manufacturers are designing or have been designing SSBJs, but none are currently available. Usually intended to transport about ten passengers, proposed SSBJs would be about the same size as subsonic business jets.
Only two commercial supersonic transports entered service: the Aérospatiale/British Aerospace Concorde and the Tupolev Tu-144. Both were designed with government subsidies and did not recoup development costs. They had high operating costs and high noise.
Some manufacturers believe these concerns can be addressed at a smaller scale, offering high speed transport for small groups of high-value passengers, executives or heads of state.
Current proposals include SAI Quiet Supersonic Transport and Spike S-512.
Former proposals include the Aerion SBJ, Aerion AS2, HyperMach SonicStar, Sukhoi-Gulfstream S-21 and Tupolev Tu-444.
Several companies, including Gulfstream Aerospace, work on sonic-boom mitigation technologies such as the Quiet Spike.
Timeline
In 1997, Dassault Aviation was considering a Mach 1.8 supersonic business jet powered by three non-afterburning engines derived from subsonic aircraft, with a cabin similar to the Falcon 50, capable of flying between Paris and New York.
With a MTOW and over of fuel, it would cover a range of 7,200 km (4,500 nmi).
In September 2004, the European Commission selected the HISAC High Speed Aircraft program, launched with Dassault in 2005 and evaluating the feasibility of a small supersonic aircraft.
By 2019, Dassault was reserved about the prospects for a supersonic business jet.
In January 2018, Vladimir Putin proposed a civil SSBJ variant of the Tu-160 bomber, for a potential market of 20-30 units in Russia alone at $100–120 million each.
UAC previously studied an SSBJ, displaying a scale model at the MAKS Air Show 2017; it was to be designed and built in seven years with an existing engine such as the NK-32 and a titanium airframe, with a limited production run at an expected price of $150 million per aircraft.
In August 2020, Virgin Galactic announced the design of a high speed delta wing aircraft for 9 to 19 people, targeting Mach 3 above , in partnership with Rolls-Royce plc for its propulsion.
Gallery
See also
References
External links
Business aircraft
Business jet | Supersonic business jet | [
"Physics"
] | 503 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
4,403,052 | https://en.wikipedia.org/wiki/Operating%20empty%20weight | Empty weight (EW) is the sum of the ‘as built’ manufacturer's empty weight (MEW), plus any standard items (SI) plus any operator items (OI), EW = MEW + SI + OI. The EW is calculated for each aircraft series and each unique configuration of an aircraft and is confirmed by periodically weighing it.
The "Operating empty weight" (OEW) is the sum of the empty weight and the crew plus their baggage.
Standard items include all structural modification or configuration orders that may have altered the MEW, including all fluids necessary for operation such as engine oil, engine coolant, water, hydraulic fluid and unusable fuel.
Operator items include fixed, optional equipment added by the operator for service reasons.
The weight added to the aircraft above its OEW for a given flight is variable and includes fuel for the flight and the cargo. Cargo depends upon the type of aircraft; i.e., passengers plus baggage for a transport or commuter airplane, materiel for a cargo airplane, stores for fighters/bombers and service loads such as meals and beverages. Fuel and cargo weights may alter the centre of gravity and flight performance, and require careful calculation before each flight.
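As a worked illustration of the weight build-up described above, the following is a minimal sketch; all numbers are invented placeholders, not figures for any real aircraft.

```python
# Illustrative weight build-up (all values are made-up placeholders, in kg).
MEW = 41_000             # manufacturer's empty weight, as built
standard_items = 1_200   # fluids, unusable fuel, mandated equipment
operator_items = 800     # operator-specific optional equipment

EW = MEW + standard_items + operator_items   # empty weight: EW = MEW + SI + OI

crew_and_baggage = 450
OEW = EW + crew_and_baggage                  # operating empty weight

# Per-flight variable load:
fuel = 18_000
payload = 15_000         # passengers plus baggage, or cargo/materiel/stores
takeoff_weight = OEW + fuel + payload

print(f"EW  = {EW} kg")
print(f"OEW = {OEW} kg")
print(f"TOW = {takeoff_weight} kg (must not exceed the aircraft's MTOW)")
```

The same per-flight arithmetic feeds the centre-of-gravity calculation mentioned above, since each added mass also has a location along the airframe.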
Aircraft purchase price by type is a close linear function of EW.
See also
Maximum takeoff weight
Aircraft gross weight
Zero-fuel weight
References
Aircraft weight measurements | Operating empty weight | [
"Physics",
"Engineering"
] | 281 | [
"Aircraft weight measurements",
"Mass",
"Matter",
"Aerospace engineering"
] |
4,403,462 | https://en.wikipedia.org/wiki/Safety%20and%20Health%20in%20Construction%20Convention%2C%201988 | Safety and Health in Construction Convention, 1988 is an International Labour Organization Convention.
It was established in 1988, with the preamble stating:
Ratifications
As of April 2024, the convention had been ratified by 34 states.
External links
Text
Ratifications.
Health treaties
International Labour Organization conventions
Occupational safety and health treaties
Treaties entered into force in 1991
Treaties concluded in 1988
Construction law
Treaties of Albania
Treaties of Algeria
Treaties of Belarus
Treaties of Belgium
Treaties of Bolivia
Treaties of Brazil
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of Czechoslovakia
Treaties of the Czech Republic
Treaties of Denmark
Treaties of the Dominican Republic
Treaties of Finland
Treaties of Gabon
Treaties of Guinea
Treaties of Germany
Treaties of the Hungarian People's Republic
Treaties of Ba'athist Iraq
Treaties of Italy
Treaties of Kazakhstan
Treaties of Lesotho
Treaties of Luxembourg
Treaties of Mexico
Treaties of Mongolia
Treaties of Montenegro
Treaties of Norway
Treaties of Panama
Treaties of Russia
Treaties of Serbia
Treaties of Slovakia
Treaties of Sweden
Treaties of Turkey
Treaties of Uruguay
Treaties of Guatemala
1988 in labor relations | Safety and Health in Construction Convention, 1988 | [
"Engineering"
] | 203 | [
"Construction",
"Construction law"
] |
4,403,651 | https://en.wikipedia.org/wiki/Sheep%E2%80%93goat%20hybrid | A sheep–goat hybrid (called a geep in popular media or sometimes a shoat) is a hybrid between a sheep and a goat.
While sheep and goats are similar and can be mated, they belong to different genera in the subfamily Caprinae of the family Bovidae. Sheep belong to the genus Ovis and have 54 chromosomes, while goats belong to the genus Capra and have 60 chromosomes. The offspring of a sheep–goat pairing is generally stillborn. Despite widespread shared pasturing of goats and sheep, hybrids are very rare, demonstrating the genetic distance between the two species. They are not to be confused with sheep–goat chimera, which are artificially created by combining the embryos of a goat and a sheep.
Characteristics
There is a long-standing belief in sheep–goat hybrids, which is presumably due to the animals' resemblance to each other. Some primitive varieties of sheep may be misidentified as goats. In Darwinism – An Exposition of the Theory of Natural Selection with Some of Its Applications (1889), Alfred Russel Wallace wrote:
[...] the following statement of Mr. Low: "It has been long known to shepherds, though questioned by naturalists, that the progeny of the cross between the sheep and goat is fertile. Breeds of this mixed race are numerous in the north of Europe." Nothing appears to be known of such hybrids either in Scandinavia or in Italy; but Professor Giglioli of Florence has kindly given me some useful references to works in which they are described. The following extract from his letter is very interesting: "I need not tell you that there being such hybrids is now generally accepted as a fact. Buffon (Supplements, tom. iii. p. 7, 1756) obtained one such hybrid in 1751 and eight in 1752. Sanson (La Culture, vol. vi. p. 372, 1865) mentions a case observed in the Vosges, France. Geoff. St. Hilaire (Hist. Nat. Gén. des reg. org., vol. iii. p. 163) was the first to mention, I believe, that in different parts of South America the ram is more usually crossed with the she-goat than the sheep with the he-goat. The well-known 'pellones' of Chile are produced by the second and third generation of such hybrids (Gay, 'Hist, de Chile,' vol. i. p. 466, Agriculture, 1862). Hybrids bred from goat and sheep are called 'chabin' in French, and 'cabruno' in Spanish. In Chile such hybrids are called 'carneros lanudos'; their breeding inter se appears to be not always successful, and often the original cross has to be recommenced to obtain the proportion of three-eighths of he-goat and five-eighths of sheep, or of three-eighths of ram and five-eighths of she-goat; such being the reputed best hybrids."
Supposedly, most sheep–goat hybrids die as embryos. Hybrid male mammals are often sterile, demonstrating a phenomenon known as Haldane's rule. The Haldane phenomenon may apply even when the parent species have the same number of chromosomes, as in most cat-species hybrids. It sometimes does not apply when the species chromosome number is different, as in wild horse (chromosome number = 66) with domestic horse (chromosome number = 64) hybrids. Hybrid female fertility tends to decrease with increasing divergence between the parent species' chromosomes. Presumably, this is due to mismatch problems during meiosis and the resulting production of eggs with unbalanced genetic complements. However, a buck–ewe hybrid born in 2014 died of pregnancy-related complications in 2018, raising the question of whether the parent-species combination influences hybrid fertility.
Blood transcriptome analysis of a buck–ewe hybrid and its parents revealed significant deviations from previously described imprinting schemes and a higher contribution from the goat genome to the genes expressed in the hybrid's blood. The buck–ewe hybrid shared 870 expressed genes with the maternal goat and 368 with the paternal sheep.
Alleged and confirmed cases
At the Botswana Ministry of Agriculture in 2000, a male sheep impregnated a female goat resulting in a live male offspring. This hybrid had 57 chromosomes, intermediate between sheep (54) and goats (60) and was intermediate between the two parent species in type. It had a coarse outer coat, a woolly inner coat, long goat-like legs and a heavy sheep-like body. Although infertile, the hybrid had a very active libido, mounting both ewes and does even when they were not in heat. He was castrated when he was 10 months old, as were the other kids and lambs in the herd.
A male sheep impregnated a female goat in New Zealand resulting in a mixed litter of kids and a female sheep–goat hybrid with 57 chromosomes. The hybrid was subsequently shown to be fertile when mated with a ram.
In France, natural mating of a doe with a ram produced a female hybrid carrying 57 chromosomes. This animal backcrossed in the veterinary college of Nantes to a ram delivered a stillborn and a living male offspring with 54 chromosomes.
In March 2014, a buck–ewe hybrid was born on a farm close to Göttingen in Germany. Also in March 2014, a male buck–ewe hybrid was born in Ireland.
There was a reported case of live births of twin geep on a farm in Ireland in 2018.
A live birth of a sheep–goat hybrid was reported on a farm in Tábor in the Czech Republic in 2020. Named Barunka, she had health complications after birth. Her owners did not know whether she was a goat or a sheep, since neither the goats nor the sheep accepted her; upon further inspection it was discovered that she was a sheep–goat hybrid.
In May 2021, a healthy doe–ram hybrid was born on a farm in Kentucky, USA despite complications during labor. Her status as a hybrid was confirmed by genetic testing: she has a hybrid karyotype of 57, XX.
Sheep–goat chimera
History
A sheep–goat chimera (sometimes called a geep in popular media) is a chimera produced by combining the embryos of a goat and a sheep; the resulting animal has cells of both sheep and goat origin. A sheep–goat chimera should not be confused with a sheep–goat hybrid, which can result when a goat mates with a sheep. The first sheep–goat chimeras were created by researchers at the Institute of Animal Physiology in Cambridge, England, by combining sheep embryos with goat embryos. They reported their results in 1984. The successful chimeras were a mosaic of goat and sheep cells and tissues.
Characteristics
In a chimera, each set of cells (germ line) keeps its own species' identity instead of being intermediate in type between the parental species. Because the chimera contains cells from (at least) two genetically different embryos, and each of these arose by fertilization of an egg by a sperm cell, it has (at least) four genetic parents. In contrast, a hybrid has only two. Although the individual cells in interspecies chimeras are entirely of one of the component species, their behaviour is influenced by the environment in which they find themselves. The sheep–goat chimeras have yielded several demonstrations of this. The most obvious was that, in the woolly areas of their fleece, tufts of goat wool (angora-type) grew intermingled with ordinary sheep wool, even though the goat breed used in the experiments did not exhibit any wool whatsoever.
Sheep–goat chimeras as a general rule may be assumed to be fertile, with the reservations that apply to chimeras generally (which again reflect that the parent embryos may have been of different sex, so that the animal, apart from being a chimera, may also be intersex). But in accordance with the mosaic- (as opposed to hybrid-) nature of the interspecies chimera, any individual sperm or egg cell it produces must be of either the pure sheep or the pure goat variety. Whether in fact viable germ cells of both species may be, or have been, produced within the same individual is unknown.
The term shoat is sometimes used for sheep–goat hybrids and chimeras, but that term more conventionally means a young piglet.
References
Notes
External links
Farmersjournal.ie
Bovid hybrids
Sheep
Goats
Intergeneric hybrids | Sheep–goat hybrid | [
"Biology"
] | 1,776 | [
"Intergeneric hybrids",
"Hybrid organisms"
] |
4,404,095 | https://en.wikipedia.org/wiki/Slumping | Slumping is a technique in which items are made in a kiln by means of shaping glass over molds at high temperatures.
The slumping of a pyrometric cone is often used to measure temperature in a kiln.
Technique
Slumping glass is a highly technical operation that is subject to many variations, both controlled and uncontrolled. When an item is being slumped in a kiln, the mold over which it is being formed (which can be made of either ceramic, sand or metal) must be coated with a release agent that will stop the molten glass from sticking to the mold. Such release agents, a typical one being boron nitride, give off toxic fumes when they are first heated and must be used in a ventilated area.
The glass is cut to the shape of the mold (but slightly larger to allow for shrinkage) and placed on top of it, before the kiln is heated.
The stages of the firing can be varied but typically begin with quite a rapid temperature climb until the glass reaches an "orange state", i.e., becomes flexible. At that point, gravity allows the glass to slump into the mold and the temperature is held constant for a period known as the "soak". Following this stage, the kiln is allowed to cool slowly so that the slumped glass can anneal before being removed from the kiln. If two differing colours of glass are used in a single piece of work, glass with the same CoE (coefficient of thermal expansion) must be used, or the finished piece will suffer fractures as the glasses shrink at differing rates and allow tension to build to the point of destruction. To compensate for this, many glass manufacturers produce glass matched to a common CoE; examples include Spectrum Glass System 96 or the Uroboros 96 series. The use of such glass keeps the cooling uniform and ensures that no tension builds up as the work cools.
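A firing of this kind can be represented as a sequence of ramp/soak segments. The sketch below uses invented temperatures and times purely for illustration, not as a recipe: real schedules depend on the particular glass, mold and kiln.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    ramp_c_per_hour: int   # heating/cooling rate; 9999 conventionally means "as fast as possible"
    target_c: int          # temperature to reach, in degrees Celsius
    hold_minutes: int      # soak time at the target temperature

# Hypothetical slump schedule (placeholder values only):
schedule = [
    Segment(ramp_c_per_hour=220,  target_c=680, hold_minutes=15),  # climb to slump heat, soak
    Segment(ramp_c_per_hour=9999, target_c=510, hold_minutes=60),  # cool quickly to anneal point, soak
    Segment(ramp_c_per_hour=80,   target_c=370, hold_minutes=0),   # slow anneal cool
    Segment(ramp_c_per_hour=9999, target_c=20,  hold_minutes=0),   # kiln off, cool to room temperature
]

for i, seg in enumerate(schedule, start=1):
    print(f"Segment {i}: ramp {seg.ramp_c_per_hour} °C/h to {seg.target_c} °C, "
          f"hold {seg.hold_minutes} min")
```

The slow-cool segment is the annealing stage described above; it is where a CoE mismatch between two glasses would lock in the stress that later fractures the piece.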
History
During the Roman period open vessels, such as bowls and plates, could be produced by forming a glass sheet over a core or former. This technique resulted in vessels with rough surfaces, which could then be ground or polished to a smooth finish. An additional technique, used in the production of Roman pillar-moulded bowls, utilised a slotted tool to impress ribs on the glass sheet prior to slumping. This created a bowl with a ribbed exterior, and these were then polished around the rim and sometimes given horizontal cut lines inside for further decoration.
See also
Glass art
Warm glass
References
Warm glass
Glass art
Glass production | Slumping | [
"Materials_science",
"Engineering"
] | 533 | [
"Glass engineering and science",
"Glass production"
] |
9,887,669 | https://en.wikipedia.org/wiki/Gunnison%20Tunnel | The Gunnison Tunnel is an irrigation tunnel constructed between 1905 and 1909 by the U.S. Bureau of Reclamation in Montrose County, Colorado. The tunnel diverts water from the Gunnison River to the arid Uncompahgre Valley around Montrose, Colorado.
History
At the time of its completion, it was the longest irrigation tunnel in the world and quickly made the area around Montrose into profitable agricultural lands. In 1972, the tunnel was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers (ASCE).
The idea for a tunnel is credited to Frank Lauzon, a miner and prospector. By the early 1890s he was farming in Montrose. Popular lore holds that the idea came to him in a dream in which the waters of the Gunnison River were brought to the valley. In the late 1890s, the campaign for the tunnel was led by Omer Madison Kem. National funding was approved in 1902.
As construction was undertaken, two advances in technology made work safer and easier. Jackhammers fed by a compressor replaced hand-turned drill bits to set holes for blasting charges, and dynamite replaced black powder for blasting. By 1906, shifts of up to 30 workers at a time worked in the tunnel.
The tunnel opened in 1909 to much fanfare with a dedication ceremony attended by President William Howard Taft.
It was listed on the National Register of Historic Places in 1979.
The tunnel is long and is in cross-section, with square corners at the bottom and an arched roof. It drops about over its length. At the deepest, it is about beneath the surface of Vernal Mesa.
In 2009, some communities in the Uncompahgre Valley celebrated the centennial of the Gunnison Tunnel's opening.
See also
List of Historic Civil Engineering Landmarks
List of tunnels documented by the Historic American Engineering Record in Colorado
References
Further reading
External links
American Society of Civil Engineers site - The Gunnison Tunnel article
Gunnison River
Tunnels in Colorado
Water tunnels in the United States
Water tunnels on the National Register of Historic Places
Transportation buildings and structures in Montrose County, Colorado
Curecanti National Recreation Area
Historic American Engineering Record in Colorado
Interbasin transfer
Industrial buildings and structures on the National Register of Historic Places in Colorado
National Register of Historic Places in Montrose County, Colorado
National Register of Historic Places in national parks
Transportation buildings and structures on the National Register of Historic Places in Colorado
Water supply infrastructure on the National Register of Historic Places
Historic Civil Engineering Landmarks
Tunnels completed in 1909 | Gunnison Tunnel | [
"Engineering",
"Environmental_science"
] | 494 | [
"Hydrology",
"Civil engineering",
"Interbasin transfer",
"Historic Civil Engineering Landmarks"
] |
9,889,395 | https://en.wikipedia.org/wiki/Data-Link%20Switching | Data-Link Switching (DLSw) is a tunneling protocol designed to tunnel unroutable, non-IP based protocols such as IBM Systems Network Architecture (SNA) and NBF over an IP network.
DLSw was initially documented in IETF RFC 1434 in 1993. In 1995 it was further documented in the IETF RFC 1795. DLSw version 2 was presented in 1997 in IETF RFC 2166 as an improvement to RFC 1795. Cisco Systems has its own proprietary extensions to DLSw in DLSw+. According to Cisco, DLSw+ is 100% IETF RFC 1795 compliant but includes some proprietary extensions that can be used when both devices are Cisco.
Some organisations are starting to replace DLSw tunneling with the more modern Enterprise Extender (EE) protocol, which is a feature of IBM APPN on z/OS systems. Microsoft refers to EE as IPDLC. Enterprise Extender uses UDP at the transport layer rather than the network layer.
Cisco deploy Enterprise Extender on their hardware via the IOS feature known as SNAsW (SNA Switch).
See also
Microsoft Host Integration Server
Synchronous Data Link Control
Systems Network Architecture
References
External links
RFC 1434 Data Link Switching: Switch-to-Switch Protocol
RFC 1795 DLSw Standard Version 1.0
RFC 2166 DLSw v2.0 Enhancements
Tunneling protocols
Data-Link Switching | Data-Link Switching | [
"Technology",
"Engineering"
] | 281 | [
"Computing stubs",
"Computer networks engineering",
"Tunneling protocols",
"Computer network stubs"
] |
9,891,358 | https://en.wikipedia.org/wiki/Zinc%20pest | Zinc pest (from German Zinkpest "zinc plague"), also known as zinc rot, mazak rot and zamak rot, is a destructive, intercrystalline corrosion process of zinc alloys containing lead impurities. While impurities of the alloy are the primary cause of the problem, environmental conditions such as high humidity (greater than 65%) may accelerate the process.
It was first discovered to be a problem in 1923, and primarily affects die-cast zinc articles that were manufactured during the 1920s through 1950s. The New Jersey Zinc Company developed zamak alloys in 1929 using 99.99% pure zinc metal to avoid the problem, and articles made after 1960 are usually considered free of the risk of zinc pest since the use of purer materials and more controlled manufacturing conditions make zinc pest degradation unlikely.
Affected objects may show surface irregularities such as small cracks and fractures, blisters or pitting. Over time, the material slowly expands, cracking, buckling and warping in an irreversible process that makes the object exceedingly brittle and prone to fracture, and can eventually shatter the object, destroying it altogether. Due to the expansion process, attached normal material may also be damaged. The occurrence and severity of zinc pest in articles made of susceptible zinc alloys depends both on the concentration of lead impurities in the metal and on the storage conditions of the article in the ensuing decades. Zinc pest is dreaded by collectors of vintage die-cast model trains, toys, or radios, because rare or otherwise valuable items can inescapably be rendered worthless as the process of zinc pest destroys them. Because castings of the same object were usually made from various batches of metal over the production process, some examples of a given toy or model may survive today completely unaffected, while other identical examples may have completely disintegrated. It has also affected carburetors, hubcaps, door handles and automobile trim on cars of the 1920s and 1930s.
Since the 1940s, some model railroad hobbyists have claimed, with varying degrees of success, that "pickling" zinc alloy parts – soaking them in vinegar or an oxalic acid solution for several minutes before painting and assembly – can prevent or delay the effects of zinc pest.
Engine parts of older vehicles or airplanes, and military medals made of zinc alloys, may also be affected. In addition, the post-1982 copper-plated zinc Lincoln cents have been known to be affected.
Zinc pest is not related to tin pest, and is also different from a superficial white corrosion oxidation process ("Weissrost") that affects some zinc articles.
See also
Bronze disease
References
Zinc
Corrosion | Zinc pest | [
"Chemistry",
"Materials_science"
] | 544 | [
"Materials degradation",
"Electrochemistry",
"Metallurgy",
"Corrosion"
] |
9,894,026 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Software%20Reuse | The International Conference on Software Reuse (ICSR), is the primary scientific conference on software reuse, domain analysis, and domain engineering.
ICSR includes software reuse researchers, practitioners, and managers. The conference provides an archival source for important reuse papers. The conference is also meant to provide reuse information and education to practitioners, and to be an ongoing platform for technology transfer.
Software reusability, the use of knowledge or artifacts from existing systems to build new ones, is a key software engineering technology important both to engineers and managers. Reuse research has been very active. Many organizations have reported reuse successes, yet there are still important research issues in systematic reuse. There is a need for reuse solutions that can be applied across domain and organization boundaries. The conference consists of technical presentations, parallel working groups, plenary sessions, demonstrations, and tutorials.
Topics include reuse metrics, case studies and experiments, copyright and legal issues, current issues in reuse libraries, distributed components, formal methods, design and validation of components, domain analysis and engineering, generators, and integration frameworks.
List of conferences:
References
External links
ReNews
Software engineering conferences
Reuse | International Conference on Software Reuse | [
"Engineering"
] | 242 | [
"Software engineering",
"Software engineering conferences",
"Software engineering stubs"
] |
9,894,678 | https://en.wikipedia.org/wiki/Machlett%20Laboratories | Machlett Laboratories was a Northeastern United States-based company that manufactured X-ray and high-power vacuum tubes. Machlett was a large producer of the tubes and developed accessories to be used with them as well.
For its contributions to World War II efforts, the US government gave it an "E" award in 1945. The company was bought by Varian in 1989.
History
The company began as E. Machlett and Son, which was founded in 1897 in New York City, United States as scientific glass makers.
Early days
Machlett Laboratories was created from E. Machlett & Sons in order to exploit the then-new technology of X-rays. The company made X-ray tubes from its founding until 1989, when it was bought by Varian. In addition to X-ray tubes, it manufactured high-power vacuum tubes for use in radio and TV broadcasting (such as the 857B mercury-vapor half-wave rectifier tube) and for industrial induction heating. The two sides of the business were of roughly equal size for most of the life of the company.
Machlett Laboratories, Inc. was established in 1934 at Hope Street and Camp Avenue in Stamford, Connecticut. This manufacturing operation began making X-ray tubes and became the largest producer of its kind in the world. Raymond Machlett was president of the company, which was founded by his father Robert, a scientist who made the first practical X-ray tube in America and devoted his life to making it safe and successful in the field of medicine. Robert Machlett worked with Madame Curie and other leading roentgenologists to make this happen.
They moved to 1063 Hope St. in Stamford, Connecticut, in the early 20th century, and remained there. Among their achievements was a counter made for Irène and Frédéric Joliot-Curie for use in their experiments on artificial radioactivity in 1934, which is now held in the Science Museum in Great Britain. The company was given an "E" award by the US government in 1945 for its contribution to the war effort.
In the mid 1940s they had a location at 220 E. 23rd Street, New York 10, N.Y., and were listed as Laboratory Apparatus & Chemicals. A plastic ruler from that time shows the "E" award.
Products
Transmitting tubes
Machlett was well known for its rather complete line of transmitting tubes, most particularly triodes.
For quite a while, Machlett's 6697s were the most-used Class B modulator tubes (as a push-pull pair) and Class C final tubes (as a single-ended pair) in 50,000-watt plate-modulated AM transmitters.
In a classical Doherty amplifier (a Class B "carrier" tube and a Class C "peak" tube), 50,000 watts could be achieved using only two 6697s, but the drive requirement was about 5,000 watts, so the driver was often a complete 5,000-watt transmitter. The driver power could be "passed through" (i.e., was not dissipated as heat) if the carrier tube was operated with a grounded grid and, as before, the peak tube was operated with a grounded cathode.
Later, chief competitor Eimac released tetrodes that rather quickly eclipsed Machlett's triodes, as tetrodes have higher gain (thereby requiring much lower driver power, usually about 1,000 watts for 50,000 watts out) and do not require "neutralization". Tetrodes can also be screen-grid modulated, especially in a Sainton-modified Doherty amplifier, where both the "carrier" and "peak" tubes may be operated in Class C.
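To put the drive-power figures quoted above in perspective, the stage power gain can be computed directly; the sketch below simply restates the article's numbers in decibels.

```python
import math

def power_gain_db(p_out_watts: float, p_drive_watts: float) -> float:
    """Stage power gain in dB from output and drive power."""
    return 10 * math.log10(p_out_watts / p_drive_watts)

# Figures quoted in the text:
triode_gain = power_gain_db(50_000, 5_000)   # Machlett 6697 triode stage: ~10 dB
tetrode_gain = power_gain_db(50_000, 1_000)  # later Eimac tetrode stage: ~17 dB

print(f"Triode stage gain:  {triode_gain:.1f} dB")
print(f"Tetrode stage gain: {tetrode_gain:.1f} dB")
```

The roughly 7 dB difference is why a tetrode final could be driven by a modest exciter, whereas the triode Doherty design effectively needed a complete 5,000-watt transmitter as its driver.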
6697s were also employed in RCA's upgraded Ampliphase transmitter, the BTA-50H.
CPI's Econco division remanufactures Eimac and Machlett power tubes.
X-ray tubes
They were the first company to utilize the concept of the rotating anode, which is now all but universal in medical X-ray systems. Towards the end of the time of manufacture, they were producing oil-circulating X-ray tubes with a heat exchanger attached (for use with computed tomography (CT) scanners), with 5-inch diameter rotating anodes formed from tungsten-rhenium alloy and molybdenum, with a large mass of graphite attached to act as a heat sink. Smaller tubes without graphite heat sinks had highly radiative coatings to disperse the generated heat into the oil surrounding the tube itself.
The manufacture of X-ray tubes was licensed to two other companies, GEC Medical and Comet SA of Bern in Switzerland.
Machlett X-ray tube
The Machlett X-ray tube was produced to "provide electrostatic protection for the filament (cathode) so as to permit long life to be achieved at operating voltages in the range 100-300 kV".
The X-ray tube was designed and manufactured by E. Machlett & Son, who were specialists in scientific glass instruments. The American company, which was established in New York in 1897, began as a single shop and soon grew into an internationally recognized firm. The Machlett X-ray tube was patented in April 1934; one of its tubes, at the University of Melbourne's School of Physics, is from 1937. These X-ray tubes may have been used by Professor T. H. Laby's X-ray group, for whom X-ray research was a priority topic. This interest was sparked by the appointment in 1889 of Professor T. R. Lyle. Lyle, who was head of the school until 1915, is thought to have been the first person in Australia to have taken an X-ray photograph. A photocopy of this photograph can be found in the School of Physics Archive. For this particular experiment, Lyle actually made his own X-ray tube. His successor, Laby, continued to work with X-rays. During the 1920s he worked on the X-ray spectra of atoms and in 1930 he, along with Dr C. E. Eddy, published Quantitative Analysis by X-Ray Spectroscopy. Also with Eddy, Laby produced the paper "Sensitivity of Atomic Analysis by X-rays". Laby went on to have an X-ray spectrograph of his own design manufactured by Adam Hilger Ltd.
Accessories
As well as X-ray tubes, they manufactured collimators – to define the beam size – and three marks of "Dynalyzer", an invasive instrument for measuring all the important parameters in an X-ray tube. This consisted of an HV unit insulated with SF6 and an indicating unit. A separate radiation monitor could be used with it, and an oscilloscope could also be attached if desired. Some 2,500 of the most recent version were sold, and the device survived the sale of the company to Varian and later buyers, remaining in production from the early 1980s to around 1995.
There were three generations of the Dynalyzer produced. Maintenance on the Dynalyzer is available from its inventor, Dr. Jon Shapiro, at http://giciman.com
References
External links
Showing purchase of E. Machlett & Sons by Fisher Scientific
Shows Machlett personnel being given E award in 1945
Vacuum tubes
Medical imaging
Defunct companies based in New York (state) | Machlett Laboratories | [
"Physics"
] | 1,505 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
9,894,956 | https://en.wikipedia.org/wiki/Alphachrysovirus | Alphachrysovirus is a genus of double-stranded RNA viruses. It is one of two genera in the family Chrysoviridae. They infect fungi, in particular Penicillium. Their name is derived from the Greek word chrysos which means yellow-green. There are 20 species in this genus.
Structure
Viruses in the genus Alphachrysovirus are non-enveloped, with icosahedral geometries, and T=1, T=2 symmetry. The diameter is around 35–40 nm.
Genome
Genomes are linear double-stranded RNA, around 12.5 kbp in total length, and code for four proteins. The genome comprises three double-stranded RNA segments, all of which have extended, highly conserved terminal sequences at both ends.
Life cycle
Viral replication is cytoplasmic. Entry is achieved by penetration of the host cell. Replication follows the double-stranded RNA virus replication model, and transcription uses the double-stranded RNA virus transcription method. The virus exits the host cell by cell-to-cell movement. Fungi serve as the natural host.
Taxonomy
The following species are recognized:
Amasya cherry disease associated chrysovirus
Anthurium mosaic-associated chrysovirus
Aspergillus fumigatus chrysovirus
Brassica campestris chrysovirus
Chrysothrix chrysovirus 1
Colletotrichum gloeosporioides chrysovirus
Cryphonectria nitschkei chrysovirus 1
Fusarium oxysporum chrysovirus 1
Helminthosporium victoriae virus 145S
Isaria javanica chrysovirus
Macrophomina phaseolina chrysovirus
Penicillium brevicompactum virus
Penicillium chrysogenum virus
Penicillium cyaneofulvum virus
Persea americana chrysovirus
Raphanus sativus chrysovirus
Salado alphachrysovirus
Shuangao insect-associated chrysovirus
Verticillium dahliae chrysovirus 1
Zea mays chrysovirus 1
References
External links
ICTV Report Chrysoviridae
Viralzone: Chrysovirus
Chrysoviridae
Riboviria
Virus genera | Alphachrysovirus | [
"Biology"
] | 464 | [
"Viruses",
"Riboviria"
] |
9,895,073 | https://en.wikipedia.org/wiki/Transcription%20factor%20II%20H | Transcription factor II H (TFIIH) is an important protein complex, having roles in transcription of various protein-coding genes and DNA nucleotide excision repair (NER) pathways. TFIIH first came to light in 1989 when general transcription factor-δ or basic transcription factor 2 was characterized as an indispensable transcription factor in vitro. This factor was also isolated from yeast and finally named TFIIH in 1992.
TFIIH consists of ten subunits, 7 of which (ERCC2/XPD, ERCC3/XPB, GTF2H1/p62, GTF2H4/p52, GTF2H2/p44, GTF2H3/p34 and GTF2H5/TTDA) form the core complex. The cyclin-activating kinase-subcomplex (CDK7, MAT1, and cyclin H) is linked to the core via the XPD protein. Two of the subunits, ERCC2/XPD and ERCC3/XPB, have helicase and ATPase activities and help create the transcription bubble. In a test tube, these subunits are only required for transcription if the DNA template is not already denatured or if it is supercoiled.
Two other TFIIH subunits, CDK7 and cyclin H, phosphorylate serine amino acids on the RNA polymerase II C-terminal domain and possibly other proteins involved in the cell cycle. Next to a vital function in transcription initiation, TFIIH is also involved in nucleotide excision repair.
History of TFIIH
Before it was named TFIIH, the factor went by several names. It was first isolated in 1989 from rat liver, where it was known as transcription factor delta. When identified in cancer cells it was known at the time as basic transcription factor 2, and when isolated from yeast it was termed transcription factor b. Finally, in 1992, it became known as TFIIH.
Structure of TFIIH
TFIIH is a ten‐subunit complex; seven of these subunits comprise the “core” whereas three comprise the dissociable “CAK” (CDK Activating Kinase) module. The core consists of subunits XPB, XPD, p62, p52, p44, p34 and p8 while CAK is composed of CDK7, cyclin H, and MAT1.
Functions
General function of TFIIH:
Initiation of transcription of protein-coding genes.
DNA nucleotide excision repair (NER).
TFIIH is a general transcription factor that acts to recruit RNA Pol II to the promoters of genes. It functions as a DNA translocase, tracking along the DNA, reeling DNA into the Pol II cleft, and creating torsional strain that leads to DNA unwinding. It also unwinds DNA after a DNA lesion has been recognized by either the global genome repair (GGR) pathway or the transcription-coupled repair (TCR) pathway of NER. Purified TFIIH also has a role in stopping further RNA synthesis in the presence of the cyclic peptide α-amanitin.
Trichothiodystrophy
Mutation in genes (XPB), (XPD) or (TTDA) cause trichothiodystrophy, a condition characterized by photosensitivity, ichthyosis, brittle hair and nails, intellectual impairment, decreased fertility and/or short stature.
Disease
Genetic polymorphisms of genes that encode subunits of TFIIH are known to be associated with increased cancer susceptibility in many tissues, e.g., skin, breast and lung tissue. Mutations in the subunits (such as XPD and XPB) can lead to a variety of diseases, including xeroderma pigmentosum (XP) or XP combined with Cockayne syndrome. In addition to genetic variations, virus-encoded proteins also target TFIIH.
DNA repair
TFIIH participates in nucleotide excision repair (NER) by opening the DNA double helix after damage is initially recognized. NER is a multi-step pathway that removes a wide range of different damages that distort normal base pairing, including bulky chemical damages and UV-induced damages. Individuals with mutational defects in genes specifying protein components that catalyze the NER pathway, including the TFIIH components, often display features of premature aging (see DNA damage theory of aging).
Inhibitors
Potent, bioactive natural products like triptolide, which inhibit mammalian transcription via inhibition of the XPB subunit of the general transcription factor TFIIH, have recently been reported as glucose conjugates for targeting hypoxic cancer cells with increased glucose transporter expression.
Mechanism of TFIIH repair of damaged DNA sequences
References
External links
Gene expression
Transcription factors | Transcription factor II H | [
"Chemistry",
"Biology"
] | 993 | [
"Biotechnology stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Molecular genetics",
"Cellular processes",
"Induced stem cells",
"Molecular biology",
"Biochemistry",
"Transcription factors"
] |
9,895,813 | https://en.wikipedia.org/wiki/Cyclin%20D/Cdk4 | The Cyclin D/Cdk4 complex is a multi-protein structure consisting of the proteins Cyclin D and cyclin-dependent kinase 4, or Cdk4, a serine-threonine kinase. This complex is one of many cyclin/cyclin-dependent kinase complexes that are the "hearts of the cell-cycle control system" and govern the cell cycle and its progression. As its name would suggest, the cyclin-dependent kinase is only active and able to phosphorylate its substrates when it is bound by the corresponding cyclin. The Cyclin D/Cdk4 complex is integral for the progression of the cell from the Growth 1 phase to the Synthesis phase of the cell cycle, for the Start or G1/S checkpoint.
Basic Mechanism
Under non-dividing conditions (when the cell is in the G0 phase of the cell cycle), Retinoblastoma protein (Rb) is bound to the E2F transcription factor. During the G0 to G1 transition, growth factor signaling stimulates the synthesis of Cyclin D protein, whose concentration increases until it peaks around the G1/S transition.
In the early to middle stages of G1 phase, Cyclin D binds to the constitutively expressed Cdk4 protein, which creates an activated Cyclin D/Cdk4 complex. Once activated, the Cyclin D/Cdk4 complex docks at a C-terminal helix on the Retinoblastoma protein (pRb), driven by a recognition site for the C-terminal helix on Cyclin D. Upon docking, Cyclin D/Cdk4 mono-phosphorylates the Rb protein, which disrupts the Rb/E2F interaction and is sufficient to initiate E2F induction. E2F transcriptionally activates a number of downstream target genes required in the next stages of the cell cycle and in DNA replication by binding to their DNA promoter regions. These genes include the cyclin E and A genes, and other genes associated with the G1/S transition.
Synthesis of cyclin E and subsequent binding to constitutively expressed Cdk2 leads to a surge in activity of the cyclin E/Cdk2 complex, which is responsible for hyperphosphorylation of Rb. Rb hyperphosphorylation leads to the complete inactivation of Rb and release of E2F, initiating a positive feedback loop between E2F and cyclin E/Cdk2 that stimulates expression of E2F-driven G1/S transition genes, and, at a certain level, activates the bistable switch that drives irreversible progression into S phase.
Regulation
There are multiple regulation points within this signaling pathway. First and foremost, under non-dividing conditions multiple proteins can inhibit the Cyclin D/Cdk4 complex by binding Cdk4 and inhibiting its association with Cyclin D. Primarily, this is accomplished by p27 but it can also be done by p16 and p21. However, this pathway is stimulated by the upstream binding of growth factors (GF), either from within the cell itself or from neighboring cells. Stimulation by growth factors activates any of a number of receptor tyrosine kinase (RTK) proteins. These receptor tyrosine kinases in turn phosphorylate and activate many other proteins, including Fos/Jun/Myc and phosphatidylinositol 3 kinase (PI-3-K). Fos/Jun/Myc helps to activate the Cyclin D/Cdk4 complex. Phosphatidylinositol 3 kinase phosphorylates p27 (or p16 or p21) and SCF/Skp1. The phosphorylation of p27 inhibits p27's ability to bind Cdk4, thus freeing Cdk4 to associate with Cyclin D and form an active complex. SCF/Skp1 (an E3 ubiquitin ligase) helps to further inhibit p27 and thus further help activate the Cyclin D/Cdk4 complex. Also, p27 acts as an inhibitor of Cyclin E and Cyclin A. So, its inhibition also facilitates the activation of downstream mitotic processes, as noted above.
There are also other peripheral regulators of the Cyclin D/Cdk4 complex. In megakaryocytes, it is regulated by the GATA-1 transcription factor. GATA-1 serves as an activating transcription factor of Cyclin D and potentially also as a repressor of the Cyclin D inhibitor, p16. Cdk4 also requires activation upon complex assembly with Cyclin D. This is accomplished by a Cdk activating kinase (CAK), which phosphorylates Cdk4 at threonine 172.
Cancer
Disruptions in the Cyclin D/Cdk4 Axis in Cancer
The function of the Cyclin D/Cdk4 complex suggests an obvious link to cancer and tumorigenesis. In fact, disruptions in the Cyclin D/Cdk4 axis that lead to increased Cyclin D/Cdk4 activity have been detected in many cancers. There are a number of drivers of these disruptions.
First, tumors can overexpress Cyclin D1, as has been found in breast and pancreatic cancer.
Second, tumors can have mutations or amplifications in the Cdk4 protein, as has been found in melanoma and squamous cell carcinoma of the head and neck.
Third, tumors can experience reduction in or complete loss of negative regulators of Cyclin D/Cdk4, either by mutation, deletion, or downregulation of the inhibitors. Homozygous deletions in p16, an INK4-family inhibitor of Cdk4, have been found in over 50% of gliomas, and mutations in p16 have been found in numerous cancer types including familial melanomas; lymphomas; and esophageal, pancreatic, and non-small cell lung cancers. Decreased expression of p27, a CIP/KIP-family inhibitor, has been found in a number of colon, breast, prostate, liver, lung, bladder, ovary, and stomach cancers; it is an indicator of poorer prognosis in these cancers.
Fourth, tumors can downregulate miRNAs that target Cdk4, as has been found in bladder cancer.
Lastly, tumors can have dysregulation in upstream oncogenic signaling pathways like the phosphatidylinositol 3-kinase (PI3K) pathway, the mitogen-activated protein kinase (MAPK) pathway, the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) pathway, and steroid hormone signaling pathways that promote Cyclin D/Cdk4 activity.
Disruption of the Cyclin D/Cdk4 axis through any of these mechanisms induces phosphorylation of Rb, transcription of E2F-driven genes, uncontrolled progression through the G1/S checkpoint, and ultimately cancer cell growth.
Selective Cdk4/6 Inhibitors
Given the role of Cyclin D/Cdk4 in cancer progression, the development of selective Cdk4/6 inhibitors has been of increased interest in recent years. Currently, three Cdk4/6 inhibitors have been approved or are in late-stage development: palbociclib, ribociclib, and abemaciclib. All three of these inhibitors are ATP-competitive, orally administered medications.
Thus far, selective Cdk4/6 inhibitors have shown the most promise when used in combination with other anti-estrogen therapies for the treatment of hormone-receptor-positive (HR+) advanced breast cancer. In HR+ breast cancer, cells retain wildtype Rb expression and have overexpressed Cyclin D1.
Recent Stage III clinical trial results from the MONALEESA-2 study have indicated that ribociclib in combination with the nonsteroidal aromatase inhibitor letrozole increased median overall survival by 12.5 months in HR+, human epidermal growth factor receptor 2 (HER2)-negative postmenopausal breast cancer patients compared to treatment with letrozole alone.
Results from the PALOMA-2 study showed that treatment with palbociclib and letrozole increased progression-free survival by 10.3 months compared to treatment with letrozole alone for patients with previously untreated HR-positive, HER2-negative breast cancer.
Results from the MONARCH-3 study have shown that treatment with abemaciclib in combination with letrozole or anastrozole increased median progression-free survival by 13.42 months for postmenopausal patients with untreated HR+, HER2-negative breast cancer.
Additional studies into the efficacy of these combination therapies on advanced breast cancer survival in different settings are ongoing. Based on the encouraging results from the clinical trials, additional studies are also underway to investigate the effect of the three selective Cdk4/6 inhibitors on other neoplasms like non-small cell lung cancer. Of note, some patients in pre-clinical and clinical settings have not responded to the Cdk4/6 inhibitors or have become resistant to them. Research into the mechanisms behind these clinical outcomes is still in progress.
Non-Canonical Roles of Cyclin D/Cdk4
In addition to its canonical role in promoting progression through the cell cycle, Cyclin D/Cdk4 also plays a role in regulating cell differentiation and metabolism in a variety of contexts.
Cell Differentiation
Rb can promote cell differentiation by interacting with multiple different cell-type-specific transcription factors. These transcription factors include MyoD and MEF2, which regulate muscle cell differentiation, and RUNX2, which regulates osteoblast differentiation. When Cyclin D/Cdk4 phosphorylates and inactivates Rb, Rb's role in driving cell differentiation via interaction with these transcription factors is inhibited. Cyclin D/Cdk4 activity can also directly block the association of MEF2 with GRIP-1, a transcription co-activator, which inhibits MEF2's ability to induce muscle gene expression and subsequent differentiation.
Cyclin D/Cdk4 activity can also regulate cell differentiation through Rb-independent pathways. For instance, CyclinD/Cdk4 activity has been shown to phosphorylate the transcription factor GATA4, which targets it for degradation and inhibits differentiation of cardiomyocytes. Additionally, Cyclin D/Cdk4 activity is thought to block neurogenesis in neural stem cells and promote the expansion of basal progenitors.
Metabolism
Gluconeogenesis in the liver is critical for survival during times of fasting and starvation. Cyclin D1/Cdk4 activity has been shown to play a role in regulating glucose homeostasis by suppressing hepatic gluconeogenesis via phosphorylation-induced inhibition of the peroxisome proliferator-activated receptor γ coactivator-1α (PGC1α). PGC1α is a transcriptional coactivator that drives the gene expression programming for gluconeogenesis in the liver. Additional research has shown that the Cyclin D/Cdk4/Rb/E2F pathway also influences the expression of Kir6.2, a subunit of the ATP-sensitive K+ channel that regulates glucose-induced insulin secretion. When the Cyclin D/Cdk4 complex is inhibited, Kir6.2 expression is downregulated, and there is impaired insulin secretion and glucose intolerance when tested in mouse models. Given that glucose homeostasis is dysregulated in diabetes, there is ongoing interest in whether the Cyclin D/Cdk4 complex could be a potential target for disease treatment.
See also
Human papillomavirus
References
Cell cycle
Human proteins
Protein complexes
Transcription factors
es:Ciclina D/Cdk4 | Cyclin D/Cdk4 | [
"Chemistry",
"Biology"
] | 2,553 | [
"Gene expression",
"Signal transduction",
"Cellular processes",
"Induced stem cells",
"Cell cycle",
"Transcription factors"
] |
17,900,572 | https://en.wikipedia.org/wiki/Overall%20equipment%20effectiveness | Overall equipment effectiveness (OEE) is a measure of how well a manufacturing operation is utilized (facilities, time and material) compared to its full potential, during the periods when it is scheduled to run. It identifies the percentage of manufacturing time that is truly productive. An OEE of 100% means that only good parts are produced (100% quality), at the maximum speed (100% performance), and without interruption (100% availability).
Measuring OEE is a manufacturing best practice. By measuring OEE and the underlying losses, important insights can be gained on how to systematically improve the manufacturing process. OEE is an effective metric for identifying losses, benchmarking progress, and improving the productivity of manufacturing equipment (i.e., eliminating waste). Reliable OEE monitoring is best achieved by collecting all data automatically, directly from the machines.
Total effective equipment performance (TEEP) is a closely related measure which quantifies OEE against calendar hours rather than only against scheduled operating hours. A TEEP of 100% means that the operations have run with an OEE of 100% 24 hours a day and 365 days a year (100% loading).
The term OEE was coined by Seiichi Nakajima. It is based on the Harrington Emerson way of thinking regarding labor efficiency. The generic form of OEE allows comparison between manufacturing units in differing industries. It is not however an absolute measure and is best used to identify scope for process performance improvement, and how to get the improvement.
OEE measurement is also commonly used as a key performance indicator (KPI) in conjunction with lean manufacturing efforts to provide an indicator of success. OEE can be illustrated by a brief discussion of the six metrics that comprise the system (the "Six Big Losses").
Calculations for OEE and TEEP
The OEE of a manufacturing unit is calculated as the product of three separate components:
Availability: percentage of scheduled time that the operation is available to operate. Often referred to as Uptime.
Performance: speed at which the Work Center runs as a percentage of its designed speed.
Quality: Good Units produced as a percentage of the Total Units Started. It is commonly referred to as the first pass yield (FPY).
To calculate the Total Effective Equipment Performance (TEEP), the OEE is multiplied by a fourth component:
Loading: percentage of total calendar time that is actually scheduled for operation.
The calculations of OEE are not particularly complicated, but care must be taken as to standards that are used as the basis. Additionally, these calculations are valid at the work center or part number level but become more complicated if rolling up to aggregate levels.
9 Major Downtime Losses Affect Availability
Machine broken
Setup time
Machine adjustment
Quality issues from material
Material missing
Operations team member missing
Tool change
Startup loss
Other-Miscellaneous
Overall equipment effectiveness
Each of the three components of the OEE points to an aspect of the process that can be targeted for improvement. OEE may be applied to any individual Work Center, or rolled up to Department or Plant levels. This tool also allows for drilling down for very specific analysis, such as a particular Part Number, Shift, or any of several other parameters.
It is unlikely that any manufacturing process can run at 100% OEE. Many manufacturers benchmark their industry to set a challenging target; 85% is not uncommon.
OEE is calculated with the formula (Availability)*(Performance)*(Quality)
Using the examples given below:
(Availability = 86.6%) * (Performance = 93.1%) * (Quality = 91.3%) = (OEE = 73.6%)
Alternatively, and often more simply, OEE can be calculated by dividing the minimum time needed to produce the parts under optimal conditions by the actual time needed to produce the parts. For example:
Total Time: 8-hour shift or 28,800 seconds, producing 14,400 parts, or one part every 2 seconds.
Fastest possible cycle time is 1.5 seconds, hence only 21,600 seconds would have been needed to produce the 14,400 parts. The remaining 7,200 seconds or 2 hours were lost.
The OEE is now the 21,600 seconds divided by 28,800 seconds (same as minimal 1.5 seconds per part divided by 2 actual seconds per part), or 75%.
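This time-based shortcut translates directly into a one-line calculation (a minimal sketch using the figures from the example above; the function name is ours, not from any standard library):

```python
def oee_time_based(ideal_cycle_time_s: float, total_time_s: float, parts_produced: int) -> float:
    """OEE as (minimum time needed under ideal conditions) / (actual time used)."""
    return (ideal_cycle_time_s * parts_produced) / total_time_s

# 8-hour shift (28,800 s), 14,400 parts, fastest possible cycle time 1.5 s/part:
print(oee_time_based(1.5, 28_800, 14_400))  # 0.75, i.e. 75%
```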
Total effective equipment performance
Whereas OEE measures efficiency based on scheduled hours, TEEP measures efficiency against calendar hours, i.e.: 24 hours per day, 365 days per year.
TEEP, therefore, reports the 'bottom line' utilization of assets.
TEEP = Loading * OEE
Loading
The Loading portion of the TEEP Metric represents the percentage of time that an operation is scheduled to operate compared to the total Calendar Time that is available. The Loading Metric is a pure measurement of Schedule efficiency and is designed to exclude the effects of how well that operation may perform.
Calculation: Loading = Scheduled Time / Calendar Time
Example:
A given Work Center is scheduled to run 5 Days per Week, 24 Hours per Day.
For a given week, the Total Calendar Time is 7 Days at 24 Hours.
Loading = (5 days x 24 hours) / (7 days x 24 hours) = 71.4%
Availability
The Availability portion of the OEE Metric represents the percentage of scheduled time that the operation is available to operate. The Availability Metric is a pure measurement of Uptime that is designed to exclude the effects of Quality and Performance. The losses due to wasted availability are called availability losses.
Example:
A given Work Center is scheduled to run for an 8-hour (480-minute) shift with a 30-minute scheduled break and during the break the lines stop, and unscheduled downtime is 60 minutes.
The scheduled time = 480 minutes - 30 minutes = 450 minutes.
Operating Time = 480 Minutes – 30 Minutes Schedule Loss – 60 Minutes Unscheduled Downtime = 390 Minutes
Calculation: Availability = operating time / scheduled time
Availability = 390 minutes / 450 minutes = 86.6%
Performance and productivity
Calculation: Performance (Productivity) = (Parts Produced * Ideal Cycle Time) / Operating time
Example:
A given Work Center is scheduled to run for an 8-hour (480-minute) shift with a 30-minute scheduled break.
Operating Time = 450 Min Scheduled – 60 Min Unscheduled Downtime = 390 Minutes
The Standard Rate for the part being produced is 40 Units/Hour or 1.5 Minutes/Unit
The Work Center produces 242 Total Units during the shift. Note: The basis is Total Units, not Good Units. The Performance metric does not penalize for Quality.
Time to Produce Parts = 242 Units * 1.5 Minutes/Unit = 363 Minutes
Performance (Productivity) = 363 Minutes / 390 Minutes = 93.1%
Quality
The Quality portion of the OEE Metric represents the Good Units produced as a percentage of the Total Units Started. The Quality Metric is a pure measurement of Process Yield that is designed to exclude the effects of Availability and Performance. The losses due to defects and rework are called quality losses and quality stops.
Reworked units which have been corrected are only measured as unscheduled downtime while units being scrapped can affect both operation time and unit count.
Calculation: Quality = (Units produced - defective units) / (Units produced)
Example:
242 Units are produced. 21 are defective.
(242 units produced - 21 defective units) = 221 units
221 good units / 242 total units produced = 91.32%
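Putting the worked examples from this section together (a small sketch; the numbers are exactly the ones used above):

```python
def availability(operating_min: float, scheduled_min: float) -> float:
    return operating_min / scheduled_min

def performance(parts: int, ideal_cycle_min: float, operating_min: float) -> float:
    return parts * ideal_cycle_min / operating_min

def quality(parts: int, defects: int) -> float:
    return (parts - defects) / parts

a = availability(390, 450)        # ~0.867
p = performance(242, 1.5, 390)    # ~0.931
q = quality(242, 21)              # ~0.913
oee = a * p * q                   # ~0.737 (the text's 73.6% uses rounded components)
loading = (5 * 24) / (7 * 24)     # ~0.714, from the Loading example
teep = loading * oee
print(f"OEE = {oee:.1%}, TEEP = {teep:.1%}")
```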
"Six Big Losses"
To be able to better determine the sources of the greatest loss and to target the areas that should be improved to increase performances, these categories (Availability, Performance and Quality) have been subdivided further into what is known as the 'Six Big Losses' to OEE.
These are categorized as follows, using the standard assignment of the Six Big Losses to the three OEE components:
Availability losses: equipment breakdowns; setup and adjustments.
Performance losses: idling and minor stoppages; reduced speed.
Quality losses: startup rejects; production rejects (defects and rework).
The reason for identifying the losses in these categories is so that specific countermeasures can be applied to reduce the loss and improve the overall OEE.
Total Productive Maintenance
Continuous improvement in OEE is the goal of TPM (Total Productive Maintenance). Specifically, the goal of TPM as set out by Seiichi Nakajima is "The continuous improvement of OEE by engaging all those that impact on it in small group activities". To achieve this, the TPM toolbox sets out a Focused improvement tactic to reduce each of the six types of OEE loss. For example, the Focused improvement tactic to systematically reduce breakdown risk sets out how to improve asset condition and standardise working methods to reduce human error and accelerated wear.
Combining OEE with Focused improvement converts OEE from a lagging to a leading indicator. The first Focused improvement stage of OEE improvement is to achieve a stable OEE, one which varies by only around 5% from the mean for a representative production sample. Once asset efficiency is stable and no longer impacted by variability in equipment wear rates and working methods, the second stage of OEE improvement (optimisation) can be carried out to remove chronic losses. Combining OEE and TPM Focused improvement tactics creates a leading indicator that can be used to guide performance management priorities. As the TPM process delivers these gains through small cross-functional improvement teams, the process of OEE improvement raises front-line team engagement/problem ownership, collaboration and skill levels. It is this combination of OEE as a KPI, TPM Focused improvement tactics and front-line team engagement that locks in the gains and delivers the TPM goal of year-on-year improvement in OEE.
Heuristic
OEE is useful as a heuristic, but can break down in several circumstances. For example, it may be far more costly to run a facility at certain times. Performance and quality may not be independent of each other or of availability and loading. Experience may develop over time. Since the performance of shop floor managers is at least sometimes compared to the OEE, these numbers are often not reliable, and there are numerous ways to fudge these numbers.

OEE has properties of a geometric mean. As such it punishes variability among its subcomponents. For example, 20% * 80% = 16%, whereas 50% * 50% = 25%. When there are asymmetric costs associated with one or more of the components, then the model may become less appropriate.

Consider a system where the cost of error is exceptionally high. In such a condition, higher quality may be far more important in a proper evaluation of efficiency than performance or availability. OEE also to some extent assumes a closed system and a potentially static one. If one can bring in additional resources (or lease out unused resources to other projects or business units) then it may be more appropriate, for example, to use an expected net present value analysis. Variability in flow can also introduce important costs and risks that may merit further modeling. Sensitivity analysis and measures of change may be helpful.
Further reading
OEE and derived indicators TEEP, PEE, OAE, OPE, OFE, OTE and CTE, MES Center Association
Everything You Need to Know About OEE, Manufacturing Tomorrow
Overall Equipment Effectiveness (OEE) – What is OEE & How is OEE calculated?
See also
Overall labor effectiveness
Total productive maintenance
References
Lean manufacturing
Production planning | Overall equipment effectiveness | [
"Engineering"
] | 2,270 | [
"Lean manufacturing"
] |
17,904,252 | https://en.wikipedia.org/wiki/Mean%20kinetic%20temperature | Mean kinetic temperature (MKT) is a simplified way of expressing the overall effect of temperature fluctuations during storage or transit of perishable goods. The MKT is used to predict the overall effect of temperature fluctuations on perishable goods. It has more recently been applied to the pharmaceutical industry.
The mean kinetic temperature can be expressed as:

$$T_K = \cfrac{\frac{\Delta H}{R}}{-\ln\left(\dfrac{t_1 e^{-\Delta H/(R T_1)} + t_2 e^{-\Delta H/(R T_2)} + \cdots + t_n e^{-\Delta H/(R T_n)}}{t_1 + t_2 + \cdots + t_n}\right)}$$

Where:
$T_K$ is the mean kinetic temperature in kelvins
$\Delta H$ is the activation energy (in kJ mol−1)
$R$ is the gas constant (in J mol−1 K−1)
$T_1$ to $T_n$ are the temperatures at each of the sample points in kelvins
$t_1$ to $t_n$ are time intervals at each of the sample points
When the temperature readings are taken at the same interval (i.e., $t_1 = t_2 = \cdots = t_n$), the above equation is reduced to:

$$T_K = \cfrac{\frac{\Delta H}{R}}{-\ln\left(\dfrac{e^{-\Delta H/(R T_1)} + e^{-\Delta H/(R T_2)} + \cdots + e^{-\Delta H/(R T_n)}}{n}\right)}$$

Where:
$n$ is the number of temperature sample points
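The equal-interval formula translates directly into code (a sketch; the activation energy of 83.144 kJ/mol used below is a common convention in the pharmaceutical literature chosen so that ΔH/R = 10,000 K, not a universal constant):

```python
import math

def mean_kinetic_temperature(temps_kelvin, delta_h=83.144e3, r=8.3144):
    """MKT for equally spaced temperature samples, returned in kelvins."""
    n = len(temps_kelvin)
    s = sum(math.exp(-delta_h / (r * t)) for t in temps_kelvin) / n
    return (delta_h / r) / (-math.log(s))

# Readings fluctuating around 26 °C; MKT weights excursions to higher
# temperatures more heavily, so it exceeds the arithmetic mean (299.15 K):
print(mean_kinetic_temperature([296.15, 298.15, 303.15]))  # ~299.6 K
```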
Temperature
Pharmaceutical industry | Mean kinetic temperature | [
"Physics",
"Chemistry",
"Biology"
] | 168 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Pharmacology",
"Physical quantities",
"Life sciences industry",
"Pharmaceutical industry",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
17,904,953 | https://en.wikipedia.org/wiki/Rational%20dependence | In mathematics, a collection of real numbers is rationally independent if none of them can be written as a linear combination of the other numbers in the collection with rational coefficients. A collection of numbers which is not rationally independent is called rationally dependent. For instance we have the following example.
Because if we let , then .
Formal definition
The real numbers ω1, ω2, ... , ωn are said to be rationally dependent if there exist integers k1, k2, ... , kn, not all of which are zero, such that

$$k_1 \omega_1 + k_2 \omega_2 + \cdots + k_n \omega_n = 0.$$

If such integers do not exist, then the numbers are said to be rationally independent. This condition can be reformulated as follows: ω1, ω2, ... , ωn are rationally independent if the only n-tuple of integers k1, k2, ... , kn such that

$$k_1 \omega_1 + k_2 \omega_2 + \cdots + k_n \omega_n = 0$$

is the trivial solution in which every ki is zero.
The real numbers form a vector space over the rational numbers, and this is equivalent to the usual definition of linear independence in this vector space.
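Integer relations of this kind can be searched for numerically, for example with the PSLQ algorithm as implemented in the mpmath library (a sketch; the example numbers are the ones used above, and the returned coefficients are determined only up to sign):

```python
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 30  # working precision in decimal digits
print(pslq([mpf(1), sqrt(2), 1 + sqrt(2)]))  # e.g. [1, 1, -1]: a relation exists
print(pslq([mpf(1), sqrt(2)]))               # None: 1 and sqrt(2) are rationally independent
```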
See also
Baker's theorem
Dehn invariant
Gelfond–Schneider theorem
Hamel basis
Hodge conjecture
Lindemann–Weierstrass theorem
Linear flow on the torus
Schanuel's conjecture
Bibliography
Dynamical systems | Rational dependence | [
"Physics",
"Mathematics"
] | 265 | [
"Mechanics",
"Dynamical systems"
] |
17,905,842 | https://en.wikipedia.org/wiki/Linear%20flow%20on%20the%20torus | In mathematics, especially in the area of mathematical analysis known as dynamical systems theory, a linear flow on the torus is a flow on the n-dimensional torus
,
which is represented by the following differential equations with respect to the standard angular coordinates
.
The solution of these equations can explicitly be expressed as
.
If we represent the torus as we see that a starting point is moved by the flow in the direction at constant speed and when it reaches the border of the unitary -cube it jumps to the opposite face of the cube.
For a linear flow on the torus, all orbits are either periodic or dense on a subset of the -torus, which is a -torus. When the components of are rationally independent all the orbits are dense on the whole space. This can be easily seen in the two-dimensional case: if the two components of are rationally independent, the Poincaré section of the flow on an edge of the unit square is an irrational rotation on a circle and therefore its orbits are dense on the circle, as a consequence the orbits of the flow must be dense on the torus.
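The two-dimensional case is easy to explore numerically. The sketch below samples the flow in the mod-1 (unit square) representation and counts how many cells of a coarse grid the orbit visits; with an irrational frequency ratio the fraction approaches 1 (assumptions: unit speed ω₁ = 1, and our own grid-counting heuristic as a stand-in for denseness):

```python
import numpy as np

def visited_cells(omega2: float, t_max=5000.0, dt=0.01, grid=20) -> float:
    """Fraction of grid cells visited by the orbit (t, omega2 * t) mod 1."""
    t = np.arange(0.0, t_max, dt)
    theta1, theta2 = t % 1.0, (omega2 * t) % 1.0
    cells = set(zip((theta1 * grid).astype(int), (theta2 * grid).astype(int)))
    return len(cells) / grid**2

print(visited_cells(0.5))         # rational slope: a closed orbit, few cells visited
print(visited_cells(np.sqrt(2)))  # irrational slope: fraction close to 1
```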
Irrational winding of a torus
In topology, an irrational winding of a torus is a continuous injection of a line into a two-dimensional torus that is used to set up several counterexamples. A related notion is the Kronecker foliation of a torus, a foliation formed by the set of all translates of a given irrational winding.
Definition
One way of constructing a torus is as the quotient space $\mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2$ of a two-dimensional real vector space by the additive subgroup of integer vectors, with the corresponding projection $\pi : \mathbb{R}^2 \to \mathbb{T}^2$. Each point in the torus has as its preimage one of the translates of the square lattice $\mathbb{Z}^2$ in $\mathbb{R}^2$, and $\pi$ factors through a map that takes any point in the plane to a point in the unit square given by the fractional parts of the original point's Cartesian coordinates.

Now consider a line in $\mathbb{R}^2$ given by the equation $y = kx$. If the slope $k$ of the line is rational, it can be represented by a fraction and a corresponding lattice point of $\mathbb{Z}^2$. It can be shown that then the projection of this line is a simple closed curve on a torus.

If, however, $k$ is irrational, it will not cross any lattice points except 0, which means that its projection on the torus will not be a closed curve, and the restriction of $\pi$ on this line is injective. Moreover, it can be shown that the image of this restricted projection as a subspace, called the irrational winding of a torus, is dense in the torus.
Applications
Irrational windings of a torus may be used to set up counter-examples related to monomorphisms. An irrational winding is an immersed submanifold but not a regular submanifold of the torus, which shows that the image of a manifold under a continuous injection to another manifold is not necessarily a (regular) submanifold. Irrational windings are also examples of the fact that the topology of the submanifold does not have to coincide with the subspace topology of the submanifold.
Secondly, the torus can be considered as a Lie group $U(1) \times U(1)$, and the line can be considered as $\mathbb{R}$. It is then easy to show that the image of the continuous and analytic group homomorphism $x \mapsto (e^{2\pi i x}, e^{2\pi i k x})$ is not a regular submanifold for irrational $k$, although it is an immersed submanifold, and therefore a Lie subgroup. It may also be used to show that if a subgroup $H$ of the Lie group $G$ is not closed, the quotient $G/H$ does not need to be a manifold and might even fail to be a Hausdorff space.
See also
Notes
References
Bibliography
General topology
Lie groups
Topological spaces
Dynamical systems
Ergodic theory | Linear flow on the torus | [
"Physics",
"Mathematics"
] | 751 | [
"General topology",
"Lie groups",
"Mathematical structures",
"Space (mathematics)",
"Ergodic theory",
"Topological spaces",
"Topology",
"Mechanics",
"Algebraic structures",
"Dynamical systems"
] |
3,240,033 | https://en.wikipedia.org/wiki/Toda%20lattice | The Toda lattice, introduced by , is a simple model for a one-dimensional crystal in solid state physics. It is famous because it is one of the earliest examples of a non-linear completely integrable system.
It is given by a chain of particles with nearest neighbor interaction, described by the Hamiltonian

$$H(p, q) = \sum_{n \in \mathbb{Z}} \left( \frac{p(n,t)^2}{2} + V\big(q(n+1,t) - q(n,t)\big) \right)$$

and the equations of motion

$$\frac{d}{dt} p(n,t) = -\frac{\partial H}{\partial q(n,t)} = e^{-(q(n,t) - q(n-1,t))} - e^{-(q(n+1,t) - q(n,t))},$$
$$\frac{d}{dt} q(n,t) = \frac{\partial H}{\partial p(n,t)} = p(n,t),$$

where $q(n,t)$ is the displacement of the $n$-th particle from its equilibrium position, $p(n,t)$ is its momentum (mass $m = 1$), and the Toda potential is $V(r) = e^{-r} + r - 1$.
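These equations of motion are straightforward to integrate numerically. Below is a minimal sketch using a symplectic leapfrog scheme on a periodic chain (the chain length, initial data, and step size are arbitrary illustrative choices):

```python
import numpy as np

def toda_forces(q: np.ndarray) -> np.ndarray:
    """dp/dt = exp(-(q_n - q_{n-1})) - exp(-(q_{n+1} - q_n)), periodic chain."""
    return np.exp(-(q - np.roll(q, 1))) - np.exp(-(np.roll(q, -1) - q))

N, dt, steps = 32, 0.01, 10_000
q = 0.5 * np.sin(2 * np.pi * np.arange(N) / N)  # smooth initial displacement
p = np.zeros(N)

for _ in range(steps):  # leapfrog (kick-drift-kick)
    p += 0.5 * dt * toda_forces(q)
    q += dt * p
    p += 0.5 * dt * toda_forces(q)

def energy(p: np.ndarray, q: np.ndarray) -> float:
    r = np.roll(q, -1) - q
    return float(np.sum(0.5 * p**2 + np.exp(-r) + r - 1.0))

print(energy(p, q))  # conserved to high accuracy by the symplectic integrator
```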
Soliton solutions
Soliton solutions are solitary waves spreading in time with no change to their shape and size and interacting with each other in a particle-like way. The general N-soliton solution of the equation can be written down in closed form; it is parametrized by N wave numbers, which fix the speed and width of the individual solitons, and N phase constants, which fix their positions, and for large times it decomposes into N single solitons.
Integrability
The Toda lattice is a prototypical example of a completely integrable system. To see this one uses Flaschka's variables

$$a(n,t) = \tfrac{1}{2} e^{-(q(n+1,t) - q(n,t))/2}, \qquad b(n,t) = -\tfrac{1}{2} p(n,t),$$

such that the Toda lattice reads

$$\dot{a}(n,t) = a(n,t)\big(b(n+1,t) - b(n,t)\big),$$
$$\dot{b}(n,t) = 2\big(a(n,t)^2 - a(n-1,t)^2\big).$$
To show that the system is completely integrable, it suffices to find a Lax pair, that is, two operators L(t) and P(t) in the Hilbert space of square summable sequences such that the Lax equation

$$\frac{d}{dt} L(t) = [P(t), L(t)]$$

(where [L, P] = LP - PL is the Lie commutator of the two operators) is equivalent to the time derivative of Flaschka's variables. The choice

$$L(t) f(n) = a(n,t) f(n+1) + a(n-1,t) f(n-1) + b(n,t) f(n),$$
$$P(t) f(n) = a(n,t) f(n+1) - a(n-1,t) f(n-1),$$

where f(n+1) and f(n-1) are the shift operators, implies that the operators L(t) for different t are unitarily equivalent.
The matrix L(t) has the property that its eigenvalues are invariant in time. These eigenvalues constitute independent integrals of motion; therefore the Toda lattice is completely integrable.
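This conservation of the spectrum is easy to verify numerically for a finite, open chain, where L(t) becomes a symmetric tridiagonal (Jacobi) matrix (a sketch; the RK4 integrator, boundary convention a_0 = a_N = 0, and random initial data are our own illustrative choices):

```python
import numpy as np

def flaschka_rhs(y: np.ndarray, N: int) -> np.ndarray:
    """Open-chain Flaschka equations with the convention a_0 = a_N = 0."""
    a, b = y[:N-1], y[N-1:]
    a_pad = np.concatenate(([0.0], a, [0.0]))  # a_0, a_1, ..., a_N
    da = a * (b[1:] - b[:-1])
    db = 2.0 * (a_pad[1:]**2 - a_pad[:-1]**2)
    return np.concatenate((da, db))

def rk4_step(y: np.ndarray, dt: float, N: int) -> np.ndarray:
    k1 = flaschka_rhs(y, N)
    k2 = flaschka_rhs(y + 0.5 * dt * k1, N)
    k3 = flaschka_rhs(y + 0.5 * dt * k2, N)
    k4 = flaschka_rhs(y + dt * k3, N)
    return y + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

def jacobi_eigs(y: np.ndarray, N: int) -> np.ndarray:
    a, b = y[:N-1], y[N-1:]
    return np.sort(np.linalg.eigvalsh(np.diag(b) + np.diag(a, 1) + np.diag(a, -1)))

N = 8
rng = np.random.default_rng(0)
y = np.concatenate((0.5 + 0.1 * rng.random(N-1), 0.1 * rng.random(N)))

eigs0 = jacobi_eigs(y, N)
for _ in range(2000):
    y = rk4_step(y, 0.005, N)
print(np.max(np.abs(jacobi_eigs(y, N) - eigs0)))  # close to zero, up to integration error
```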
In particular, the Toda lattice can be solved by virtue of the inverse scattering transform for the Jacobi operator L. The main result implies that arbitrary (sufficiently fast) decaying initial conditions asymptotically for large t split into a sum of solitons and a decaying dispersive part.
See also
Lax pair
Lie bialgebra
Poisson–Lie group
References
Eugene Gutkin, Integrable Hamiltonians with Exponential Potential, Physica 16D (1985) 398-404.
External links
E. W. Weisstein, Toda Lattice at ScienceWorld
G. Teschl, The Toda Lattice
J Phys A Special issue on fifty years of the Toda lattice
Exactly solvable models
Integrable systems
Solitons
Lattice models | Toda lattice | [
"Physics",
"Materials_science"
] | 508 | [
"Integrable systems",
"Theoretical physics",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics"
] |
3,240,412 | https://en.wikipedia.org/wiki/Lumicera | Lumicera is a transparent ceramic developed by Murata Manufacturing Co., Ltd.
Murata Manufacturing first developed transparent polycrystalline ceramics in February 2001. This polycrystalline ceramic is a type of dielectric resonator material commonly used in microwaves and millimeter waves. While offering superior electrical properties, high levels of transmissivity, and refractive index, it also has good optical characteristics without birefringence.
Normally, ceramics are opaque because pores are formed at triple points where grains intersect, causing scattering of incident light. Murata has optimized the entire development process of making dense and homogenous ceramics to improve their performance.
Under recommendations from Casio, the material itself has been refined for use in digital camera optical lenses by endowing it with improved transmission of short wavelength light and by reducing pores inside ceramics that reduce transparency.
Lumicera has the same light transmitting qualities as optical glass commonly used in today's conventional camera lenses; however, it has a refractive index (nd = 2.08 at 587 nm) much greater than that of optical glass (nd = 1.5–1.85) and offers superior strength. The Lumicera Z variant is described as a barium-oxide-based material that does not contain any environmentally hazardous materials (e.g. lead).
Lumicera is transparent at wavelengths up to 10 micrometers, making it useful for instruments operating in the mid-infrared spectrum.
Lumicera is a trademark of Murata Manufacturing Co., Ltd.
Lumicera is used in some Casio Exilim cameras, where it allowed 20% reduction of the lens profile.
References
Optical materials
Ceramic materials
Ceramic engineering
Transparent materials | Lumicera | [
"Physics",
"Engineering"
] | 350 | [
"Physical phenomena",
"Optical phenomena",
"Materials",
"Optical materials",
"Ceramic materials",
"Transparent materials",
"Ceramic engineering",
"Matter"
] |
3,241,970 | https://en.wikipedia.org/wiki/Mikheyev%E2%80%93Smirnov%E2%80%93Wolfenstein%20effect | The Mikheyev–Smirnov–Wolfenstein effect (often referred to as the matter effect) is a particle physics process which modifies neutrino oscillations in matter of varying density. The MSW effect is broadly analogous to the differential retardation of sound waves in density-variable media, however it also involves the propagation dynamics of three separate quantum fields which experience distortion.
In free space, the different phase rotation rates of the neutrino mass eigenstates lead to standard neutrino flavor oscillation. Within matter – such as within the Sun – the analysis is more complicated, as shown by Mikheyev, Smirnov and Wolfenstein. It leads to a wide admixture of emanating neutrino flavors, which provides a compelling solution to the solar neutrino problem.
Works in 1978 and 1979 by American physicist Lincoln Wolfenstein led to understanding that the oscillation parameters of neutrinos are changed in matter. In 1985, the Soviet physicists Stanislav Mikheyev and Alexei Smirnov predicted that a slow decrease of the density of matter can resonantly enhance the neutrino mixing. Later in 1986, Stephen Parke of Fermilab, Hans Bethe of Cornell University, and S. Peter Rosen and James Gelb of Los Alamos National Laboratory provided analytic treatments of this effect.
Summary
The presence of electrons in matter changes the instantaneous Hamiltonian eigenstates (mass eigenstates) of neutrinos due to the charged current's elastic forward scattering of the electron neutrinos (i.e., weak interactions). This coherent forward scattering is analogous to the electromagnetic process leading to the refractive index of light in a medium and can be described either as a classical refractive index or as an effective potential. The difference of the potentials for electron neutrinos and for the other flavors,

$$V = \sqrt{2}\, G_F n_e,$$

where $G_F$ is the Fermi coupling constant and $n_e$ the electron number density, induces the evolution of mixed neutrino flavors (either electron, muon, or tau).
In the presence of matter, the Hamiltonian of the system changes with respect to the potential: $H = H_0 + V$, where $H_0$ is the Hamiltonian in vacuum. Correspondingly, the mass eigenstates and eigenvalues of $H$ change, which means that the neutrinos in matter now have a different effective mass than they did in vacuum: $m^2 \to m^2_{\mathrm{eff}}$. Since neutrino oscillations depend upon the squared mass difference of the neutrinos, neutrino oscillations experience different dynamics than they did in vacuum.
Similar to the vacuum case, the mixing angle describes the change of flavors of the eigenstates. In matter, the mixing angle $\theta_m$ depends on the number density of electrons $n_e$ and the energy of the neutrinos $E$:

$$\sin^2 2\theta_m = \frac{\sin^2 2\theta}{\sin^2 2\theta + (\cos 2\theta - A)^2}, \qquad A = \frac{2\sqrt{2}\, G_F n_e E}{\Delta m^2}.$$

As the neutrinos propagate through density-variant matter, $\theta_m$ changes – and with it, the flavors of the eigenstates.
With antineutrinos, the conceptual point is the same but the effective charge that the weak interaction couples to (called weak isospin) has an opposite sign. If the electron density of matter changes along the path of neutrinos, the mixing of neutrinos grows to maximum at some value of the density, and then turns back; it leads to resonant conversion of one type of neutrinos to another one.
The effect is important at the very large electron densities of the Sun where electron neutrinos are produced. The high-energy neutrinos seen, for example, in Sudbury Neutrino Observatory (SNO) and in Super-Kamiokande, are produced mainly as the higher mass eigenstate in matter , and remain as such as the density of solar material changes. Thus, the neutrinos of high energy leaving the Sun are in a vacuum propagation eigenstate, , that has a reduced overlap with the electron neutrino seen by charged current reactions in the detectors.
Resonance in the MSW effect
Neutrino flavor mixing experiences resonance and becomes maximal under certain conditions of the relationship between the vacuum oscillation length

$$l_\nu = \frac{4\pi E}{\Delta m^2}$$

and the matter density-dependent refraction length

$$l_0 = \frac{2\pi}{\sqrt{2}\, G_F n_e},$$

where $G_F$ is the Fermi coupling constant. The refraction length is understood as the distance over which the matter "phase" from the coherent scattering is equal to $2\pi$.

The resonance condition is given by

$$\frac{l_\nu}{l_0} = \cos 2\theta,$$

which is when the neutrino system experiences resonance and the mixing becomes maximal. For very small $\theta$ this condition becomes $l_\nu \approx l_0$, that is, the eigenfrequency for a system of mixed neutrinos becomes approximately equal to the eigenfrequency of the medium.

The resonance density is informed by the resonance condition,

$$n_e^R = \frac{\Delta m^2 \cos 2\theta}{2\sqrt{2}\, G_F E},$$

and is directly related to the number density of electrons. If the vacuum mixing approaches its maximal value ($\theta \to \pi/4$), the resonance density goes to zero. In a medium with fluctuating density, $n_e^R$ itself fluctuates – the interval between its maximum and minimum values is called the resonance layer.
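These relations are easy to evaluate numerically. The sketch below computes the effective mixing angle and the resonance electron density for representative solar-neutrino parameters (assumptions: the conversion constant √2 G_F N_A ≈ 7.63×10⁻¹⁴ eV per mol/cm³ of electrons, and rounded, typical published oscillation parameters):

```python
import numpy as np

SQRT2_GF = 7.63e-14  # eV per (mol/cm^3) of electrons: sqrt(2) * G_F * N_A, rounded

def sin2_2theta_matter(E_eV, ne_mol_cm3, dm2_eV2, sin2_2theta_vac):
    """Effective mixing sin^2(2 theta_m) in matter of electron density ne."""
    cos2t = np.sqrt(1.0 - sin2_2theta_vac)
    A = 2.0 * E_eV * SQRT2_GF * ne_mol_cm3 / dm2_eV2
    return sin2_2theta_vac / (sin2_2theta_vac + (cos2t - A) ** 2)

dm2, s22 = 7.5e-5, 0.85  # typical solar values: Delta m^2 in eV^2, sin^2(2 theta)
E = 10e6                 # a 10 MeV 8B solar neutrino, in eV
ne_res = dm2 * np.sqrt(1 - s22) / (2 * E * SQRT2_GF)
print(f"resonance density ~ {ne_res:.0f} mol/cm^3 of electrons")
print(sin2_2theta_matter(E, ne_res, dm2, s22))  # exactly 1 at resonance
```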
Solar neutrinos and the MSW effect
For high-energy solar neutrinos the MSW effect is important, and leads to the expectation that $P_{ee} \approx \sin^2\theta$, where $\theta$ is the solar mixing angle. This was dramatically confirmed in the Sudbury Neutrino Observatory (SNO), which has resolved the solar neutrino problem. SNO measured the flux of solar electron neutrinos to be ~34% of the total neutrino flux (the electron neutrino flux measured via the charged current reaction, and the total flux via the neutral current reaction). The SNO results agree well with the expectations. Earlier, Kamiokande and Super-Kamiokande measured a mixture of charged current and neutral current reactions, that also support the occurrence of the MSW effect with a similar suppression, but with less confidence.
For the low-energy solar neutrinos, on the other hand, the matter effect is negligible, and the formalism of oscillations in vacuum is valid. The size of the source (i.e. the solar core) is significantly larger than the oscillation length; therefore, averaging over the oscillation factor, one obtains $P_{ee} = 1 - \tfrac{1}{2}\sin^2 2\theta$. For $\theta$ = 34° this corresponds to a survival probability of Pee ≈ 60%. This is consistent with the experimental observations of low energy solar neutrinos by the Homestake experiment (the first experiment to reveal the solar neutrino problem), followed by GALLEX, GNO, and SAGE (collectively, gallium radiochemical experiments), and, more recently, the Borexino experiment, which observed the neutrinos from pp (< 420 keV), 7Be (862 keV), pep (1.44 MeV), and 8B (< 15 MeV) separately. The measurements of Borexino alone verify the MSW pattern; however all these experiments are consistent with each other and provide strong evidence of the MSW effect.
These results are further supported by the reactor experiment KamLAND, that is uniquely able to measure the parameters of oscillation that are also consistent with all other measurements.
The transition between the low energy regime (the MSW effect is negligible) and the high energy regime (the oscillation probability is determined by matter effects) lies in the region of about 2 MeV for the solar neutrinos.
The MSW effect can also modify neutrino oscillations in the Earth, and future search for new oscillations and/or leptonic CP violation may make use of this property.
Supernova neutrinos and the MSW effect
Supernovae are calculated to emit of the order of 10^58 neutrinos and antineutrinos of all flavors; supernova neutrinos carry away about 99% of the gravitational energy of the supernova and are considered the strongest source of cosmic neutrinos in the MeV range. As such, scientists have attempted to simulate and mathematically characterize the action of MSW dynamics on SN neutrinos.
Some effect of MSW flavor conversion has already been observed in SN 1987A. In the case of a normal neutrino mass hierarchy, flavor transitions occurred inside the star, and the neutrinos then oscillated inside the Earth. Due to the differences in the distance traveled by neutrinos to Kamiokande, IMB and Baksan within the Earth, the MSW effect can partially explain the difference of the Kamiokande and IMB energy spectra of events.
See also
Neutrino oscillations
References
Bibliography
Neutrinos
Astroparticle physics | Mikheyev–Smirnov–Wolfenstein effect | [
"Physics"
] | 1,748 | [
"Astroparticle physics",
"Particle physics",
"Astrophysics"
] |
3,242,434 | https://en.wikipedia.org/wiki/Retinal%20implant | A retinal implant is a visual prosthesis for restoration of sight to patients blinded by retinal degeneration. The system is meant to partially restore useful vision to those who have lost their photoreceptors due to retinal diseases such as retinitis pigmentosa (RP) or age-related macular degeneration (AMD). Retinal implants are being developed by a number of private companies and research institutions, and three types are in clinical trials: epiretinal (on the retina), subretinal (behind the retina), and suprachoroidal (between the choroid and the sclera). The implants introduce visual information into the retina by electrically stimulating the surviving retinal neurons. So far, elicited percepts had rather low resolution, and may be suitable for light perception and recognition of simple objects.
History
Foerster was the first to discover that electrical stimulation of the occipital cortex could be used to create visual percepts, phosphenes. The first application of an implantable stimulator for vision restoration was developed by Drs. Brindley and Lewin in 1968. This experiment demonstrated the viability of creating visual percepts using direct electrical stimulation, and it motivated the development of several other implantable devices for stimulation of the visual pathway, including retinal implants. Retinal stimulation devices, in particular, have become a focus of research as approximately half of all cases of blindness are caused by retinal damage. The development of retinal implants has also been motivated in part by the advancement and success of cochlear implants, which has demonstrated that humans can regain significant sensory function with limited input.
The Argus II retinal implant, manufactured by Second Sight Medical Products received market approval in the US in Feb 2013 and in Europe in Feb 2011, becoming the first approved implant. A subretinal device, the Alpha IMS, was developed in Germany by Retina Implant AG. It completed a multi-centre clinical trial in Europe and was awarded a CE Mark in 2013, making it the first wireless subretinal electronic device to gain approval.
Candidates
Optimal candidates for retinal implants have retinal diseases, such as retinitis pigmentosa or age-related macular degeneration. These diseases cause blindness by affecting the photoreceptor cells in the outer layer of the retina, while leaving the inner and middle retinal layers intact. Minimally, a patient must have an intact ganglion cell layer in order to be a candidate for a retinal implant. This can be assessed non-invasively using optical coherence tomography (OCT) imaging. Other factors, including the amount of residual vision, overall health, and family commitment to rehabilitation, are also considered when determining candidates for retinal implants. In subjects with age-related macular degeneration, who may have intact peripheral vision, retinal implants could result in a hybrid form of vision. In this case the implant would supplement the remaining peripheral vision with central vision information.
Types
There are two main types of retinal implants by placement. Epiretinal implants are placed in the internal surface of the retina, while subretinal implants are placed between the outer retinal layer and the retinal pigment epithelium.
Epiretinal implants
Design principles
Epiretinal implants are placed on top of the retinal surface, above the nerve fiber layer, directly stimulating ganglion cells and bypassing all other retinal layers. The array of electrodes is stabilized on the retina using micro tacks which penetrate into the sclera. Typically, an external video camera mounted on eyeglasses acquires images and transmits processed video information to the stimulating electrodes via wireless telemetry. An external transmitter is also required to provide power to the implant via radio-frequency induction coils or infrared lasers. The real-time image processing involves reducing the resolution, enhancing contrast, detecting the edges in the image and converting it into a spatio-temporal pattern of stimulation delivered to the electrode array on the retina. The majority of electronics can be incorporated into the associated external components, allowing for a smaller implant and simpler upgrades without additional surgery. The external electronics provide full control over the image processing for each patient.
Advantages
Epiretinal implants directly stimulate the retinal ganglion cells, thereby bypassing all other retinal layers. Therefore, in principle, epiretinal implants could provide visual perception to individuals even if all other retinal layers have been damaged.
Disadvantages
Since the nerve fiber layer has a similar stimulation threshold to that of the retinal ganglion cells, axons passing under the epiretinal electrodes are stimulated, creating arcuate percepts, and thereby distorting the retinotopic map. So far, none of the epiretinal implants have had light-sensitive pixels, and hence they rely on an external camera to capture the visual information. Therefore, unlike natural vision, eye movements do not shift the transmitted image on the retina, which creates a perception of a moving object when a person with such an implant changes the direction of gaze. For this reason, patients with such implants are asked to not move their eyes, but rather scan the visual field with their head. Additionally, encoding visual information at the ganglion cell layer requires very sophisticated image processing techniques in order to account for the various types of retinal ganglion cells encoding different features of the image.
Clinical study
The first epiretinal implant, the ARGUS device, included a silicon-platinum array with 16 electrodes. The Phase I clinical trial of ARGUS began in 2002 by implanting six participants with the device. All patients reported gaining a perception of light and discrete phosphenes, with the visual function of some patients improving significantly over time. Future versions of the ARGUS device are being developed with increasingly dense electrode arrays, allowing for improved spatial resolution. The most recent ARGUS II device contains 60 electrodes, and a 200-electrode device is under development by ophthalmologists and engineers at the USC Eye Institute. The ARGUS II device received marketing approval in February 2011 (CE Mark demonstrating safety and performance), and it is available in Germany, France, Italy, and the UK. Interim results from long-term trials in 30 patients were published in Ophthalmology in 2012. Argus II received approval from the US FDA on April 14, 2013.
Another epiretinal device, the Learning Retinal Implant, has been developed by IIP technologies GmbH, and has begun to be evaluated in clinical trials. A third epiretinal device, EPI-RET, has been developed and progressed to clinical testing in six patients. The EPI-RET device contains 25 electrodes and requires the crystalline lens to be replaced with a receiver chip. All subjects have demonstrated the ability to discriminate between different spatial and temporal patterns of stimulation.
Subretinal implants
Design principles
Subretinal implants sit on the outer surface of the retina, between the photoreceptor layer and the retinal pigment epithelium, directly stimulating retinal cells and relying on the normal processing of the inner and middle retinal layers. Adhering a subretinal implant in place is relatively simple, as the implant is mechanically constrained by the minimal distance between the outer retina and the retinal pigment epithelium. A subretinal implant consists of a silicon wafer containing light sensitive microphotodiodes, which generate signals directly from the incoming light. Incident light passing through the retina generates currents within the microphotodiodes, which directly inject the resultant current into the underlying retinal cells via arrays of microelectrodes. The pattern of microphotodiodes activated by incident light therefore stimulates a pattern of bipolar, horizontal, amacrine, and ganglion cells, leading to a visual perception representative of the original incident image. In principle, subretinal implants do not require any external hardware beyond the implanted microphotodiodes array. However, some subretinal implants require power from external circuitry to enhance the image signal.
Advantages
A subretinal implant is advantageous over an epiretinal implant in part because of its simpler design. The light acquisition, processing, and stimulation are all carried out by microphotodiodes mounted onto a single chip, as opposed to the external camera, processing chip, and implanted electrode array associated with an epiretinal implant. The subretinal placement is also more straightforward, as it places the stimulating array directly adjacent to the damaged photoreceptors. By relying on the function of the remaining retinal layers, subretinal implants allow for normal inner retinal processing, including amplification, thus resulting in an overall lower threshold for a visual response. Additionally, subretinal implants enable subjects to use normal eye movements to shift their gaze. The retinotopic stimulation from subretinal implants is inherently more accurate, as the pattern of incident light on the microphotodiodes is a direct reflection of the desired image. Subretinal implants require minimal fixation, as the subretinal space is mechanically constrained and the retinal pigment epithelium creates negative pressure within the subretinal space.
Disadvantages
The main disadvantage of subretinal implants is the lack of sufficient incident light to enable the microphotodiodes to generate adequate current. Thus, subretinal implants often incorporate an external power source to amplify the effect of incident light. The compact nature of the subretinal space imposes significant size constraints on the implant. The close proximity between the implant and the retina also increases the possibility of thermal damage to the retina from heat generated by the implant. Subretinal implants require intact inner and middle retinal layers, and therefore are not beneficial for retinal diseases extending beyond the outer photoreceptor layer. Additionally, photoreceptor loss can result in the formation of a membrane at the boundary of the damaged photoreceptors, which can impede stimulation and increase the stimulation threshold.
Clinical studies
Optobionics was the first company to develop a subretinal implant and evaluate the design in a clinical trial. Initial reports indicated that the implantation procedure was safe, and all subjects reported some perception of light and mild improvement in visual function. The current version of this device has been implanted in 10 patients, who have each reported improvements in the perception of visual details, including contrast, shape, and movement. Retina Implant AG in Germany has also developed a subretinal implant, which has undergone clinical testing in nine patients. The trial was put on hold due to repeated device failures. The Retina Implant AG device contains 1500 microphotodiodes, allowing for increased spatial resolution, but requires an external power source. In February 2013, Retina Implant AG reported 12-month results of the Alpha IMS study in Proceedings of the Royal Society B, showing that six out of nine patients had a device failure in the nine months post implant, and that five of the eight subjects reported various implant-mediated visual perceptions in daily life. One had optic nerve damage and did not perceive stimulation. The Boston Subretinal Implant Project has also developed several iterations of a functional subretinal implant, and focused on short term analysis of implant function. Results from all clinical trials to date indicate that patients receiving subretinal implants report perception of phosphenes, with some gaining the ability to perform basic visual tasks, such as shape recognition and motion detection.
Spatial resolution
The quality of vision expected from a retinal implant is largely based on the maximum spatial resolution of the implant. Current prototypes of retinal implants are capable of providing low resolution, pixelated images.
"State-of-the-art" retinal implants incorporate 60-100 channels, sufficient for basic object discrimination and recognition tasks. However, simulations of the resultant pixelated images assume that all electrodes on the implant are in contact with the desired retinal cell; in reality the expected spatial resolution is lower, as a few of the electrodes may not function optimally. Tests of reading performance indicated that a 60-channel implant is sufficient to restore some reading ability, but only with significantly enlarged text. Similar experiments evaluating room navigation ability with pixelated images demonstrated that 60 channels were sufficient for experienced subjects, while naïve subjects required 256 channels. This experiment, therefore, not only demonstrated the functionality provided by low resolution visual feedback, but also the ability for subjects to adapt and improve over time. However, these experiments are based merely on simulations of low resolution vision in normal subjects, rather than clinical testing of implanted subjects. The number of electrodes necessary for reading or room navigation may differ in implanted subjects, and further testing needs to be conducted within this clinical population to determine the required spatial resolution for specific visual tasks.
Simulation results indicate that 600-1000 electrodes would be required to enable subjects to perform a wide variety of tasks, including reading, face recognition, and navigating around rooms. Thus, the available spatial resolution of retinal implants needs to increase by a factor of 10, while remaining small enough to implant, to restore sufficient visual function for those tasks. It is worth noting that high-density stimulation is not the same as high visual acuity (resolution), which depends on many factors in both hardware (electrodes and coatings) and software (stimulation strategies based on surgical results).
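Such resolution simulations can be approximated very roughly by downsampling an image to the electrode count, as in this sketch (the electrode grid sizes, random test image, and block-averaging method are our own illustrative choices, not the protocol of any cited study):

```python
import numpy as np

def simulate_electrode_view(image: np.ndarray, n_rows: int, n_cols: int) -> np.ndarray:
    """Block-average a grayscale image down to an n_rows x n_cols electrode grid."""
    rows = np.array_split(np.arange(image.shape[0]), n_rows)
    cols = np.array_split(np.arange(image.shape[1]), n_cols)
    return np.array([[image[np.ix_(r, c)].mean() for c in cols] for r in rows])

image = np.random.rand(240, 240)                    # stand-in for a camera frame
argus_like = simulate_electrode_view(image, 6, 10)  # ~60 channels
dense = simulate_electrode_view(image, 25, 40)      # ~1000 channels
print(argus_like.shape, dense.shape)
```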
Current status and future developments
Clinical reports to date have demonstrated mixed success, with all patients reporting at least some sensation of light from the electrodes, and a smaller proportion gaining more detailed visual function, such as identifying patterns of light and dark areas. The clinical reports indicate that, even with low resolution, retinal implants are potentially useful in providing crude vision to individuals who otherwise would not have any visual sensation. However, clinical testing in implanted subjects is somewhat limited and the majority of spatial resolution simulation experiments have been conducted in normal controls. It remains unclear whether the low level vision provided by current retinal implants is sufficient to balance the risks associated with the surgical procedure, especially for subjects with intact peripheral vision. Several other aspects of retinal implants need to be addressed in future research, including the long term stability of the implants and the possibility of retinal neuron plasticity in response to prolonged stimulation.
The Manchester Royal Infirmary and Prof Paulo E Stanga announced on July 22, 2015, the first successful implantation of Second Sight's Argus II in patients with severe Age Related Macular Degeneration. These results are very impressive as it appears that the patients integrate the residual vision and the artificial vision. It potentially opens the use of retinal implants to millions of patients with AMD.
See also
Retinal regeneration
References
External links
Japan Retinal Implant Project
- The Retinal Implant Project - rle.mit.edu
National Eye Institute of the National Institutes of Health (NIH)
Biomedical engineering
Neuroprosthetics
Implants (medicine)
Artificial organs
Blindness
Eye
Prosthetics
Medical devices | Retinal implant | [
"Engineering",
"Biology"
] | 3,176 | [
"Biological engineering",
"Biomedical engineering",
"Artificial organs",
"Medical devices",
"Medical technology"
] |
3,244,102 | https://en.wikipedia.org/wiki/Watermaker | A watermaker is a device used to obtain potable water by reverse osmosis of seawater. In boating and yachting circles, desalinators are often referred to as "watermakers".
The devices can be expensive to acquire and maintain, but are quite valuable because they reduce the need for large water tanks for a long passage.
The term watermaker may also refer to an atmospheric water generator, a machine that extracts potable water from the humidity in air using a refrigeration or a desiccant.
Varieties
Many versions are used by long-distance ocean cruisers.
Depending on the design, watermakers can be powered by electricity from the battery bank, an engine, an AC generator or hand operated. There is a portable, towed, water-powered watermaker available which converts to hand operation in an emergency.
Water requirement
There is great variation in the amount of water consumed.
At home in the United States, each person uses about 55 gallons (208 liters) of water per day on average. Where supplies are limited, and in emergencies, much less may be used.
Typical cruising yachts use from 4 to 20 litres (1.05 to 5.28 gallons) per person per day, the average probably being about 6 litres (1.59 gallons). The minimum water intake required to maintain body hydration is 1.5 litres (0.4 gallons) per day. The amount of water a person needs to consume depends on several factors, including weight, height, and sex; men on average need a greater amount of water than women do.
Popular brands of yacht watermakers typically make from 2 to 150 litres per hour of operation (0.53 to 41 gallons) depending on the model.
There are strong opinions among small boat cruisers about the usefulness of these devices. The arguments may be summarised as:
Pros
A watermaker uses only a small amount of fuel to generate a large amount of water, eliminating the need for large, heavy water tanks.
The user is independent of shore-based water supplies, which is especially important in remote areas.
They provide safe water when shore-based water is of uncertain quality.
Some designs are portable and can be converted to manual operation in an emergency.
The hand-held unit offered by one manufacturer and the towed water-powered watermaker offered by another manufacturer can be transferred to a liferaft in an emergency.
Cons
They are expensive: Indicative costs are US$2,000 for the manual type, US$3,000 for the towed water-powered type, US$4,000 or more for an engine-driven type (designed to be fitted to the inboard motor of the vessel), and about the same for an AC generator-driven type.
Some types (but not all) are time-consuming and expensive to maintain.
They are power hungry, except for the hand-held emergency watermaker and the towed water-powered type, which avoid the problem of large electric current demand. The drawbacks of these non-electric designs are that manual operation is tiring for the operator and that the towed watermaker only works while the vessel is moving.
Some manufacturers of electrically powered watermakers have energy recovery systems in their design which reduce the power consumption; however, these are typically some 50% more expensive for any similar size due to their additional complexity. As a guideline, assuming a 12V DC system, the energy recovery incorporated in those watermakers have the effect of reducing the electric current used from perhaps typically 20A to about 8A. Like any piece of equipment, it is bound to fail at some time and cause expense/anxiety.
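The electrical cost of a given unit is easy to budget from such figures. The sketch below converts a current draw and production rate into daily battery drain on a 12 V system (the 30 L/h production rate and four-person crew are assumed examples; the 20 A and 8 A draws are the indicative figures quoted above):

```python
def amp_hours_per_litre(current_amps: float, litres_per_hour: float) -> float:
    """Battery drain per litre of fresh water produced."""
    return current_amps / litres_per_hour

daily_need_l = 4 * 6  # four crew at ~6 litres per person per day
for amps in (20, 8):  # without and with energy recovery
    ah = amp_hours_per_litre(amps, 30.0) * daily_need_l
    print(f"{amps} A unit: {ah:.1f} Ah from the 12 V bank per day")
```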
Technology
All watermakers designed for small boats and yachts rely on essentially the same technology, exploiting the principle of "reverse osmosis": a high pressure pump forcing seawater through a membrane that allows water but not salt to pass.
The common comparison is that of a filter; however, as the holes in the membrane are smaller than molecules of sodium chloride (salt), and indeed smaller than bacteria, and pressures on the order of 45–50 bar are required, the process is much more complex than the common water filter or the oil filter found in automobile engines.
Atmospheric water generator
An atmospheric water generator is a machine that extracts potable water from the humidity in air using refrigeration or a desiccant. Condensing moisture by refrigeration requires a minimum ambient temperature, while desiccant adsorbers have no such restriction. Either method is suitable for a desert climate, where water production is dependent on ambient humidity. The Negev desert in Israel, for example, has a significant average relative humidity of 64%.
Contrary to some online sources, a 1922 article in Popular Science cites an average relative humidity of 30% for the Sahara Desert, about half the humidity in an air-conditioned home. Moreover, the effect of the dew point causes early mornings to have higher humidity, so that atmospheric water generation is possible even in the harshest climates.
References
Drinking water
Water treatment
Water supply
Membrane technology | Watermaker | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,039 | [
"Hydrology",
"Separation processes",
"Water treatment",
"Water pollution",
"Membrane technology",
"Environmental engineering",
"Water technology",
"Water supply"
] |
3,245,447 | https://en.wikipedia.org/wiki/Forced%20convection | Forced convection is a mechanism, or type of transport, in which fluid motion is generated by an external source (like a pump, fan, suction device, etc.). Alongside natural convection, thermal radiation, and thermal conduction it is one of the methods of heat transfer and allows significant amounts of heat energy to be transported very efficiently.
Applications
This mechanism is found very commonly in everyday life, including central heating and air conditioning and in many other machines. Forced convection is often encountered by engineers designing or analyzing heat exchangers, pipe flow, and flow over a plate at a different temperature than the stream (the case of a shuttle wing during re-entry, for example).
Mixed convection
In any forced convection situation, some amount of natural convection is always present whenever there are gravitational forces present (i.e., unless the system is in an inertial frame or free-fall). When the natural convection is not negligible, such flows are typically referred to as mixed convection.
Mathematical analysis
When analyzing potentially mixed convection, a parameter called the Archimedes number (Ar) parametrizes the relative strength of free and forced convection. The Archimedes number is the ratio of Grashof number and the square of Reynolds number, which represents the ratio of buoyancy force and inertia force, and which stands in for the contribution of natural convection. When Ar ≫ 1, natural convection dominates and when Ar ≪ 1, forced convection dominates.
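A minimal sketch of this classification; the Grashof and Reynolds values, and the cutoffs standing in for "much greater/less than unity", are illustrative choices:

```python
# Classify a convection regime with the Archimedes number, Ar = Gr / Re^2.
# Gr and Re below are placeholders; in practice they are computed from
# fluid properties, geometry, and flow velocity.

def archimedes_number(grashof: float, reynolds: float) -> float:
    return grashof / reynolds**2

def regime(ar: float) -> str:
    if ar > 10:
        return "natural convection dominates"
    if ar < 0.1:
        return "forced convection dominates"
    return "mixed convection"

ar = archimedes_number(grashof=1e9, reynolds=1e5)  # Ar = 0.1
print(ar, regime(ar))
```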
When natural convection isn't a significant factor, mathematical analysis with forced convection theories typically yields accurate results. The parameter of importance in forced convection is the Péclet number, which is the ratio of advection (movement by currents) and diffusion (movement from high to low concentrations) of heat.
When the Peclet number is much greater than unity (1), advection dominates diffusion. Similarly, much smaller ratios indicate a higher rate of diffusion relative to advection.
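For heat transfer, the Péclet number is Pe = uL/α, with flow speed u, characteristic length L, and thermal diffusivity α. A small worked example with assumed values for water in a narrow pipe:

```python
# Péclet number for heat transfer: Pe = u * L / alpha, the ratio of heat
# advection to heat diffusion. Values below are illustrative assumptions.

u = 0.05         # flow speed, m/s (assumed)
L = 0.01         # characteristic length, m (pipe diameter, assumed)
alpha = 1.43e-7  # thermal diffusivity of water, m^2/s

Pe = u * L / alpha
print(f"Pe = {Pe:.0f}")  # ~3500 >> 1: advection dominates diffusion
```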
See also
Convective heat transfer
Combined forced and natural convection
References
External links
Thermodynamics
Heat transfer | Forced convection | [
"Physics",
"Chemistry",
"Mathematics"
] | 420 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics",
"Dynamical systems"
] |
198,951 | https://en.wikipedia.org/wiki/CpG%20site | The CpG sites or CG sites are regions of DNA where a cytosine nucleotide is followed by a guanine nucleotide in the linear sequence of bases along its 5' → 3' direction. CpG sites occur with high frequency in genomic regions called CpG islands.
Cytosines in CpG dinucleotides can be methylated to form 5-methylcytosines. Enzymes that add a methyl group are called DNA methyltransferases. In mammals, 70% to 80% of CpG cytosines are methylated. Methylating the cytosine within a gene can change its expression, a mechanism that is part of a larger field of science studying gene regulation that is called epigenetics. Methylated cytosines often mutate to thymines.
In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island.
CpG characteristics
Definition
CpG is shorthand for 5'—C—phosphate—G—3', that is, cytosine and guanine separated by only one phosphate group; phosphate links any two nucleosides together in DNA. The CpG notation is used to distinguish this single-stranded linear sequence from the CG base-pairing of cytosine and guanine for double-stranded sequences. The CpG notation is therefore to be interpreted as the cytosine being 5 prime to the guanine base. CpG should not be confused with GpC, the latter meaning that a guanine is followed by a cytosine in the 5' → 3' direction of a single-stranded sequence.
Under-representation caused by high mutation rate
CpG dinucleotides have long been observed to occur with a much lower frequency in the sequence of vertebrate genomes than would be expected due to random chance. For example, in the human genome, which has a 42% GC content, a pair of nucleotides consisting of cytosine followed by guanine would be expected to occur 0.21 × 0.21 = 4.41% of the time. The frequency of CpG dinucleotides in human genomes is less than one-fifth of the expected frequency.
This underrepresentation is a consequence of the high mutation rate of methylated CpG sites: the spontaneously occurring deamination of a methylated cytosine results in a thymine, and the resulting G:T mismatched bases are often improperly resolved to A:T; whereas the deamination of unmethylated cytosine results in a uracil, which as a foreign base is quickly replaced by a cytosine by the base excision repair mechanism. The C to T transition rate at methylated CpG sites is ~10 fold higher than at unmethylated sites.
Genomic distribution
CpG dinucleotides frequently occur in CpG islands (see definition of CpG islands, below). There are 28,890 CpG islands in the human genome (50,267 if one includes CpG islands in repeat sequences). This is in agreement with the 28,519 CpG islands found by Venter et al., since the Venter et al. genome sequence did not include the interiors of highly similar repetitive elements and the extremely dense repeat regions near the centromeres. Since CpG islands contain multiple CpG dinucleotide sequences, there appear to be more than 20 million CpG dinucleotides in the human genome.
CpG islands
CpG islands (or CG islands) are regions with a high frequency of CpG sites. Though objective definitions for CpG islands are limited, the usual formal definition is a region with at least 200 bp, a GC percentage greater than 50%, and an observed-to-expected CpG ratio greater than 60%. The "observed-to-expected CpG ratio" can be derived where the observed is calculated as
$\text{Observed} = \text{number of CpGs},$
and the expected as
$\text{Expected} = \frac{\text{number of C} \times \text{number of G}}{\text{length of sequence}}$ or $\text{Expected} = \left( \frac{\text{number of C} + \text{number of G}}{2} \right)^{2} \bigg/ \text{length of sequence}.$
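A minimal check of these three criteria for a given sequence, using the first form of the expected value (a sketch only; real annotation pipelines scan sliding windows across whole chromosomes):

```python
# Check a DNA sequence against the usual formal CpG-island criteria:
# length >= 200 bp, GC% > 50, observed/expected CpG ratio > 0.6.

def is_cpg_island(seq: str) -> bool:
    seq = seq.upper()
    n = len(seq)
    if n < 200:
        return False
    c, g = seq.count("C"), seq.count("G")
    gc_percent = 100.0 * (c + g) / n
    observed = seq.count("CG")    # CpG dinucleotides (cannot overlap)
    expected = (c * g) / n        # first form of the expected value
    if expected == 0:
        return False
    return gc_percent > 50 and observed / expected > 0.6

print(is_cpg_island("CG" * 150))  # True: 300 bp, 100% GC, ratio ~2
```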
Many genes in mammalian genomes have CpG islands associated with the start of the gene (promoter regions). Because of this, the presence of a CpG island is used to help in the prediction and annotation of genes.
In mammalian genomes, CpG islands are typically 300–3,000 base pairs in length, and have been found in or near approximately 40% of promoters of mammalian genes. Over 60% of human genes and almost all house-keeping genes have their promoters embedded in CpG islands. Given the frequency of GC two-nucleotide sequences, the number of CpG dinucleotides is much lower than would be expected.
A 2002 study revised the rules of CpG island prediction to exclude other GC-rich genomic sequences such as Alu repeats. Based on an extensive search on the complete sequences of human chromosomes 21 and 22, DNA regions greater than 500 bp were found more likely to be the "true" CpG islands associated with the 5' regions of genes if they had a GC content greater than 55%, and an observed-to-expected CpG ratio of 65%.
CpG islands are characterized by CpG dinucleotide content of at least 60% of that which would be statistically expected (~4–6%), whereas the rest of the genome has much lower CpG frequency (~1%), a phenomenon called CG suppression. Unlike CpG sites in the coding region of a gene, in most instances the CpG sites in the CpG islands of promoters are unmethylated if the genes are expressed. This observation led to the speculation that methylation of CpG sites in the promoter of a gene may inhibit gene expression. Methylation, along with histone modification, is central to imprinting. Most of the methylation differences between tissues, or between normal and cancer samples, occur a short distance from the CpG islands (at "CpG island shores") rather than in the islands themselves.
CpG islands typically occur at or near the transcription start site of genes, particularly housekeeping genes, in vertebrates. A C (cytosine) base followed immediately by a G (guanine) base (a CpG) is rare in vertebrate DNA because the cytosines in such an arrangement tend to be methylated. This methylation helps distinguish the newly synthesized DNA strand from the parent strand, which aids in the final stages of DNA proofreading after duplication. However, over time methylated cytosines tend to turn into thymines because of spontaneous deamination. There is a special enzyme in humans (Thymine-DNA glycosylase, or TDG) that specifically replaces T's from T/G mismatches. However, due to the rarity of CpGs, it is theorised to be insufficiently effective in preventing a possibly rapid mutation of the dinucleotides. The existence of CpG islands is usually explained by the existence of selective forces for relatively high CpG content, or low levels of methylation in that genomic area, perhaps having to do with the regulation of gene expression. A 2011 study showed that most CpG islands are a result of non-selective forces.
Methylation, silencing, cancer, and aging
CpG islands in promoters
In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island.
Distal promoter elements also frequently contain CpG islands. An example is the DNA repair gene ERCC1, where the CpG island-containing element is located about 5,400 nucleotides upstream of the transcription start site of the ERCC1 gene. CpG islands also occur frequently in promoters for functional noncoding RNAs such as microRNAs.
Methylation of CpG islands stably silences genes
In humans, DNA methylation occurs at the 5 position of the pyrimidine ring of the cytosine residues within CpG sites to form 5-methylcytosines. The presence of multiple methylated CpG sites in CpG islands of promoters causes stable silencing of genes. Silencing of a gene may be initiated by other mechanisms, but this is often followed by methylation of CpG sites in the promoter CpG island to cause the stable silencing of the gene.
Promoter CpG hyper/hypo-methylation in cancer
In cancers, loss of expression of genes occurs about 10 times more frequently by hypermethylation of promoter CpG islands than by mutations. For example, in a colorectal cancer there are usually about 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. In contrast, in one study of colon tumors compared to adjacent normal-appearing colonic mucosa, 1,734 CpG islands were heavily methylated in tumors whereas these CpG islands were not methylated in the adjacent mucosa. Half of the CpG islands were in promoters of annotated protein coding genes, suggesting that about 867 genes in a colon tumor have lost expression due to CpG island methylation. A separate study found an average of 1,549 differentially methylated regions (hypermethylated or hypomethylated) in the genomes of six colon cancers (compared to adjacent mucosa), of which 629 were in known promoter regions of genes. A third study found more than 2,000 genes differentially methylated between colon cancers and adjacent mucosa. Using gene set enrichment analysis, 569 out of 938 gene sets were hypermethylated and 369 were hypomethylated in cancers. Hypomethylation of CpG islands in promoters results in overexpression of the genes or gene sets affected.
One 2012 study listed 147 specific genes with colon cancer-associated hypermethylated promoters, along with the frequency with which these hypermethylations were found in colon cancers. At least 10 of those genes had hypermethylated promoters in nearly 100% of colon cancers. They also indicated 11 microRNAs whose promoters were hypermethylated in colon cancers at frequencies between 50% and 100% of cancers. MicroRNAs (miRNAs) are small endogenous RNAs that pair with sequences in messenger RNAs to direct post-transcriptional repression. On average, each microRNA represses several hundred target genes. Thus microRNAs with hypermethylated promoters may be allowing over-expression of hundreds to thousands of genes in a cancer.
The information above shows that, in cancers, promoter CpG hyper/hypo-methylation of genes and of microRNAs causes loss of expression (or sometimes increased expression) of far more genes than does mutation.
DNA repair genes with hyper/hypo-methylated promoters in cancers
DNA repair genes are frequently repressed in cancers due to hypermethylation of CpG islands within their promoters. In head and neck squamous cell carcinomas at least 15 DNA repair genes have frequently hypermethylated promoters; these genes are XRCC1, MLH3, PMS1, RAD51B, XRCC3, RAD54B, BRCA1, SHFM1, GEN1, FANCE, FAAP20, SPRTN, SETMAR, HUS1, and PER1. About seventeen types of cancer are frequently deficient in one or more DNA repair genes due to hypermethylation of their promoters. As an example, promoter hypermethylation of the DNA repair gene MGMT occurs in 93% of bladder cancers, 88% of stomach cancers, 74% of thyroid cancers, 40%-90% of colorectal cancers and 50% of brain cancers. Promoter hypermethylation of LIG4 occurs in 82% of colorectal cancers. Promoter hypermethylation of NEIL1 occurs in 62% of head and neck cancers and in 42% of non-small-cell lung cancers. Promoter hypermethylation of ATM occurs in 47% of non-small-cell lung cancers. Promoter hypermethylation of MLH1 occurs in 48% of non-small-cell lung cancer squamous cell carcinomas. Promoter hypermethylation of FANCB occurs in 46% of head and neck cancers.
On the other hand, the promoters of two genes, PARP1 and FEN1, were hypomethylated and these genes were over-expressed in numerous cancers. PARP1 and FEN1 are essential genes in the error-prone and mutagenic DNA repair pathway microhomology-mediated end joining. If this pathway is over-expressed the excess mutations it causes can lead to cancer. PARP1 is over-expressed in tyrosine kinase-activated leukemias, in neuroblastoma, in testicular and other germ cell tumors, and in Ewing's sarcoma, FEN1 is over-expressed in the majority of cancers of the breast, prostate, stomach, neuroblastomas, pancreatic, and lung.
DNA damage appears to be the primary underlying cause of cancer. If accurate DNA repair is deficient, DNA damages tend to accumulate. Such excess DNA damage can increase mutational errors during DNA replication due to error-prone translesion synthesis. Excess DNA damage can also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations can give rise to cancer (see malignant neoplasms). Thus, CpG island hyper/hypo-methylation in the promoters of DNA repair genes are likely central to progression to cancer.
Methylation of CpG sites with age
Since age has a strong effect on DNA methylation levels on tens of thousands of CpG sites, one can define a highly accurate biological clock (referred to as epigenetic clock or DNA methylation age) in humans and chimpanzees.
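The best-known such clocks are penalized linear regressions of chronological age on methylation levels at selected CpG sites. A minimal sketch of that idea with scikit-learn, trained on random placeholder data (sample counts, site counts, and weights are illustrative, not a real trained clock):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Sketch of an "epigenetic clock": penalized linear regression of age on
# CpG methylation levels (beta values in [0, 1]). Data here are random
# placeholders; real clocks are trained on methylation arrays from donors.

rng = np.random.default_rng(0)
n_samples, n_cpgs = 500, 2000
X = rng.uniform(0, 1, size=(n_samples, n_cpgs))  # methylation beta values
true_weights = np.zeros(n_cpgs)
true_weights[:50] = rng.normal(0, 5, 50)         # only some sites age-related
age = X @ true_weights + rng.normal(0, 2, n_samples)

model = ElasticNetCV(cv=5).fit(X, age)           # selects sparse CpG weights
print("CpG sites used:", np.sum(model.coef_ != 0))
```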
Unmethylated sites
Unmethylated CpG dinucleotide sites can be detected by Toll-like receptor 9 (TLR 9) on plasmacytoid dendritic cells, monocytes, natural killer (NK) cells, and B cells in humans. This is used to detect intracellular viral infection.
Role of CpG sites in memory
In mammals, DNA methyltransferases (which add methyl groups to DNA bases) exhibit a sequence preference for cytosines within CpG sites. In the mouse brain, 4.2% of all cytosines are methylated, primarily in the context of CpG sites, forming 5mCpG. Most hypermethylated 5mCpG sites increase the repression of associated genes.
As reviewed by Duke et al., neuron DNA methylation (repressing expression of particular genes) is altered by neuronal activity. Neuron DNA methylation is required for synaptic plasticity; is modified by experiences; and active DNA methylation and demethylation is required for memory formation and maintenance.
In 2016 Halder et al. using mice, and in 2017 Duke et al. using rats, subjected the rodents to contextual fear conditioning, causing an especially strong long-term memory to form. At 24 hours after the conditioning, in the hippocampus brain region of rats, the expression of 1,048 genes was down-regulated (usually associated with 5mCpG in gene promoters) and the expression of 564 genes was up-regulated (often associated with hypomethylation of CpG sites in gene promoters). At 24 hours after training, 9.2% of the genes in the rat genome of hippocampus neurons were differentially methylated. However while the hippocampus is essential for learning new information it does not store information itself. In the mouse experiments of Halder, 1,206 differentially methylated genes were seen in the hippocampus one hour after contextual fear conditioning but these altered methylations were reversed and not seen after four weeks. In contrast with the absence of long-term CpG methylation changes in the hippocampus, substantial differential CpG methylation could be detected in cortical neurons during memory maintenance. There were 1,223 differentially methylated genes in the anterior cingulate cortex of mice four weeks after contextual fear conditioning.
Demethylation at CpG sites requires ROS activity
In adult somatic cells DNA methylation typically occurs in the context of CpG dinucleotides (CpG sites), forming 5-methylcytosine-pG, or 5mCpG. Reactive oxygen species (ROS) may attack guanine at the dinucleotide site, forming 8-hydroxy-2'-deoxyguanosine (8-OHdG), and resulting in a 5mCp-8-OHdG dinucleotide site. The base excision repair enzyme OGG1 targets 8-OHdG and binds to the lesion without immediate excision. OGG1, present at a 5mCp-8-OHdG site recruits TET1 and TET1 oxidizes the 5mC adjacent to the 8-OHdG. This initiates demethylation of 5mC.
As reviewed in 2018, in brain neurons, 5mC is oxidized by the ten-eleven translocation (TET) family of dioxygenases (TET1, TET2, TET3) to generate 5-hydroxymethylcytosine (5hmC). In successive steps TET enzymes further hydroxylate 5hmC to generate 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC). Thymine-DNA glycosylase (TDG) recognizes the intermediate bases 5fC and 5caC and excises the glycosidic bond resulting in an apyrimidinic site (AP site). In an alternative oxidative deamination pathway, 5hmC can be oxidatively deaminated by activity-induced cytidine deaminase/apolipoprotein B mRNA editing complex (AID/APOBEC) deaminases to form 5-hydroxymethyluracil (5hmU) or 5mC can be converted to thymine (Thy). 5hmU can be cleaved by TDG, single-strand-selective monofunctional uracil-DNA glycosylase 1 (SMUG1), Nei-Like DNA Glycosylase 1 (NEIL1), or methyl-CpG binding protein 4 (MBD4). AP sites and T:G mismatches are then repaired by base excision repair (BER) enzymes to yield cytosine (Cyt).
Two reviews summarize the large body of evidence for the critical and essential role of ROS in memory formation. The DNA demethylation of thousands of CpG sites during memory formation depends on initiation by ROS. In 2016, Zhou et al., showed that ROS have a central role in DNA demethylation.
TET1 is a key enzyme involved in demethylating 5mCpG. However, TET1 is only able to act on 5mCpG if an ROS has first acted on the guanine to form 8-hydroxy-2'-deoxyguanosine (8-OHdG), resulting in a 5mCp-8-OHdG dinucleotide (see first figure in this section). After formation of 5mCp-8-OHdG, the base excision repair enzyme OGG1 binds to the 8-OHdG lesion without immediate excision. Adherence of OGG1 to the 5mCp-8-OHdG site recruits TET1, allowing TET1 to oxidize the 5mC adjacent to 8-OHdG, as shown in the first figure in this section. This initiates the demethylation pathway shown in the second figure in this section.
Altered protein expression in neurons, controlled by ROS-dependent demethylation of CpG sites in gene promoters within neuron DNA, is central to memory formation.
CpG loss
CpG depletion has been observed in the DNA methylation of transposable elements (TEs), where TEs are responsible not only for genome expansion but also for CpG loss in the host DNA. TEs can act as "methylation centers": once a TE is inserted into the host DNA, methylation spreads from the TE into the flanking DNA. This spreading may subsequently result in CpG loss over evolutionary time; flanking DNA from evolutionarily older insertions shows greater CpG loss than that from younger ones. DNA methylation can therefore eventually lead to a noticeable loss of CpG sites in neighboring DNA.
Genome size and CpG ratio are negatively correlated
There is generally an inverse correlation between genome size and number of CpG islands, as larger genomes typically have a greater number of transposable elements. Selective pressure against TEs is substantially reduced if their expression is suppressed via methylation; furthermore, TEs can act as "methylation centres" facilitating methylation of flanking DNA. Since methylation reduces selective pressure on the nucleotide sequence, long-term methylation of CpG sites increases the accumulation of spontaneous cytosine-to-thymine transitions, thereby resulting in a loss of CpG sites.
Alu elements as promoters of CpG loss
Alu elements are the most abundant type of transposable element. Some studies have used Alu elements as a way to study the factors responsible for genome expansion. Unlike LINEs and ERVs, Alu elements are CpG-rich over a longer stretch of sequence. Alus can work as methylation centers: insertion into host DNA can trigger DNA methylation and provoke spreading into the flanking DNA. This spreading accounts for considerable CpG loss and genome expansion, an effect that accumulates over time, since older Alu elements show more CpG loss in neighboring DNA than younger ones.
See also
TLR9, detector of unmethylated CpG sites
DNA methylation age
References
Molecular genetics
DNA | CpG site | [
"Chemistry",
"Biology"
] | 4,592 | [
"Molecular genetics",
"Molecular biology"
] |
199,081 | https://en.wikipedia.org/wiki/Period%207%20element | A period 7 element is one of the chemical elements in the seventh row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behavior of the elements as their atomic number increases: a new row is begun when chemical behavior begins to repeat, meaning that elements with similar behavior fall into the same vertical columns. The seventh period contains 32 elements, tied for the most with period 6, beginning with francium and ending with oganesson, the heaviest element currently discovered. As a rule, period 7 elements fill their 7s shells first, then their 5f, 6d, and 7p shells in that order, but there are exceptions, such as uranium.
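The stated filling order follows from the Madelung rule (fill subshells in order of increasing n + l, breaking ties by smaller n), which can be generated mechanically. A small sketch, restricted to the subshells occupied through element 118:

```python
# Order atomic subshells by the Madelung rule: increasing n + l,
# ties broken by smaller n. Reproduces the period-7 filling order
# 7s -> 5f -> 6d -> 7p (exceptions such as uranium aside).

L_LETTERS = "spdf"

subshells = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

# keep only subshells occupied through element 118 (1s ... 7p)
order = [f"{n}{L_LETTERS[l]}" for n, l in subshells if n + l <= 8]
print(order[-4:])  # ['7s', '5f', '6d', '7p']
```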
Properties
All period 7 elements are radioactive. This period contains the actinides, which include plutonium, the last naturally occurring element; subsequent elements must be created artificially. While the first five of these synthetic elements (americium through einsteinium) are now available in macroscopic quantities, most are extremely rare, having only been prepared in microgram amounts or less. The later transactinide elements have only been identified in laboratories in batches of a few atoms at a time.
Though the rarity of many of these elements means that experimental results are scarce, their periodic and group trends are less well defined than in other periods. Whilst francium and radium do show typical properties of their respective groups, actinides display a much greater variety of behavior and oxidation states than the lanthanides. These peculiarities are due to a variety of factors, including a large degree of spin–orbit coupling and relativistic effects, ultimately caused by the very high electric charge of their massive nuclei. Periodicity mostly holds throughout the 6d series and is predicted also for moscovium and livermorium, but the other four 7p elements, nihonium, flerovium, tennessine, and oganesson, are predicted to have very different properties from those expected for their groups.
Elements
{| class="wikitable sortable"
! Atomic number
! Symbol
! Name
! Block
! Electron configuration
! Occurrence
|-
|| 87 || Fr || Francium || s-block || [Rn] 7s1 || From decay
|-
|| 88 || Ra || Radium || s-block || [Rn] 7s2 || From decay
|-
|| 89 || Ac || Actinium || f-block || [Rn] 6d1 7s2 (*) || From decay
|-
|| 90 || Th || Thorium || f-block || [Rn] 6d2 7s2 (*) || Primordial
|-
|| 91 || Pa || Protactinium || f-block || [Rn] 5f2 6d1 7s2 (*) || From decay
|-
|| 92 || U || Uranium || f-block || [Rn] 5f3 6d1 7s2 (*) || Primordial
|-
|| 93 || Np || Neptunium || f-block || [Rn] 5f4 6d1 7s2 (*) || From decay
|-
|| 94 || Pu || Plutonium || f-block || [Rn] 5f6 7s2 || From decay
|-
|| 95 || Am || Americium || f-block || [Rn] 5f7 7s2 || Synthetic
|-
|| 96 || Cm || Curium || f-block || [Rn] 5f7 6d1 7s2 (*) || Synthetic
|-
|| 97 || Bk || Berkelium || f-block || [Rn] 5f9 7s2 || Synthetic
|-
|| 98 || Cf || Californium || f-block || [Rn] 5f10 7s2 || Synthetic
|-
|| 99 || Es || Einsteinium || f-block || [Rn] 5f11 7s2 || Synthetic
|-
|| 100 || Fm || Fermium || f-block || [Rn] 5f12 7s2 || Synthetic
|-
|| 101 || Md || Mendelevium || f-block || [Rn] 5f13 7s2 || Synthetic
|-
|| 102 || No || Nobelium || f-block || [Rn] 5f14 7s2 || Synthetic
|-
|| 103 || Lr || Lawrencium || d-block || [Rn] 5f14 7s2 7p1 (*) || Synthetic
|-
|| 104 || Rf || Rutherfordium || d-block || [Rn] 5f14 6d2 7s2 || Synthetic
|-
|| 105 || Db || Dubnium || d-block || [Rn] 5f14 6d3 7s2 || Synthetic
|-
|| 106 || Sg || Seaborgium || d-block || [Rn] 5f14 6d4 7s2 || Synthetic
|-
|| 107 || Bh || Bohrium || d-block || [Rn] 5f14 6d5 7s2 || Synthetic
|-
|| 108 || Hs || Hassium || d-block || [Rn] 5f14 6d6 7s2 || Synthetic
|-
|| 109 || Mt || Meitnerium || d-block || [Rn] 5f14 6d7 7s2 (?) || Synthetic
|-
|| 110 || Ds || Darmstadtium || d-block || [Rn] 5f14 6d8 7s2 (?) || Synthetic
|-
|| 111 || Rg || Roentgenium || d-block || [Rn] 5f14 6d9 7s2 (?) || Synthetic
|-
|| 112 || Cn || Copernicium || d-block || [Rn] 5f14 6d10 7s2 (?) || Synthetic
|-
|| 113 || Nh || Nihonium || p-block || [Rn] 5f14 6d10 7s2 7p1 (?) || Synthetic
|-
|| 114 || Fl || Flerovium || p-block || [Rn] 5f14 6d10 7s2 7p2 (?) || Synthetic
|-
|| 115 || Mc || Moscovium || p-block || [Rn] 5f14 6d10 7s2 7p3 (?) || Synthetic
|-
|| 116 || Lv || Livermorium || p-block || [Rn] 5f14 6d10 7s2 7p4 (?) || Synthetic
|-
|| 117 || Ts || Tennessine || p-block || [Rn] 5f14 6d10 7s2 7p5 (?) || Synthetic
|-
|| 118 || Og || Oganesson || p-block || [Rn] 5f14 6d10 7s2 7p6 (?) || Synthetic
|}
(?) Prediction
(*) Exception to the Madelung rule.
In many periodic tables, the f-block is erroneously shifted one element to the right, so that lanthanum and actinium become d-block elements, and Ce–Lu and Th–Lr form the f-block tearing the d-block into two very uneven portions. This is a holdover from early erroneous measurements of electron configurations. Lev Landau and Evgeny Lifshitz pointed out in 1948 that lutetium is not an f-block element, and since then physical, chemical, and electronic evidence has overwhelmingly supported that the f-block contains the elements La–Yb and Ac–No, as shown here and as supported by International Union of Pure and Applied Chemistry reports dating from 1988 and 2021.
S-block
Francium and radium make up the s-block elements of the 7th period.
Francium (Fr, atomic number 87) is a highly radioactive metal that decays into astatine, radium, or radon. It is one of the two least electronegative elements; the other is caesium. As an alkali metal, it has one valence electron. Francium was discovered by Marguerite Perey in France (from which the element takes its name) in 1939. It was the last element discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium and thorium ores, where the isotope francium-223 continually forms and decays. As little as 20–30 g (one ounce) exists at any given time throughout Earth's crust; the other isotopes are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms.
Radium (Ra, atomic number 88) is an almost pure-white alkaline earth metal, but it readily oxidizes, reacting with nitrogen (rather than oxygen) on exposure to air, becoming black in color. All isotopes of radium are radioactive; the most stable is radium-226, which has a half-life of 1601 years and decays into radon. Due to such instability, radium luminesces, glowing a faint blue. Radium, in the form of radium chloride, was discovered by Marie and Pierre Curie in 1898. They extracted the radium compound from uraninite and published the discovery at the French Academy of Sciences five days later. Radium was isolated in its metallic state by Marie Curie and André-Louis Debierne through electrolysis of radium chloride in 1910. Since its discovery, it has given names such as radium A and radium C to several isotopes of other elements that are decay products of radium-226. In nature, radium is found in uranium ores in trace amounts as small as a seventh of a gram per ton of uraninite. Radium is not necessary for living things, and adverse health effects are likely when it is incorporated into biochemical processes due to its radioactivity and chemical reactivity.
Actinides
The actinide or actinoid (IUPAC nomenclature) series encompasses the 15 metallic chemical elements with atomic numbers from 89 to 103, actinium through lawrencium.
The actinide series is named after its first element actinium. All but one of the actinides are f-block elements, corresponding to the filling of the 5f electron shell; lawrencium, a d-block element, is also generally considered an actinide. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence.
Of the actinides, thorium and uranium occur naturally in substantial, primordial, quantities. Radioactive decay of uranium produces transient amounts of actinium, protactinium and plutonium, and atoms of neptunium and plutonium are occasionally produced from transmutation in uranium ores. The other actinides are purely synthetic elements, though the first six actinides after plutonium would have been produced at Oklo (and long since decayed away), and curium almost certainly previously existed in nature as an extinct radionuclide. Nuclear tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium.
All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These are used in nuclear reactors and nuclear weapons. Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors.
In presentations of the periodic table, the lanthanides and the actinides are customarily shown as two additional rows below the main body of the table, with placeholders or else a selected single element of each series (either lanthanum or lutetium, and either actinium or lawrencium, respectively) shown in a single cell of the main table, between barium and hafnium, and radium and rutherfordium, respectively. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table (32 columns) shows the lanthanide and actinide series in their proper columns, as parts of the table's sixth and seventh rows (periods).
Transactinides
Transactinide elements (also, transactinides, or super-heavy elements, or superheavies) are the chemical elements with atomic numbers greater than those of the actinides, the heaviest of which is lawrencium (103). All transactinides of period 7 have been discovered, up to oganesson (element 118).
Superheavies are also transuranic elements, that is, have atomic number greater than that of uranium (92). The further distinction of having an atomic number greater than the actinides is significant in several ways:
The transactinide elements all have electrons in the 6d subshell in their ground state (and thus are placed in the d-block).
Even the longest-lived known isotopes of many transactinides have extremely short half-lives, measured in seconds or smaller units.
The element naming controversy involved the first five or six transactinides. These elements thus used three-letter systematic names for many years after their discovery was confirmed. (Usually, the three-letter symbols are replaced with two-letter symbols relatively soon after a discovery has been confirmed.)
Transactinides are radioactive and have only been obtained synthetically in laboratories. None of these elements has ever been collected in a macroscopic sample. Transactinides are all named after scientists, or important locations involved in the synthesis of the elements.
Chemistry Nobel Prize winner Glenn T. Seaborg, who first proposed the actinide concept which led to the acceptance of the actinide series, also proposed the existence of a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153. The transactinide seaborgium is named in his honor.
IUPAC defines an element to exist if its lifetime is longer than 10⁻¹⁴ seconds, the time needed to form an electron cloud.
Notes
References
Periods (periodic table) | Period 7 element | [
"Chemistry"
] | 3,404 | [
"Periodic table",
"Periods (periodic table)"
] |
199,121 | https://en.wikipedia.org/wiki/Rydberg%20constant | In spectroscopy, the Rydberg constant, symbol $R_\infty$ for heavy atoms or $R_\text{H}$ for hydrogen, named after the Swedish physicist Johannes Rydberg, is a physical constant relating to the electromagnetic spectra of an atom. The constant first arose as an empirical fitting parameter in the Rydberg formula for the hydrogen spectral series, but Niels Bohr later showed that its value could be calculated from more fundamental constants according to his model of the atom.
Before the 2019 revision of the SI, $R_\infty$ and the electron spin g-factor were the most accurately measured physical constants.
The constant is expressed for either hydrogen as $R_\text{H}$, or at the limit of infinite nuclear mass as $R_\infty$. In either case, the constant is used to express the limiting value of the highest wavenumber (inverse wavelength) of any photon that can be emitted from a hydrogen atom, or, alternatively, the wavenumber of the lowest-energy photon capable of ionizing a hydrogen atom from its ground state. The hydrogen spectral series can be expressed simply in terms of the Rydberg constant for hydrogen and the Rydberg formula.
In atomic physics, the Rydberg unit of energy, symbol Ry, corresponds to the energy of the photon whose wavenumber is the Rydberg constant, i.e. the ionization energy of the hydrogen atom in a simplified Bohr model.
Value
Rydberg constant
The CODATA value is
$R_\infty = \frac{m_\text{e} e^4}{8 \varepsilon_0^2 h^3 c} = 1.097\,373\,156\,8 \times 10^7\,\text{m}^{-1},$
where
$m_\text{e}$ is the rest mass of the electron (i.e. the electron mass),
$e$ is the elementary charge,
$\varepsilon_0$ is the permittivity of free space,
$h$ is the Planck constant, and
$c$ is the speed of light in vacuum.
The symbol $\infty$ means that the nucleus is assumed to be infinitely heavy. An improvement of the value can be made using the reduced mass of the atom:
$\mu = \frac{m_\text{e} M}{m_\text{e} + M},$
with $M$ the mass of the nucleus. The corrected Rydberg constant is:
$R_M = \frac{\mu}{m_\text{e}} R_\infty = \frac{R_\infty}{1 + m_\text{e}/M},$
which for hydrogen, where $M$ is the mass $m_\text{p}$ of the proton, becomes:
$R_\text{H} = \frac{R_\infty}{1 + m_\text{e}/m_\text{p}} \approx 1.096\,775\,83 \times 10^7\,\text{m}^{-1}.$
Since the Rydberg constant is related to the spectrum lines of the atom, this correction leads to an isotopic shift between different isotopes. For example, deuterium, an isotope of hydrogen with a nucleus formed by a proton and a neutron (the deuteron), was discovered thanks to its slightly shifted spectrum.
Rydberg unit of energy
The Rydberg unit of energy is
$1\,\text{Ry} \equiv h c R_\infty = 2.179\,872 \times 10^{-18}\,\text{J} \approx 13.605\,693\,\text{eV}.$
Rydberg frequency
$c R_\infty = 3.289\,841\,960 \times 10^{15}\,\text{Hz}.$
Rydberg wavelength
$\frac{1}{R_\infty} = 91.126\,7\,\text{nm}.$
The corresponding angular wavelength is
$\frac{1}{2\pi R_\infty} = 14.503\,3\,\text{nm}.$
Bohr model
The Bohr model explains the atomic spectrum of hydrogen (see Hydrogen spectral series) as well as various other atoms and ions. It is not perfectly accurate, but is a remarkably good approximation in many cases, and historically played an important role in the development of quantum mechanics. The Bohr model posits that electrons revolve around the atomic nucleus in a manner analogous to planets revolving around the Sun.
In the simplest version of the Bohr model, the mass of the atomic nucleus is considered to be infinite compared to the mass of the electron, so that the center of mass of the system, the barycenter, lies at the center of the nucleus. This infinite mass approximation is what is alluded to with the $\infty$ subscript. The Bohr model then predicts that the wavelengths of hydrogen atomic transitions are (see Rydberg formula):
$\frac{1}{\lambda} = R_\infty \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right),$
where $n_1$ and $n_2$ are any two different positive integers (1, 2, 3, ...), and $\lambda$ is the wavelength (in vacuum) of the emitted or absorbed light. Including the finite nuclear mass gives
$\frac{1}{\lambda} = R_M \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right),$
where $R_M = R_\infty / (1 + m_\text{e}/M)$ and $M$ is the total mass of the nucleus. This formula comes from substituting the reduced mass of the electron.
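A short numerical illustration of the formula, including the reduced-mass correction responsible for the hydrogen–deuterium isotope shift mentioned above (constants are rounded CODATA values; the nuclear masses in atomic mass units are approximate):

```python
# Hydrogen spectral lines from the Rydberg formula,
# 1/lambda = R_M * (1/n1^2 - 1/n2^2), with the reduced-mass
# correction R_M = R_inf / (1 + m_e / M).

R_inf = 10973731.568  # Rydberg constant, m^-1
m_e_u = 5.485799e-4   # electron mass, atomic mass units

def wavelength_nm(n1: int, n2: int, nucleus_mass_u: float) -> float:
    R_M = R_inf / (1 + m_e_u / nucleus_mass_u)
    inv_lambda = R_M * (1 / n1**2 - 1 / n2**2)
    return 1e9 / inv_lambda

# Lyman-alpha (n = 2 -> 1) for hydrogen (proton, ~1.00728 u) and
# deuterium (deuteron, ~2.01355 u): the isotope shift that revealed
# deuterium's existence.
print(wavelength_nm(1, 2, 1.00728))  # ≈ 121.57 nm
print(wavelength_nm(1, 2, 2.01355))  # ≈ 121.53 nm
```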
Precision measurement
The Rydberg constant was one of the most precisely determined physical constants, with a relative standard uncertainty of about $2 \times 10^{-12}$. This precision constrains the values of the other physical constants that define it.
Since the Bohr model is not perfectly accurate, due to fine structure, hyperfine splitting, and other such effects, the Rydberg constant cannot be directly measured at very high accuracy from the atomic transition frequencies of hydrogen alone. Instead, the Rydberg constant is inferred from measurements of atomic transition frequencies in three different atoms (hydrogen, deuterium, and antiprotonic helium). Detailed theoretical calculations in the framework of quantum electrodynamics are used to account for the effects of finite nuclear mass, fine structure, hyperfine splitting, and so on. Finally, the value of is determined from the best fit of the measurements to the theory.
Alternative expressions
The Rydberg constant can also be expressed as in the following equations.
$R_\infty = \frac{\alpha^2 m_\text{e} c}{2 h} = \frac{\alpha^2}{2 \lambda_\text{e}} = \frac{\alpha^2 \nu_\text{e}}{2 c} = \frac{\alpha^2 \omega_\text{e}}{4 \pi c} = \frac{\alpha}{4 \pi a_0} = \frac{\alpha^3}{4 \pi r_\text{e}}$
and in energy units
$\text{Ry} = h c R_\infty = \tfrac{1}{2} \alpha^2 m_\text{e} c^2 = \frac{\hbar^2}{2 m_\text{e} a_0^2} = \frac{e^2}{8 \pi \varepsilon_0 a_0},$
where
$m_\text{e}$ is the electron rest mass,
$e$ is the electric charge of the electron,
$h$ is the Planck constant,
$\hbar = h / 2\pi$ is the reduced Planck constant,
$c$ is the speed of light in vacuum,
$\varepsilon_0$ is the electric constant (vacuum permittivity),
$\alpha$ is the fine-structure constant,
$\lambda_\text{e} = h / m_\text{e} c$ is the Compton wavelength of the electron,
$\nu_\text{e} = c / \lambda_\text{e}$ is the Compton frequency of the electron,
$\omega_\text{e} = 2 \pi \nu_\text{e}$ is the Compton angular frequency of the electron,
$a_0$ is the Bohr radius,
$r_\text{e}$ is the classical electron radius.
The last expression in the first equation shows that the wavelength of light needed to ionize a hydrogen atom is 4π/α times the Bohr radius of the atom.
The second equation is relevant because its value is the coefficient for the energy of the atomic orbitals of a hydrogen atom: $E_n = -\frac{h c R_\infty}{n^2} = -\frac{1\,\text{Ry}}{n^2}$.
See also
Lyman limit
References
Emission spectroscopy
Physical constants
Units of energy | Rydberg constant | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,055 | [
"Spectrum (physical sciences)",
"Physical quantities",
"Quantity",
"Emission spectroscopy",
"Units of energy",
"Physical constants",
"Spectroscopy",
"Units of measurement"
] |
199,304 | https://en.wikipedia.org/wiki/Amp%C3%A8re%27s%20circuital%20law | In classical electromagnetism, Ampère's circuital law (not to be confused with Ampère's force law) relates the circulation of a magnetic field around a closed loop to the electric current passing through the loop.
James Clerk Maxwell derived it using hydrodynamics in his 1861 published paper "On Physical Lines of Force". In 1865 he generalized the equation to apply to time-varying currents by adding the displacement current term, resulting in the modern form of the law, sometimes called the Ampère–Maxwell law, which is one of Maxwell's equations that form the basis of classical electromagnetism.
Ampère's original circuital law
In 1820 Danish physicist Hans Christian Ørsted discovered that an electric current creates a magnetic field around it, when he noticed that the needle of a compass next to a wire carrying current turned so that the needle was perpendicular to the wire. He investigated and discovered the rules which govern the field around a straight current-carrying wire:
The magnetic field lines encircle the current-carrying wire.
The magnetic field lines lie in a plane perpendicular to the wire.
If the direction of the current is reversed, the direction of the magnetic field reverses.
The strength of the field is directly proportional to the magnitude of the current.
The strength of the field at any point is inversely proportional to the distance of the point from the wire.
This sparked a great deal of research into the relation between electricity and magnetism. André-Marie Ampère investigated the magnetic force between two current-carrying wires, discovering Ampère's force law. In the 1850s Scottish mathematical physicist James Clerk Maxwell generalized these results and others into a single mathematical law. The original form of Maxwell's circuital law, which he derived as early as 1855 in his paper "On Faraday's Lines of Force" based on an analogy to hydrodynamics, relates magnetic fields to electric currents that produce them. It determines the magnetic field associated with a given current, or the current associated with a given magnetic field.
The original circuital law only applies to a magnetostatic situation, to continuous steady currents flowing in a closed circuit. For systems with electric fields that change over time, the original law (as given in this section) must be modified to include a term known as Maxwell's correction (see below).
Equivalent forms
The original circuital law can be written in several different forms, which are all ultimately equivalent:
An "integral form" and a "differential form". The forms are exactly equivalent, and related by the Kelvin–Stokes theorem (see the "proof" section below).
Forms using SI units, and those using cgs units. Other units are possible, but rare. This section will use SI units, with cgs units discussed later.
Forms using either $\mathbf{B}$ or $\mathbf{H}$ magnetic fields. These two forms use the total current density and free current density, respectively. The $\mathbf{B}$ and $\mathbf{H}$ fields are related by the constitutive equation $\mathbf{B} = \mu_0 \mathbf{H}$ in non-magnetic materials, where $\mu_0$ is the magnetic constant.
Explanation
The integral form of the original circuital law is a line integral of the magnetic field around some closed curve $C$ (arbitrary, but it must be closed). The curve $C$ in turn bounds both a surface $S$ which the electric current passes through (again arbitrary but not closed—since no three-dimensional volume is enclosed by $S$), and encloses the current. The mathematical statement of the law is a relation between the circulation of the magnetic field around some path (line integral) due to the current which passes through that enclosed path (surface integral).
In terms of total current $\mathbf{J}$ (which is the sum of both free current and bound current), the line integral of the magnetic $\mathbf{B}$-field (in teslas, T) around closed curve $C$ is proportional to the total current $I_\text{enc}$ passing through a surface $S$ (enclosed by $C$):
$\oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \mu_0 \iint_S \mathbf{J} \cdot \mathrm{d}\mathbf{S} = \mu_0 I_\text{enc}.$
In terms of free current, the line integral of the magnetic $\mathbf{H}$-field (in amperes per metre, A·m−1) around closed curve $C$ equals the free current $I_\text{f,enc}$ through a surface $S$:
$\oint_C \mathbf{H} \cdot \mathrm{d}\boldsymbol{l} = \iint_S \mathbf{J}_\text{f} \cdot \mathrm{d}\mathbf{S} = I_\text{f,enc},$
where
$\mathbf{J}$ is the total current density (in amperes per square metre, A·m−2),
$\mathbf{J}_\text{f}$ is the free current density only,
$\oint_C$ is the closed line integral around the closed curve $C$,
$\iint_S$ denotes a surface integral over the surface $S$ bounded by the curve $C$,
$\cdot$ is the vector dot product,
$\mathrm{d}\boldsymbol{l}$ is an infinitesimal element (a differential) of the curve $C$ (i.e. a vector with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve $C$),
$\mathrm{d}\mathbf{S}$ is the vector area of an infinitesimal element of surface $S$ (that is, a vector with magnitude equal to the area of the infinitesimal surface element, and direction normal to surface $S$; the direction of the normal must correspond with the orientation of $C$ by the right hand rule), see below for further explanation of the curve $C$ and surface $S$,
$\nabla \times$ is the curl operator (appearing in the equivalent differential forms $\nabla \times \mathbf{B} = \mu_0 \mathbf{J}$ and $\nabla \times \mathbf{H} = \mathbf{J}_\text{f}$).
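As a sanity check on the integral form, the line integral of the field around a loop encircling a straight wire can be computed numerically from the Biot–Savart law and compared against $\mu_0 I$. A sketch with illustrative current, loop radius, and wire length:

```python
import numpy as np

# Numerical check of Ampère's law, ∮ B·dl = mu_0 * I_enc, for a straight
# wire along the z-axis. B on the loop is computed from the Biot-Savart
# law for a long but finite wire; all values are illustrative.

mu_0 = 4e-7 * np.pi  # magnetic constant, T·m/A
I = 3.0              # wire current, A
r = 0.05             # radius of the circular integration loop, m

# Discretize the wire from z = -50 m to +50 m ("infinite" compared to r).
z = np.linspace(-50.0, 50.0, 20_001)
dz = z[1] - z[0]
wire = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
dl = np.array([0.0, 0.0, dz])

def B_at(point):
    """Biot-Savart sum: dB = mu_0 I/(4 pi) * dl x r_vec / |r_vec|^3."""
    r_vec = point - wire
    dist = np.linalg.norm(r_vec, axis=1)[:, None]
    return (mu_0 * I / (4 * np.pi) * np.cross(dl, r_vec) / dist**3).sum(axis=0)

# Line integral of B around the loop in the z = 0 plane.
theta = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)
total = 0.0
for t in theta:
    point = np.array([r * np.cos(t), r * np.sin(t), 0.0])
    tangent = np.array([-np.sin(t), np.cos(t), 0.0])  # unit vector along dl
    total += B_at(point) @ tangent * r * (2 * np.pi / len(theta))

print(total, mu_0 * I)  # both ≈ 3.77e-06 T·m
```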
Ambiguities and sign conventions
There are a number of ambiguities in the above definitions that require clarification and a choice of convention.
First, three of these terms are associated with sign ambiguities: the line integral $\oint_C$ could go around the loop in either direction (clockwise or counterclockwise); the vector area $\mathrm{d}\mathbf{S}$ could point in either of the two directions normal to the surface; and $I_\text{enc}$ is the net current passing through the surface $S$, meaning the current passing through in one direction, minus the current in the other direction—but either direction could be chosen as positive. These ambiguities are resolved by the right-hand rule: With the palm of the right-hand toward the area of integration, and the index-finger pointing along the direction of line-integration, the outstretched thumb points in the direction that must be chosen for the vector area $\mathrm{d}\mathbf{S}$. Also the current passing in the same direction as $\mathrm{d}\mathbf{S}$ must be counted as positive. The right hand grip rule can also be used to determine the signs.
Second, there are infinitely many possible surfaces $S$ that have the curve $C$ as their border. (Imagine a soap film on a wire loop, which can be deformed by blowing on the film.) Which of those surfaces is to be chosen? If the loop does not lie in a single plane, for example, there is no one obvious choice. The answer is that it does not matter: in the magnetostatic case, the current density is solenoidal (see next section), so the divergence theorem and continuity equation imply that the flux through any surface with boundary $C$, with the same sign convention, is the same. In practice, one usually chooses the most convenient surface (with the given boundary) to integrate over.
Free current versus bound current
The electric current that arises in the simplest textbook situations would be classified as "free current"—for example, the current that passes through a wire or battery. In contrast, "bound current" arises in the context of bulk materials that can be magnetized and/or polarized. (All materials can to some extent.)
When a material is magnetized (for example, by placing it in an external magnetic field), the electrons remain bound to their respective atoms, but behave as if they were orbiting the nucleus in a particular direction, creating a microscopic current. When the currents from all these atoms are put together, they create the same effect as a macroscopic current, circulating perpetually around the magnetized object. This magnetization current is one contribution to "bound current".
The other source of bound current is bound charge. When an electric field is applied, the positive and negative bound charges can separate over atomic distances in polarizable materials, and when the bound charges move, the polarization changes, creating another contribution to the "bound current", the polarization current $\mathbf{J}_\text{P}$.
The total current density $\mathbf{J}$ due to free and bound charges is then:
$\mathbf{J} = \mathbf{J}_\text{f} + \mathbf{J}_\text{M} + \mathbf{J}_\text{P},$
with $\mathbf{J}_\text{f}$ the "free" or "conduction" current density.
All current is fundamentally the same, microscopically. Nevertheless, there are often practical reasons for wanting to treat bound current differently from free current. For example, the bound current usually originates over atomic dimensions, and one may wish to take advantage of a simpler theory intended for larger dimensions. The result is that the more microscopic Ampère's circuital law, expressed in terms of $\mathbf{B}$ and the microscopic current (which includes free, magnetization and polarization currents), is sometimes put into the equivalent form below in terms of $\mathbf{H}$ and the free current only. For a detailed definition of free current and bound current, and the proof that the two formulations are equivalent, see the "proof" section below.
Shortcomings of the original formulation of the circuital law
There are two important issues regarding the circuital law that require closer scrutiny. First, there is an issue regarding the continuity equation for electrical charge. In vector calculus, the identity for the divergence of a curl states that the divergence of the curl of a vector field must always be zero. Hence
$\nabla \cdot (\nabla \times \mathbf{B}) = 0,$
and so the original Ampère's circuital law implies that
$\nabla \cdot \mathbf{J} = 0,$
i.e. that the current density is solenoidal.
But in general, reality follows the continuity equation for electric charge:
$\nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t},$
which is nonzero for a time-varying charge density. An example occurs in a capacitor circuit where time-varying charge densities exist on the plates.
Second, there is an issue regarding the propagation of electromagnetic waves. For example, in free space, where
$\mathbf{J} = \mathbf{0},$
the circuital law implies that
$\nabla \times \mathbf{B} = \mathbf{0},$
i.e. that the magnetic field is irrotational, but to maintain consistency with the continuity equation for electric charge, we must have
$\nabla \times \mathbf{B} = \frac{1}{c^2} \frac{\partial \mathbf{E}}{\partial t}.$
To treat these situations, the contribution of displacement current must be added to the current term in the circuital law.
James Clerk Maxwell conceived of displacement current as a polarization current in the dielectric vortex sea, which he used to model the magnetic field hydrodynamically and mechanically. He added this displacement current to Ampère's circuital law at equation 112 in his 1861 paper "On Physical Lines of Force".
Displacement current
In free space, the displacement current is related to the time rate of change of electric field:
$\mathbf{J}_\text{D} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.$
In a dielectric the above contribution to displacement current is present too, but a major contribution to the displacement current is related to the polarization of the individual molecules of the dielectric material. Even though charges cannot flow freely in a dielectric, the charges in molecules can move a little under the influence of an electric field. The positive and negative charges in molecules separate under the applied field, causing an increase in the state of polarization, expressed as the polarization density $\mathbf{P}$. A changing state of polarization is equivalent to a current.
Both contributions to the displacement current are combined by defining the displacement current as:
$\mathbf{J}_\text{D} = \frac{\partial \mathbf{D}}{\partial t},$
where the electric displacement field is defined as:
$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} = \varepsilon_0 \varepsilon_\text{r} \mathbf{E},$
where $\varepsilon_0$ is the electric constant, $\varepsilon_\text{r}$ the relative static permittivity, and $\mathbf{P}$ is the polarization density. Substituting this form for $\mathbf{D}$ in the expression for displacement current, it has two components:
$\mathbf{J}_\text{D} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \frac{\partial \mathbf{P}}{\partial t}.$
The first term on the right hand side is present everywhere, even in a vacuum. It doesn't involve any actual movement of charge, but it nevertheless has an associated magnetic field, as if it were an actual current. Some authors apply the name displacement current to only this contribution.
The second term on the right hand side is the displacement current as originally conceived by Maxwell, associated with the polarization of the individual molecules of the dielectric material.
Maxwell's original explanation for displacement current focused upon the situation that occurs in dielectric media. In the modern post-aether era, the concept has been extended to apply to situations with no material media present, for example, to the vacuum between the plates of a charging vacuum capacitor. The displacement current is justified today because it serves several requirements of an electromagnetic theory: correct prediction of magnetic fields in regions where no free current flows; prediction of wave propagation of electromagnetic fields; and conservation of electric charge in cases where charge density is time-varying. For greater discussion see Displacement current.
Extending the original law: the Ampère–Maxwell equation
Next, the circuital equation is extended by including the polarization current, thereby remedying the limited applicability of the original circuital law.
Treating free charges separately from bound charges, the equation including Maxwell's correction in terms of the $\mathbf{H}$-field is (the $\mathbf{H}$-field is used because it includes the magnetization currents, so $\mathbf{J}_\text{M}$ does not appear explicitly, see $\mathbf{H}$-field and also Note):
$\oint_C \mathbf{H} \cdot \mathrm{d}\boldsymbol{l} = \iint_S \left( \mathbf{J}_\text{f} + \frac{\partial \mathbf{D}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S}$
(integral form), where $\mathbf{H}$ is the magnetic field (also called "auxiliary magnetic field", "magnetic field intensity", or just "magnetic field"), $\mathbf{D}$ is the electric displacement field, and $\mathbf{J}_\text{f}$ is the enclosed conduction current or free current density. In differential form,
$\nabla \times \mathbf{H} = \mathbf{J}_\text{f} + \frac{\partial \mathbf{D}}{\partial t}.$
On the other hand, treating all charges on the same footing (disregarding whether they are bound or free charges), the generalized Ampère's equation, also called the Maxwell–Ampère equation, is in integral form (see the "proof" section below):
$\oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \mu_0 \iint_S \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S}.$
In differential form,
$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right).$
In both forms $\mathbf{J}$ includes magnetization current density as well as conduction and polarization current densities. That is, the current density on the right side of the Ampère–Maxwell equation is:
$\mathbf{J} + \mathbf{J}_\text{D},$
where current density $\mathbf{J}_\text{D}$ is the displacement current, and $\mathbf{J}$ is the current density contribution actually due to movement of charges, both free and bound. With the displacement current included, the charge continuity issue with Ampère's original formulation is no longer a problem. Because of the term in $\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$, wave propagation in free space now is possible.
With the addition of the displacement current, Maxwell was able to hypothesize (correctly) that light was a form of electromagnetic wave. See electromagnetic wave equation for a discussion of this important discovery.
Proof of equivalence
Proof that the formulations of the circuital law in terms of free current are equivalent to the formulations involving total current
In this proof, we will show that the equation
$\nabla \times \mathbf{H} = \mathbf{J}_\text{f} + \frac{\partial \mathbf{D}}{\partial t}$
is equivalent to the equation
$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right).$
Note that we are only dealing with the differential forms, not the integral forms, but that is sufficient since the differential and integral forms are equivalent in each case, by the Kelvin–Stokes theorem.
We introduce the polarization density $\mathbf{P}$, which has the following relation to $\mathbf{E}$ and $\mathbf{D}$:
$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}.$
Next, we introduce the magnetization density $\mathbf{M}$, which has the following relation to $\mathbf{B}$ and $\mathbf{H}$:
$\mathbf{B} = \mu_0 (\mathbf{H} + \mathbf{M}),$
and the following relation to the bound current:
$\mathbf{J}_\text{bound} = \nabla \times \mathbf{M} + \frac{\partial \mathbf{P}}{\partial t} = \mathbf{J}_\text{M} + \mathbf{J}_\text{P},$
where
$\mathbf{J}_\text{M} = \nabla \times \mathbf{M}$ is called the magnetization current density, and
$\mathbf{J}_\text{P} = \frac{\partial \mathbf{P}}{\partial t}$ is the polarization current density. Taking the equation for $\mathbf{B}$:
$\nabla \times \mathbf{B} = \mu_0 \left( \nabla \times \mathbf{H} + \nabla \times \mathbf{M} \right) = \mu_0 \left( \mathbf{J}_\text{f} + \frac{\partial \mathbf{D}}{\partial t} + \nabla \times \mathbf{M} \right) = \mu_0 \left( \mathbf{J}_\text{f} + \nabla \times \mathbf{M} + \frac{\partial \mathbf{P}}{\partial t} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right).$
Consequently, referring to the definition of the bound current:
$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J}_\text{f} + \mathbf{J}_\text{bound} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right),$
as was to be shown.
Ampère's circuital law in cgs units
In cgs units, the integral form of the equation, including Maxwell's correction, reads
$\oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \frac{1}{c} \iint_S \left( 4\pi \mathbf{J} + \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d}\mathbf{S},$
where $c$ is the speed of light.
The differential form of the equation (again, including Maxwell's correction) is
$\nabla \times \mathbf{B} = \frac{1}{c} \left( 4\pi \mathbf{J} + \frac{\partial \mathbf{E}}{\partial t} \right).$
See also
Biot–Savart law
Displacement current
Capacitance
Ampèrian magnetic dipole model
Electromagnetic wave equation
Maxwell's equations
Faraday's law of induction
Polarization density
Electric current
Vector calculus
Stokes' theorem
Notes
Further reading
External links
MISN-0-138 Ampere's Law (PDF file) by Kirby Morgan for Project PHYSNET.
MISN-0-145 The Ampere–Maxwell Equation; Displacement Current (PDF file) by J. S. Kovacs for Project PHYSNET.
A Dynamical Theory of the Electromagnetic Field Maxwell's paper of 1864
Ampere's law
Eponymous laws of physics
Maxwell's equations
Electromagnetism | Ampère's circuital law | [
"Physics"
] | 3,113 | [
"Electromagnetism",
"Physical phenomena",
"Equations of physics",
"Fundamental interactions",
"Maxwell's equations"
] |
199,410 | https://en.wikipedia.org/wiki/Thin-film%20transistor | A thin-film transistor (TFT) is a special type of field-effect transistor (FET) where the transistor is made by thin film deposition. TFTs are grown on a supporting (but non-conducting) substrate, such as glass. This differs from the conventional bulk metal-oxide-semiconductor field-effect transistor (MOSFET), where the semiconductor material typically is the substrate, such as a silicon wafer. The traditional application of TFTs is in TFT liquid-crystal displays.
Design and manufacture
TFTs can be fabricated with a wide variety of semiconductor materials. Because silicon is naturally abundant and well understood, amorphous and polycrystalline silicon were (and still are) used as the semiconductor layer. However, because of the low mobility of amorphous silicon and the large device-to-device variations found in polycrystalline silicon, other materials have been studied for use in TFTs. These include cadmium selenide, metal oxides such as indium gallium zinc oxide (IGZO) or zinc oxide, organic semiconductors, carbon nanotubes, and metal halide perovskites.

Because TFTs are grown on inert substrates, rather than on wafers, the semiconductor must be deposited in a dedicated process. A variety of techniques are used to deposit semiconductors in TFTs. These include chemical vapor deposition (CVD), atomic layer deposition (ALD), and sputtering. The semiconductor can also be deposited from solution, via techniques such as printing or spray coating. Solution-based techniques are hoped to lead to low-cost, mechanically flexible electronics. Because typical substrates will deform or melt at high temperatures, the deposition process must be carried out at relatively low temperatures compared to traditional electronic material processing.
Some wide band gap semiconductors, most notably metal oxides, are optically transparent. By also employing transparent substrates, such as glass, and transparent electrodes, such as indium tin oxide (ITO), some TFT devices can be designed to be completely optically transparent. Such transparent TFTs (TTFTs) could be used to enable head-up displays (such as on a car windshield). The first solution-processed TTFTs, based on zinc oxide, were reported in 2003 by researchers at Oregon State University. The Portuguese laboratory CENIMAT at the Universidade Nova de Lisboa has produced the world's first completely transparent TFT at room temperature. CENIMAT also developed the first paper transistor, which may lead to applications such as magazines and journal pages with moving images.
Many AMOLED displays use LTPO (low-temperature polycrystalline silicon and oxide) TFTs. These transistors offer stability at low refresh rates and support variable refresh rates, allowing power-saving displays that do not show visual artifacts. Large OLED displays usually use amorphous oxide semiconductor (AOS) TFTs instead, also called oxide TFTs; these are usually based on IGZO.
Applications
The best known application of thin-film transistors is in TFT LCDs, an implementation of liquid-crystal display technology. Transistors are embedded within the panel itself, reducing crosstalk between pixels and improving image stability.
Many color LCD TVs and monitors use this technology. TFT panels are frequently used in digital radiography applications in general radiography. A TFT is used in both direct and indirect capture as a base for the image receptor in medical radiography.
All modern high-resolution and high-quality electronic visual display devices use TFT-based active matrix displays.
AMOLED displays also contain a TFT layer for active-matrix pixel addressing of individual organic light-emitting diodes.
The most beneficial aspect of TFT technology is its use of a separate transistor for each pixel on the display. Because each transistor is small, the amount of charge needed to control it is also small. This allows for very fast re-drawing of the display.
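This row-at-a-time addressing can be made concrete with a small model. The following is a minimal sketch, not actual display-driver code: the 4×4 array size and the `write_frame` helper are invented for illustration, but the scheme — switch on one row's transistors, set the column voltages, then let the stored charge hold each pixel — is the mechanism described above.

```python
import numpy as np

def write_frame(target, stored):
    """Write `target` into the pixel array one row at a time.

    `stored` models the charge held on each pixel's storage capacitor.
    Asserting a gate (row) line switches on every TFT in that row, so the
    column drivers update just that row; all other rows keep their charge.
    """
    rows, _ = target.shape
    for r in range(rows):
        # Gate line r asserted: this row's TFTs conduct and the pixels
        # take on the new data-line voltages.
        stored[r, :] = target[r, :]
        # Gate line r released: the TFTs switch off and the small stored
        # charge holds each pixel until the next refresh.
    return stored

pixels = np.zeros((4, 4))          # blank 4x4 panel
frame = np.random.rand(4, 4)       # the image to display
write_frame(frame, pixels)
assert np.allclose(pixels, frame)  # the whole frame was written row by row
```

Because only one row is driven at a time and each pixel holds its own small charge, the display can be redrawn very quickly, as noted above.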
Structure of a TFT-display matrix
This picture does not include the actual light-source (usually cold-cathode fluorescent lamps or white LEDs), just the TFT-display matrix.
1 – Glass plates
2/3 – Horizontal and vertical polarisers
4 – RGB colour mask
5/6 – Horizontal and vertical command lines
7 – Rubbed polymer layer
8 – Spacers
9 – Thin-film transistors
10 – Front electrode
11 – Rear electrodes
History
In February 1957, John Wallmark of RCA filed a patent for a thin-film MOSFET in which germanium monoxide was used as a gate dielectric. Paul K. Weimer, also of RCA, implemented Wallmark's ideas and developed the thin-film transistor (TFT) in 1962, a type of MOSFET distinct from the standard bulk MOSFET. It was made with thin films of cadmium selenide and cadmium sulfide. In 1966, T.P. Brody and H.E. Kunig at Westinghouse Electric fabricated indium arsenide (InAs) MOS TFTs in both depletion and enhancement modes.
The idea of a TFT-based liquid-crystal display (LCD) was conceived by Bernard J. Lechner of RCA Laboratories in 1968. Lechner, F.J. Marlowe, E.O. Nester and J. Tults demonstrated the concept in 1968 with an 18x2 matrix dynamic scattering LCD that used standard discrete MOSFETs, as TFT performance was not adequate at the time. In 1973, T. Peter Brody, J. A. Asars and G. D. Dixon at Westinghouse Research Laboratories developed a CdSe (cadmium selenide) TFT, which they used to demonstrate the first CdSe thin-film-transistor liquid-crystal display (TFT LCD). The Westinghouse group also reported on operational TFT electroluminescence (EL) in 1973, using CdSe. Brody and Fang-Chen Luo demonstrated the first flat active-matrix liquid-crystal display (AM LCD) using CdSe in 1974, and then Brody coined the term "active matrix" in 1975. However, mass production of this device was never realized, due to complications in controlling the compound semiconductor thin film material properties, and device reliability over large areas.
A breakthrough in TFT research came with the development of the amorphous silicon (a-Si) TFT by P.G. le Comber, W.E. Spear and A. Ghaith at the University of Dundee in 1979. They reported the first functional TFT made from hydrogenated a-Si with a silicon nitride gate dielectric layer. The a-Si TFT was soon recognized as being more suitable for a large-area AM LCD. This led to commercial research and development (R&D) of AM LCD panels based on a-Si TFTs in Japan.
By 1982, pocket TVs based on AM LCD technology were developed in Japan. In 1982, Fujitsu's S. Kawai fabricated an a-Si dot-matrix display, and Canon's Y. Okubo fabricated a-Si twisted nematic (TN) and guest-host LCD panels. In 1983, Toshiba's K. Suzuki produced a-Si TFT arrays compatible with CMOS (complementary metal–oxide–semiconductor) integrated circuits (ICs), Canon's M. Sugata fabricated an a-Si color LCD panel, and a joint Sanyo and Sanritsu team including Mitsuhiro Yamasaki, S. Suhibuchi and Y. Sasaki fabricated a 3-inch a-Si color LCD TV.
The first commercial TFT-based AM LCD product was the 2.1-inch Epson ET-10 (Epson Elf), the first color LCD pocket TV, released in 1984. In 1986, a Hitachi research team led by Akio Mimura demonstrated a low-temperature polycrystalline silicon (LTPS) process for fabricating n-channel TFTs on a silicon-on-insulator (SOI), at a relatively low temperature of 200°C. A Hosiden research team led by T. Sunata in 1986 used a-Si TFTs to develop a 7-inch color AM LCD panel, and a 9-inch AM LCD panel. In the late 1980s, Hosiden supplied monochrome TFT LCD panels to Apple Computer. In 1988, a Sharp research team led by engineer T. Nagayasu used hydrogenated a-Si TFTs to demonstrate a 14-inch full-color LCD display, which convinced the electronics industry that LCD would eventually replace cathode-ray tube (CRT) as the standard television display technology. The same year, Sharp launched TFT LCD panels for notebook PCs. In 1992, Toshiba and IBM Japan introduced a 12.1-inch color SVGA panel for the first commercial color laptop by IBM.
TFTs can also be made out of indium gallium zinc oxide (IGZO). TFT-LCDs with IGZO transistors first showed up in 2012, and were first manufactured by Sharp Corporation. IGZO allows for higher refresh rates and lower power consumption. In 2021, the first flexible 32-bit microprocessor was manufactured using IGZO TFT technology on a polyimide substrate.
See also
Metal oxide thin film transistor
Organic field effect transistor
References
MOSFETs
Transistor types
Thin films
Semiconductors | Thin-film transistor | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,999 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Thin films",
"Materials science",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Nanotechnology",
"Planes (geometry)",
"Solid state engineering",
"Matter"
] |
199,940 | https://en.wikipedia.org/wiki/Massive%20compact%20halo%20object | A MAssive Compact Halo Object (MACHO) is a kind of astronomical body that might explain the apparent presence of dark matter in galactic halos. A MACHO is a body that emits little or no radiation and drifts through interstellar space unassociated with any planetary system (and may or may not be composed of normal baryonic matter). Since MACHOs are not luminous, they are hard to detect. MACHO candidates include black holes or neutron stars as well as brown dwarfs and unassociated planets. White dwarfs and very faint red dwarfs have also been proposed as candidate MACHOs. The term was coined by astrophysicist Kim Griest.
Detection
A MACHO may be detected when it passes in front of or nearly in front of a star and the MACHO's gravity bends the light, causing the star to appear brighter in an example of gravitational lensing known as gravitational microlensing. Several groups have searched for MACHOs by searching for the microlensing amplification of light. These groups have ruled out dark matter being explained by MACHOs with masses in the range of about 1×10⁻⁸ solar masses (0.3 lunar masses) to 100 solar masses. One group, the MACHO collaboration, claimed in 2000 to have found enough microlensing to predict the existence of many MACHOs with a mean mass of about 0.5 solar masses, enough to make up perhaps 20% of the dark matter in the galaxy.
This suggests that MACHOs could be white dwarfs or red dwarfs, which have similar masses. However, red and white dwarfs are not completely dark; they do emit some light, and so can be searched for with the Hubble Space Telescope and with proper motion surveys. These searches have ruled out the possibility that these objects make up a significant fraction of dark matter in our galaxy. Another group, the EROS2 collaboration, did not confirm the signal claimed by the MACHO group: they did not find enough microlensing, with a sensitivity higher by a factor of 2. Observations using the Hubble Space Telescope's NICMOS instrument showed that less than one percent of the halo mass is composed of red dwarfs. This corresponds to a negligible fraction of the dark matter halo mass. Therefore, the missing mass problem is not solved by MACHOs.
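The amplification these surveys look for follows the standard point-lens magnification formula, $A(u) = \frac{u^2 + 2}{u\sqrt{u^2 + 4}}$, where $u$ is the lens–source separation in units of the Einstein radius. The sketch below is illustrative only; the event parameters ($t_0$, $t_E$, $u_0$) are made-up example values, not fitted data.

```python
import numpy as np

def magnification(u):
    """Point-lens magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)),
    with u the lens-source separation in Einstein radii."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def light_curve(t, t0, tE, u0):
    """Magnification vs. time for a lens with closest approach u0 at time
    t0 and Einstein-radius crossing time tE (illustrative parameters)."""
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return magnification(u)

t = np.linspace(-40.0, 40.0, 9)  # days
print(light_curve(t, t0=0.0, tE=20.0, u0=0.3))
# The curve peaks at A(0.3) ~ 3.4 at t0 and falls back toward 1 far from
# the event: the symmetric, achromatic brightening the surveys search for.
```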
Types
MACHOs may sometimes be considered to include black holes. Isolated black holes without any matter around them are truly black in that they emit no light and any light shone upon them is absorbed and not reflected.
A black hole can sometimes be detected by the halo of bright gas and dust that forms around it as an accretion disk being pulled in by the black hole's gravity. Such a disk can generate jets of gas that are shot out away from the black hole because it cannot be absorbed quickly enough. An isolated black hole, however, would not have an accretion disk and would only be detectable by gravitational lensing.
Cosmologists doubt non-direct collapse black holes make up a majority of dark matter because the black holes are at isolated points of the galaxy. The largest contributor to the missing mass must be spread throughout the galaxy to balance the gravity. A minority of physicists, including Chapline and Laughlin, believe that the widely accepted model of the black hole is wrong and needs to be replaced by a new model, the dark-energy star; in the general case for the suggested new model, the cosmological distribution of dark energy would be slightly lumpy and dark-energy stars of primordial type might be a possible candidate for MACHOs.
Neutron stars, unlike black holes, are not heavy enough to collapse completely, and instead form a material rather like that of an atomic nucleus called neutron matter. After sufficient time these stars could radiate away enough energy to become cold enough that they would be too faint to see. Likewise, old white dwarfs may also become cold and dead, eventually becoming black dwarfs, although the universe is not thought to be old enough for any stars to have reached this stage.
Brown dwarfs have also been proposed as MACHO candidates. Brown dwarfs are sometimes called "failed stars" as they do not have enough mass for nuclear fusion to begin once their gravity causes them to collapse. Brown dwarfs are about thirteen to seventy-five times the mass of Jupiter. The contraction of material forming the brown dwarf heats them up so they only glow feebly at infrared wavelengths, making them difficult to detect. A survey of gravitational lensing effects in the direction of the Small Magellanic Cloud and Large Magellanic Cloud did not detect the number and type of lensing events expected if brown dwarfs made up a significant fraction of dark matter.
Theoretical considerations
Theoretical work simultaneously also showed that ancient MACHOs are not likely to account for the large amounts of dark matter now thought to be present in the universe. The Big Bang as it is currently understood could not have produced enough baryons and still be consistent with the observed elemental abundances, including the abundance of deuterium. Furthermore, separate observations of baryon acoustic oscillations, both in the cosmic microwave background and large-scale structure of galaxies, set limits on the ratio of baryons to the total amount of matter. These observations show that a large fraction of non-baryonic matter is necessary regardless of the presence or absence of MACHOs; however, MACHO candidates such as primordial black holes could be formed of non-baryonic matter (from pre-baryonic epochs of the early Big Bang).
See also
Weakly interacting massive particles (WIMPS), an alternative theory of dark matter
Robust associations of massive baryonic objects (RAMBOs)
MACHO Project, an observational search for MACHOs
References
Dark matter
Exotic matter | Massive compact halo object | [
"Physics",
"Astronomy"
] | 1,161 | [
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Unsolved problems in physics",
"Exotic matter",
"Physics beyond the Standard Model",
"Matter"
] |
199,949 | https://en.wikipedia.org/wiki/Many-minds%20interpretation | The many-minds interpretation of quantum mechanics extends the many-worlds interpretation by proposing that the distinction between worlds should be made at the level of the mind of an individual observer. The concept was first introduced in 1970 by H. Dieter Zeh as a variant of the Hugh Everett interpretation in connection with quantum decoherence, and later (in 1981) explicitly called a many or multi-consciousness interpretation. The name many-minds interpretation was first used by David Albert and Barry Loewer in 1988.
History
Interpretations of quantum mechanics
The various interpretations of quantum mechanics typically involve explaining the mathematical formalism of quantum mechanics, or creating a physical picture of the theory. While the mathematical structure has a strong foundation, there is still much debate about the physical and philosophical interpretation of the theory. These interpretations aim to tackle various concepts such as:
Evolution of the state of a quantum system (given by the wavefunction), typically through the use of the Schrödinger equation. This concept is almost universally accepted, and is rarely put to debate.
The measurement problem, which relates to what is called wavefunction collapse – the collapse of a quantum state into a definite measurement (i.e. a specific eigenstate of the wavefunction). The debate on whether this collapse actually occurs is a central problem in interpreting quantum mechanics.
The standard solution to the measurement problem is the "Orthodox" or "Copenhagen" interpretation, which claims that the wave function collapses as the result of a measurement by an observer or apparatus external to the quantum system. An alternative interpretation, the Many-worlds Interpretation, was first described by Hugh Everett in 1957 (where it was called the relative state interpretation, the name Many-worlds was coined by Bryce Seligman DeWitt starting in the 1960s and finalized in the 1970s). His formalism of quantum mechanics denied that a measurement requires a wave collapse, instead suggesting that all that is truly necessary of a measurement is that a quantum connection is formed between the particle, the measuring device, and the observer.
The many-worlds interpretation
In the original relative state formulation, Everett proposed that there is one universal wavefunction that describes the objective reality of the whole universe. He stated that when subsystems interact, the total system becomes a superposition of these subsystems. This includes observers and measurement systems, which become part of one universal state (the wavefunction) that is always described via the Schrödinger Equation (or its relativistic alternative). That is, the states of the subsystems that interacted become "entangled" in such a way that any definition of one must necessarily involve the other. Thus, each subsystem's state can only be described relative to each subsystem with which it interacts (hence the name relative state).
Everett suggested that the universe is actually indeterminate as a whole. For example, consider an observer measuring some particle that starts in an undetermined state, as both spin-up and spin-down, that is – a superposition of both possibilities. When an observer measures that particle's spin, however, it always registers as either up or down. The problem of how to understand this sudden shift from "both up and down" to "either up or down" is called the Measurement problem. According to the many-worlds interpretation, the act of measurement forced a “splitting” of the universe into two states, one spin-up and the other spin-down, and the two branches that extend from those two subsequently independent states. One branch measures up. The other measures down. Looking at the instrument informs the observer which branch he is on, but the system itself is indeterminate at this and, by logical extension, presumably any higher level.
The “worlds” in the many-worlds theory are then just the complete measurement histories up until and during the measurement in question, where the splitting happens. These “worlds” each describe a different state of the universal wave function and cannot communicate. There is no collapse of the wavefunction into one state or another; rather, an observer finds itself in the world leading up to the measurement it has made and is unaware of the other possibilities that are equally real.
The many-minds interpretation
The many-minds interpretation of quantum theory is many-worlds with the distinction between worlds constructed at the level of the individual observer. Rather than the worlds that branch, it is the observer's mind that branches.
The purpose of this interpretation is to overcome the fundamentally strange concept of observers being in a superposition with themselves. In their 1988 paper, Albert and Loewer argue that it simply makes no sense for one to think of the mind of an observer to be in an indefinite state. Rather, when someone answers the question about which state of a system they have observed, they must answer with complete certainty. If they are in a superposition of states, then this certainty is not possible and we arrive at a contradiction. To overcome this, they then suggest that it is merely the “bodies” of the minds that are in a superposition, and that the minds must have definite states that are never in superposition.
When an observer measures a quantum system and becomes entangled with it, the two together constitute a larger quantum system. A mental state of the brain corresponds to each possibility within the wave function. Ultimately, only one mind is experienced, and the others branch off and become inaccessible, albeit real. In this way, every sentient being is attributed an infinity of minds, whose prevalence corresponds to the amplitude of the wavefunction. When an observer checks a measurement, the probability of realizing a specific result directly correlates with the number of minds in which they see that result. It is in this way that the probabilistic nature of quantum measurement is obtained by the many-minds interpretation.
Quantum non-locality in the many-minds interpretation
Consider an experiment that measures the polarization of two photons. When the photon is created, it has an indeterminate polarization. If a stream of these photons is passed through a polarization filter, 50% of the light is passed through. This corresponds to each photon having a 50% chance of aligning with the filter and thus passing, or being misaligned (by 90 degrees relative to the polarization filter) and being absorbed. Quantum mechanically, this means the photon is in a superposition of states where it is either passed or absorbed. Now, consider the inclusion of another photon and polarization detector. Now, the photons are created in such a way that they are entangled. That is, when one photon takes on a polarization state, the other photon will always behave as if it has the same polarization. For simplicity, take the second filter to either be perfectly aligned with the first, or to be perfectly misaligned (90 degree difference in angle, such that it is absorbed). If the detectors are aligned, both photons are passed (i.e. they are said to agree). If they are misaligned, only the first passes and the second is absorbed (now they disagree). Thus, the entanglement causes perfect correlations between the two measurements – regardless of separation distance, making the interaction non-local. This sort of experiment is further explained in Tim Maudlin's Quantum Non-Locality and Relativity, and can be related to Bell test experiments. Now, consider the analysis of this experiment from the many minds point of view:
No sentient observer
Consider the case where there is no sentient observer, i.e. no mind present to observe the experiment. In this case, the detector will be in an indefinite state. The photon is both passed and absorbed, and will remain in this state. The correlations are upheld in that none of the possible "minds", or wave-function states, correspond to uncorrelated results.
One sentient observer
Now expand the situation to have one sentient being observing the device. Now, they too enter the indefinite state. Their eyes, body, and brain are seeing both spins at the same time. The mind, however, stochastically chooses one of the directions, and that is what the mind sees. When this observer views the second detector, their body will see both results. Their mind will choose the result that agrees with the first detector, and the observer will see the expected results. However, the observer's mind seeing one result does not directly affect the distant state – there is just no wave function in which the expected correlations do not exist. The true correlation only happens when they actually view the second detector.
Two sentient observers
When two people look at two different detectors that scan entangled particles, both observers will enter an indefinite state, as with one observer. These results need not agree – the second observer's mind does not have to have results that correlate with the first's. When one observer tells the results to the second observer, their two minds cannot communicate and thus will only interact with the other's body, which is still indefinite. When the second observer responds, his body will respond with whatever result agrees with the first observer's mind. This means that both observers' minds will be in a state of the wavefunction that always gets the expected results, but individually their results could be different.
Non-locality of the many-minds interpretation
As we have thus seen, any correlations in the wavefunctions of the observers' minds become concrete only after interaction between the different polarizers. The correlations at the level of individual minds correspond to the appearance of quantum non-locality (or, equivalently, violation of Bell's inequality). The many-minds interpretation must therefore be non-local in this sense; otherwise it could not explain the EPR–GHZ correlations.
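The strength of the correlations at stake can be made concrete with the standard quantum prediction for polarization-entangled photons, $E(a, b) = \cos 2(a - b)$ for polarizer angles $a$ and $b$. The sketch below uses the conventional CHSH angles; it is a plain statement of the quantum prediction, not anything specific to the many-minds interpretation.

```python
import math

def E(a_deg, b_deg):
    """Quantum correlation for polarization-entangled photons measured
    at polarizer angles a and b (in degrees): E = cos(2(a - b))."""
    return math.cos(math.radians(2.0 * (a_deg - b_deg)))

print(E(0.0, 0.0))   # 1.0: aligned filters always agree, as described above
print(E(0.0, 90.0))  # -1.0: perpendicular filters always disagree

# CHSH combination: any local hidden-variable model gives |S| <= 2,
# while quantum mechanics reaches 2*sqrt(2) at these angles.
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # ~2.828, violating Bell's inequality
```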
Support
There is currently no empirical evidence for the many-minds interpretation. However, there are theories that do not discredit the many-minds interpretation. In light of Bell's analysis of the consequences of quantum non-locality, empirical evidence is needed to avoid inventing novel fundamental concepts (hidden variables). Two different solutions of the measurement problem then appear conceivable: von Neumann's collapse or Everett's relative state interpretation. In both cases a (suitably modified) psycho-physical parallelism can be re-established.
If neural processes can be described and analyzed, then some experiments could potentially be created to test whether affecting neural processes can have an effect on a quantum system. Speculating about the details of this coupling between awareness and the local physical system is possible on a purely theoretical basis, but searching for it experimentally through neurological and psychological studies would be ideal.
Objections
Nothing within quantum theory itself requires each possibility within a wave function to complement a mental state. As all physical states (i.e. brain states) are quantum states, their associated mental states should be also. Nonetheless, it is not what one experiences within physical reality. Albert and Loewer argue that the mind must be intrinsically different than the physical reality as described by quantum theory. Thereby, they reject type-identity physicalism in favour of a non-reductive stance. However, Lockwood saves materialism through the notion of supervenience of the mental on the physical.
Nonetheless, the many-minds interpretation does not solve the mindless hulks problem as a problem of supervenience. Mental states do not supervene on brain states as a given brain state is compatible with different configurations of mental states.
Another serious objection is that workers in no collapse interpretations have produced no more than elementary models based on the definite existence of specific measuring devices. They have assumed, for example, that the Hilbert space of the universe splits naturally into a tensor product structure compatible with the measurement under consideration. They have also assumed, even when describing the behaviour of macroscopic objects, that it is appropriate to employ models in which only a few dimensions of Hilbert space are used to describe all the relevant behaviour.
Furthermore, as the many-minds interpretation is corroborated by our experience of physical reality, a notion of many unseen worlds and its compatibility with other physical theories (i.e. the principle of the conservation of mass) is difficult to reconcile. According to Schrödinger's equation, the mass-energy of the combined observed system and measurement apparatus is the same before and after. However, with every measurement process (i.e. splitting), the total mass-energy would seemingly increase.
Peter J. Lewis argues that the many-minds interpretation of quantum mechanics has absurd implications for agents facing life-or-death decisions.
In general, the many-minds theory holds that a conscious being who observes the outcome of a random zero-sum experiment will evolve into two successors in different observer states, each of whom observes one of the possible outcomes. Moreover, the theory advises one to favour choices in such situations in proportion to the probability that they will bring good results to one's various successors. But in a life-or-death case like an observer getting into the box with Schrödinger's cat, the observer will only have one successor, since one of the outcomes will ensure the observer's death. So it seems that the many-minds interpretation advises one to get in the box with the cat, since it is certain that one's only successor will emerge unharmed. See also quantum suicide and immortality.
Finally, it supposes that there is some physical distinction between a conscious observer and a non-conscious measuring device, so it seems to require eliminating the strong Church–Turing hypothesis or postulating a physical model for consciousness.
See also
Consciousness
Quantum suicide and immortality
Quantum mind
Many-worlds interpretation
Wave function
References
External links
Wikibook on consciousness
Bibliography on the Many-minds interpretation
Quantum measurement
Interpretations of quantum mechanics
Quantum mind | Many-minds interpretation | [
"Physics"
] | 2,806 | [
"Interpretations of quantum mechanics",
"Quantum mind",
"Quantum measurement",
"Quantum mechanics"
] |
200,115 | https://en.wikipedia.org/wiki/Trajectory | A trajectory or flight path is the path that an object with mass in motion follows through space as a function of time. In classical mechanics, a trajectory is defined by Hamiltonian mechanics via canonical coordinates; hence, a complete trajectory is defined by position and momentum, simultaneously.
The mass might be a projectile or a satellite. For example, it can be an orbit — the path of a planet, asteroid, or comet as it travels around a central mass.
In control theory, a trajectory is a time-ordered set of states of a dynamical system (see e.g. Poincaré map). In discrete mathematics, a trajectory is a sequence of values calculated by the iterated application of a mapping to an element of its source.
Physics of trajectories
A familiar example of a trajectory is the path of a projectile, such as a thrown ball or rock. In a significantly simplified model, the object moves only under the influence of a uniform gravitational force field. This can be a good approximation for a rock that is thrown for short distances, for example at the surface of the Moon. In this simple approximation, the trajectory takes the shape of a parabola. Generally when determining trajectories, it may be necessary to account for nonuniform gravitational forces and air resistance (drag and aerodynamics). This is the focus of the discipline of ballistics.
One of the remarkable achievements of Newtonian mechanics was the derivation of Kepler's laws of planetary motion. In the gravitational field of a point mass or a spherically-symmetrical extended mass (such as the Sun), the trajectory of a moving object is a conic section, usually an ellipse or a hyperbola. This agrees with the observed orbits of planets, comets, and artificial spacecraft to a reasonably good approximation, although if a comet passes close to the Sun, then it is also influenced by other forces such as the solar wind and radiation pressure, which modify the orbit and cause the comet to eject material into space.
Newton's theory later developed into the branch of theoretical physics known as classical mechanics. It employs the mathematics of differential calculus (which was also initiated by Newton in his youth). Over the centuries, countless scientists have contributed to the development of these two disciplines. Classical mechanics became a most prominent demonstration of the power of rational thought, i.e. reason, in science as well as technology. It helps to understand and predict an enormous range of phenomena; trajectories are but one example.
Consider a particle of mass $m$, moving in a potential field $V$. Physically speaking, mass represents inertia, and the field $V$ represents external forces of a particular kind known as "conservative". Given $V$ at every relevant position, there is a way to infer the associated force that would act at that position, say from gravity. Not all forces can be expressed in this way, however.
The motion of the particle is described by the second-order differential equation

$$m \frac{\mathrm{d}^2 \vec{x}(t)}{\mathrm{d}t^2} = -\nabla V(\vec{x}(t)) \quad \text{with} \quad \vec{x} = (x, y, z).$$

On the right-hand side, the force is given in terms of $\nabla V$, the gradient of the potential, taken at positions along the trajectory. This is the mathematical form of Newton's second law of motion: force equals mass times acceleration, for such situations.
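For potentials with no closed-form solution, this equation is integrated numerically. The following is a minimal sketch using velocity-Verlet integration — one common choice, assumed here rather than prescribed by the text — with an illustrative mass, step size, and potential.

```python
import numpy as np

def integrate(grad_V, m, x0, v0, dt, steps):
    """Velocity-Verlet integration of m * x'' = -grad V(x)."""
    x = np.array(x0, dtype=float)
    v = np.array(v0, dtype=float)
    a = -np.asarray(grad_V(x)) / m
    trajectory = [x.copy()]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2      # position update
        a_new = -np.asarray(grad_V(x)) / m    # force from the potential
        v = v + 0.5 * (a + a_new) * dt        # velocity update
        a = a_new
        trajectory.append(x.copy())
    return np.array(trajectory)

# Uniform gravity near the ground: V(x, y) = m*g*y, so grad V = (0, m*g).
m, g = 1.0, 9.81
path = integrate(lambda p: [0.0, m * g], m, x0=[0.0, 0.0],
                 v0=[10.0, 10.0], dt=0.01, steps=204)
print(path[-1])  # lands near x ~ 2*v_h*v_v/g ~ 20.4 m, with y back near 0
```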
Examples
Uniform gravity, neither drag nor wind
The ideal case of motion of a projectile in a uniform gravitational field in the absence of other forces (such as air drag) was first investigated by Galileo Galilei. To neglect the action of the atmosphere in shaping a trajectory would have been considered a futile hypothesis by practical-minded investigators all through the Middle Ages in Europe. Nevertheless, by anticipating the existence of the vacuum, later to be demonstrated on Earth by his collaborator Evangelista Torricelli, Galileo was able to initiate the future science of mechanics. In a near vacuum, as it turns out for instance on the Moon, his simplified parabolic trajectory proves essentially correct.
In the analysis that follows, we derive the equation of motion of a projectile as measured from an inertial frame at rest with respect to the ground. Associated with the frame is a right-hand coordinate system with its origin at the point of launch of the projectile. The $x$-axis is tangent to the ground, and the $y$-axis is perpendicular to it (parallel to the gravitational field lines). Let $g$ be the acceleration of gravity. Relative to the flat terrain, let the initial horizontal speed be $v_h = v \cos\theta$ and the initial vertical speed be $v_v = v \sin\theta$. It will also be shown that the range is $2 v_h v_v / g$, and the maximum altitude is $v_v^2 / 2g$. The maximum range for a given initial speed $v$ is obtained when $v_h = v_v$, i.e. the initial angle is 45°. This range is $v^2 / g$, and the maximum altitude at the maximum range is $v^2 / 4g$.
Derivation of the equation of motion
Assume the motion of the projectile is being measured from a free-fall frame which happens to be at $(x, y) = (0, 0)$ at $t = 0$. The equation of motion of the projectile in this frame (by the equivalence principle) would be $y = x \tan\theta$. The co-ordinates of this free-fall frame, with respect to our inertial frame, would be $y = -\frac{1}{2} g t^2$. That is, $y = -\frac{g}{2} (x / v_h)^2$.
Now translating back to the inertial frame, the co-ordinates of the projectile become $y = x \tan\theta - \frac{g}{2} (x / v_h)^2$. That is:

$$y = x \tan\theta - \frac{g x^2}{2 v_0^2 \cos^2\theta}$$

(where $v_0$ is the initial speed, $\theta$ is the angle of elevation, and $g$ is the acceleration due to gravity).
Range and height
The range, $R$, is the greatest distance the object travels along the $x$-axis. The initial velocity, $v_i$, is the speed at which said object is launched from the point of origin. The initial angle, $\theta_i$, is the angle at which said object is released. $g$ is the gravitational acceleration acting on the object, air resistance being neglected:

$$R = \frac{v_i^2 \sin 2\theta_i}{g}$$

The height, $h$, is the greatest parabolic height the object reaches within its trajectory:

$$h = \frac{v_i^2 \sin^2\theta_i}{2g}$$
Angle of elevation
In terms of angle of elevation $\theta$ and initial speed $v$:

$$v_h = v \cos\theta, \quad v_v = v \sin\theta$$

giving the range as

$$R = \frac{2 v^2 \cos\theta \sin\theta}{g} = \frac{v^2 \sin 2\theta}{g}$$

This equation can be rearranged to find the angle for a required range

$$\theta = \frac{1}{2} \arcsin\!\left(\frac{g R}{v^2}\right)$$

(Equation II: angle of projectile launch)
Note that the sine function is such that there are two solutions for $\theta$ for a given range $R$. The angle giving the maximum range can be found by considering the derivative of $R$ with respect to $\theta$ and setting it to zero:

$$\frac{\mathrm{d}R}{\mathrm{d}\theta} = \frac{2 v^2}{g} \cos 2\theta = 0$$

which has a nontrivial solution at $2\theta = \pi/2$, or $\theta = 45°$. The maximum range is then $R_{\max} = v^2 / g$. At this angle $\sin 2\theta = 1$, so the maximum height obtained is $\frac{v^2}{4g}$.
To find the angle giving the maximum height for a given speed, calculate the derivative of the maximum height $h = \frac{v^2 \sin^2\theta}{2g}$ with respect to $\theta$, that is

$$\frac{\mathrm{d}h}{\mathrm{d}\theta} = \frac{v^2}{2g} \, 2 \sin\theta \cos\theta = \frac{v^2}{2g} \sin 2\theta$$

which is zero when $\theta = \pi/2 = 90°$. So the maximum height is obtained when the projectile is fired straight up.
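These closed-form results are easy to check numerically. The sketch below assumes the drag-free model derived above; the launch speed is an arbitrary example value.

```python
import math

def range_and_height(v, theta_deg, g=9.81):
    """Drag-free projectile range R = v^2 sin(2*theta)/g and apex height
    h = v^2 sin^2(theta)/(2g)."""
    th = math.radians(theta_deg)
    return v**2 * math.sin(2 * th) / g, (v * math.sin(th))**2 / (2 * g)

v = 30.0  # m/s, arbitrary example speed
for theta in (15, 30, 45, 60, 75):
    R, h = range_and_height(v, theta)
    print(f"{theta:2d} deg: R = {R:6.1f} m, h = {h:5.1f} m")
# 45 deg gives the maximum range v^2/g (~91.7 m here), and complementary
# pairs (15/75, 30/60) share a range -- the two solutions of Equation II.
```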
Orbiting objects
If instead of a uniform downwards gravitational force we consider two bodies orbiting with the mutual gravitation between them, we obtain Kepler's laws of planetary motion. The derivation of these was one of the major works of Isaac Newton and provided much of the motivation for the development of differential calculus.
Catching balls
If a projectile, such as a baseball or cricket ball, travels in a parabolic path, with negligible air resistance, and if a player is positioned so as to catch it as it descends, he sees its angle of elevation increasing continuously throughout its flight. The tangent of the angle of elevation is proportional to the time since the ball was sent into the air, usually by being struck with a bat. Even when the ball is really descending, near the end of its flight, its angle of elevation seen by the player continues to increase. The player therefore sees it as if it were ascending vertically at constant speed. Finding the place from which the ball appears to rise steadily helps the player to position himself correctly to make the catch. If he is too close to the batsman who has hit the ball, it will appear to rise at an accelerating rate. If he is too far from the batsman, it will appear to slow rapidly, and then to descend.
Notes
See also
Aft-crossing trajectory
Displacement (geometry)
Galilean invariance
Orbit (dynamics)
Orbit (group theory)
Orbital trajectory
Phugoid
Planetary orbit
Porkchop plot
Projectile motion
Range of a projectile
Rigid body
World line
References
External links
Projectile Motion Flash Applet
Trajectory calculator
An interactive simulation on projectile motion
Projectile Lab, JavaScript trajectory simulator
Parabolic Projectile Motion: Shooting a Harmless Tranquilizer Dart at a Falling Monkey by Roberto Castilla-Meléndez, Roxana Ramírez-Herrera, and José Luis Gómez-Muñoz, The Wolfram Demonstrations Project.
Trajectory, ScienceWorld.
Java projectile-motion simulation, with first-order air resistance.
Java projectile-motion simulation; targeting solutions, parabola of safety.
Ballistics
Mechanics | Trajectory | [
"Physics",
"Engineering"
] | 1,732 | [
"Applied and interdisciplinary physics",
"Mechanics",
"Ballistics",
"Mechanical engineering"
] |
200,192 | https://en.wikipedia.org/wiki/Chromatid | A chromatid (Greek khrōmat- 'color' + -id) is one half of a duplicated chromosome. Before replication, one chromosome is composed of one DNA molecule. In replication, the DNA molecule is copied, and the two molecules are known as chromatids. During the later stages of cell division these chromatids separate longitudinally to become individual chromosomes.
Chromatid pairs are normally genetically identical, and said to be homozygous. However, if mutations occur, they will present slight differences, in which case they are heterozygous. The pairing of chromatids should not be confused with the ploidy of an organism, which is the number of homologous versions of a chromosome.
Sister chromatids
Chromatids may be sister or non-sister chromatids. A sister chromatid is either one of the two chromatids of the same chromosome joined together by a common centromere. A pair of sister chromatids is called a dyad. Once sister chromatids have separated (during the anaphase of mitosis or the anaphase II of meiosis during sexual reproduction), they are again called chromosomes, each having the same genetic mass as one of the individual chromatids that made up its parent. The DNA sequence of two sister chromatids is completely identical (apart from very rare DNA copying errors).
Sister chromatid exchange (SCE) is the exchange of genetic information between two sister chromatids. SCEs can occur during mitosis or meiosis. SCEs appear to primarily reflect DNA recombinational repair processes responding to DNA damage (see article Sister chromatid exchange).
Non-sister chromatids, on the other hand, refers to either of the two chromatids of paired homologous chromosomes, that is, the pairing of a paternal chromosome and a maternal chromosome. In chromosomal crossovers, non-sister (homologous) chromatids form chiasmata to exchange genetic material during the prophase I of meiosis (See Homologous chromosome pair).
See also
Kinetochore
References
Chromosomes
Molecular genetics
Cell biology
Mitosis
Telomeres | Chromatid | [
"Chemistry",
"Biology"
] | 472 | [
"Cell biology",
"Senescence",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Telomeres",
"Mitosis"
] |
200,201 | https://en.wikipedia.org/wiki/Lactic%20acid%20fermentation | Lactic acid fermentation is a metabolic process by which glucose or other six-carbon sugars (also, disaccharides of six-carbon sugars, e.g. sucrose or lactose) are converted into cellular energy and the metabolite lactate, which is lactic acid in solution. It is an anaerobic fermentation reaction that occurs in some bacteria and animal cells, such as muscle cells.
If oxygen is present in the cell, many organisms will bypass fermentation and undergo cellular respiration; however, facultative anaerobic organisms will both ferment and undergo respiration in the presence of oxygen. Even when oxygen is present and aerobic metabolism is happening in the mitochondria, fermentation will still occur if pyruvate builds up faster than it can be metabolized.
Lactate dehydrogenase catalyzes the interconversion of pyruvate and lactate with concomitant interconversion of NADH and NAD+.
In homolactic fermentation, one molecule of glucose is ultimately converted to two molecules of lactic acid. Heterolactic fermentation, by contrast, yields carbon dioxide and ethanol in addition to lactic acid, in a process called the phosphoketolase pathway.
History
Chemical analysis of archaeological finds shows that milk fermentation has been used since prehistory; its first applications were probably part of the Neolithic Revolution. Since milk naturally contains lactic acid bacteria, the discovery of the fermentation process was almost inevitable, as it happens spontaneously at an adequate temperature. The problem for these first farmers was that fresh milk is nearly indigestible by adults, so they had an interest in exploiting this mechanism. Lactic acid bacteria contain the enzymes needed to digest lactose, and their populations multiply strongly during fermentation. Therefore, milk fermented even for a short time contains enough enzymes to digest the lactose molecules once the milk is in the human body, which allows adults to consume it. Even safer was a longer fermentation, which was practiced for cheesemaking. This process was also discovered very long ago, as is proven by recipes for cheese production in cuneiform scripts, the first written documents that exist, and later in Babylonian and Egyptian texts. There is a theory of competitive advantage related to fermented milk products. It suggests that the women of these first settled agricultural civilisations could shorten the interval between children thanks to the additional lactose uptake from milk consumption. This factor may have given these societies an important advantage in out-competing hunter-gatherer societies.
With the increasing consumption of milk products, these societies developed lactase persistence through genetic adaptation, meaning that the milk-digesting enzyme lactase remained present in the body throughout life, so adults could drink unfermented milk too. This early habituation to lactose consumption in the first settler societies can still be observed today in regional differences in the concentration of this mutation. It is estimated that about 65% of the world's population still lacks it. Since these first societies came from regions around eastern Turkey to central Europe, the gene appears more frequently there and in North America, which was settled by Europeans. It is because of the dominance of this mutation that Western cultures regard lactose intolerance as unusual, when it is in fact more common than the mutation. On the contrary, lactose intolerance is much more present in Asian countries.
Milk products and their fermentation have had an important influence on some cultures' development. This is the case in Mongolia, where people often practice a pastoral form of agriculture. The milk that they produce and consume in these cultures is mainly mare milk and has a long tradition. But not every part or product of the fresh milk has the same meaning. For instance, the fattier part on the top, the "deež", is seen as the most valuable part and is therefore often used to honor guests.
Very important with often a traditional meaning as well are fermentation products of mare milk, like for example the slightly-alcoholic yogurt kumis. Consumption of these peaks during cultural festivities such as the Mongolian lunar new year (in spring). The time of this celebration is called the "white month", which indicates that milk products (called "white food" together with starchy vegetables, in comparison to meat products, called "black food") are a central part of this tradition. The purpose of these festivities is to "close" the past year – clean the house or the yurt, honor the animals for having provided their food, and prepare everything for the coming summer season – to be ready to "open" the new year. Consuming white food in this festive context is a way to connect to the past and to a national identity, which is the Mongolian empire personified by Genghis Khan. During the time of this empire, the fermented mare milk was the drink to honor and thank warriors and leading persons, it was not meant for everybody. Although it eventually became a drink for normal people, it has kept its honorable meaning. Like many other traditions, this one feels the influence of globalization. Other products, like industrial yogurt, coming mainly from China and western countries, have tended to replace it more and more, mainly in urban areas. However, in rural and poorer regions it is still of great importance.
Although this chemical process had been used in food production for thousands of years, microbial lactic acid fermentation was not properly described until much later. During the 19th century, several chemists discovered some fundamental concepts of organic chemistry. One of these was the French chemist Joseph Louis Gay-Lussac, who was especially interested in fermentation processes, and he passed this fascination to one of his best students, Justus von Liebig. Within a few years of each other, each of them, together with colleagues, described the chemical structure of the lactic acid molecule as we know it today. They had a purely chemical understanding of the fermentation process; it could not be observed using a microscope, and could only be optimized by chemical catalyzers. In 1857, the French chemist Louis Pasteur first described lactic acid as the product of a microbial fermentation. During this time, he worked at the University of Lille, where a local distillery asked him for advice concerning some fermentation problems. By chance, and with the poorly equipped laboratory he had at that time, he was able to discover that in this distillery two fermentations were taking place, a lactic acid one and an alcoholic one, both induced by microorganisms. He then continued the research on these discoveries in Paris, where he also published his theories, which stood in firm contradiction to the purely chemical account represented by Liebig and his followers. Even though Pasteur described some concepts that are still accepted today, Liebig refused to accept them. But even Pasteur himself wrote that he was "driven" to a completely new understanding of this chemical phenomenon. Although Pasteur did not find every detail of this process, he still discovered the main mechanism of how microbial lactic acid fermentation works. He was the first to describe fermentation as a "form of life without air".
Biochemistry
Homofermentative process
Homofermentative bacteria convert glucose to two molecules of lactate and use this reaction to perform substrate-level phosphorylation to make two molecules of ATP:
Glucose + 2 ADP + 2 Pi → 2 Lactate + 2 ATP
Heterofermentative process
Heterofermentative bacteria produce less lactate and less ATP, but produce several other end products:
Glucose + ADP + Pi → Lactate + Ethanol + CO2 + ATP
Examples include Leuconostoc mesenteroides, Lactobacillus bifermentans, and Leuconostoc lactis.
Bifidum pathway
Bifidobacterium bifidum utilizes a lactic acid fermentation pathway that produces more ATP than either homolactic fermentation or heterolactic fermentation:
2 Glucose + 5 ADP + 5 Pi → 3 Acetate + 2 Lactate + 5 ATP
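A quick arithmetic comparison of the three stoichiometries quoted above makes the yields concrete; the labels below are just for this example.

```python
# ATP yield per glucose implied by the three fermentation stoichiometries.
pathways = {
    "homolactic":   (1, 2),  # (glucose consumed, ATP produced)
    "heterolactic": (1, 1),
    "bifidum":      (2, 5),
}
for name, (glucose, atp) in pathways.items():
    print(f"{name}: {atp / glucose:.1f} ATP per glucose")
# homolactic 2.0, heterolactic 1.0, bifidum 2.5 -- the bifidum pathway
# yields the most ATP per glucose, as stated above.
```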
Major genera of lactose-fermenting bacteria
Some major bacterial strains identified as being able to ferment lactose are in the genera Escherichia, Citrobacter, Enterobacter and Klebsiella. All four of these genera fall under the family Enterobacteriaceae. They can be separated from each other by biochemical testing, and simple biological tests are readily available. Apart from whole-genome sequencing, common tests include H2S production, motility and citrate use, and the indole, methyl red and Voges-Proskauer tests.
Applications
Lactic acid fermentation is used in many areas of the world to produce foods that cannot be produced through other methods. The most commercially important genus of lactic acid-fermenting bacteria is Lactobacillus, though other bacteria and even yeast are sometimes used. Two of the most common applications of lactic acid fermentation are in the production of yogurt and sauerkraut.
Pickles
Fermented fish
In some Asian cuisines, fish is traditionally fermented with rice to produce lactic acid that preserves the fish. Examples of these dishes include burong isda of the Philippines; narezushi of Japan; and pla ra of Thailand. The same process is also used for shrimp in the Philippines in the dish known as balao-balao.
Kimchi
Kimchi also uses lactic acid fermentation.
Sauerkraut
Lactic acid fermentation is also used in the production of sauerkraut. The main type of bacteria used in the production of sauerkraut is of the genus Leuconostoc.
As in yogurt, when the acidity rises due to lactic acid-fermenting organisms, many other pathogenic microorganisms are killed. The bacteria produce lactic acid, as well as simple alcohols and other hydrocarbons. These may then combine to form esters, contributing to the unique flavor of sauerkraut.
Sour beer
Lactic acid is a component in the production of sour beers, including Lambics and Berliner Weisses.
Yogurt
The main method of producing yogurt is through the lactic acid fermentation of milk with harmless bacteria. The primary bacteria used are typically Lactobacillus bulgaricus and Streptococcus thermophilus, and United States as well as European law requires all yogurts to contain these two cultures (though others may be added as probiotic cultures). These bacteria produce lactic acid in the milk culture, decreasing its pH and causing it to congeal. The bacteria also produce compounds that give yogurt its distinctive flavor. An additional effect of the lowered pH is the incompatibility of the acidic environment with many other types of harmful bacteria.
For a probiotic yogurt, additional types of bacteria such as Lactobacillus acidophilus are also added to the culture.
In vegetables
Lactic acid bacteria (LAB) already exist as part of the natural flora of most vegetables. Lettuce and cabbage were examined to determine the types of lactic acid bacteria that exist in the leaves. Different types of LAB will produce different types of silage fermentation, which is the fermentation of the leafy foliage. Silage fermentation is an anaerobic reaction that reduces sugars to fermentation byproducts like lactic acid.
Physiological
Lactobacillus fermentation and the accompanying production of acid provide a protective vaginal microbiome that protects against the proliferation of pathogenic organisms.
Lactate fermentation and muscle cramps
During the 1990s, the lactic acid hypothesis was created to explain why people experience burning or muscle cramps during and after intense exercise. The hypothesis proposes that a lack of oxygen in muscle cells results in a switch from cellular respiration to fermentation. Lactic acid, created as a byproduct of the fermentation of pyruvate from glycolysis, accumulates in muscles, causing a burning sensation and cramps.
Research from 2006 has suggested that acidosis is not the main cause of muscle cramps. Instead, cramps may be due to a lack of potassium in muscles, leading to contractions under high stress.
Animals, in fact, do not produce lactic acid during fermentation. Despite the common use of the term lactic acid in the literature, the byproduct of fermentation in animal cells is lactate.
Another complication for the lactic acid hypothesis is that when sodium lactate is present in the body, the period of exhaustion in the host after a period of exercise is longer.
Lactate fermentation is important to muscle cell physiology. When muscle cells are undergoing intense activity, like sprinting, they need energy quickly. There is only enough ATP stored in muscle cells to last a few seconds of sprinting. The cells then default to fermentation, since they are in an anaerobic environment. Through lactate fermentation, muscle cells are able to regenerate NAD+ to continue glycolysis, even under strenuous activity. [5]
The vaginal environment is heavily influenced by lactic-acid-producing bacteria. Lactobacillus spp. that live in the vaginal canal assist in pH control. If the pH in the vagina becomes too basic, more lactic acid will be produced to lower the pH back to a more acidic level. Lactic-acid-producing bacteria also act as a protective barrier against possible pathogens, such as the species responsible for bacterial vaginosis and vaginitis, various fungi, and protozoa, through the production of hydrogen peroxide and antibacterial compounds. It is unclear whether lactic acid produced by fermentation plays any further role in the vaginal canal. [6]
Benefits for the lactose intolerant
In small amounts, lactic acid is good for the human body, providing energy and substrates as it moves through the cycle. Small studies have shown that the fermentation of lactose to lactic acid can help lactose-intolerant people. The fermentation process limits the amount of lactose available; with less lactose, there is less build-up inside the body, reducing bloating. Success of lactic fermentation was most evident in yogurt cultures. Further studies are being conducted on other milk products, like acidophilus milk.
Notes and references
Carbohydrate metabolism
Fermentation
Metabolic pathways | Lactic acid fermentation | [
"Chemistry",
"Biology"
] | 3,075 | [
"Carbohydrate metabolism",
"Cellular respiration",
"Biochemistry",
"Carbohydrate chemistry",
"Metabolic pathways",
"Metabolism",
"Fermentation"
] |
200,612 | https://en.wikipedia.org/wiki/Automotive%20aerodynamics | Automotive aerodynamics is the study of the aerodynamics of road vehicles. Its main goals are reducing drag and wind noise, minimizing noise emission, and preventing undesired lift forces and other causes of aerodynamic instability at high speeds. Air is also considered a fluid in this case. For some classes of racing vehicles, it may also be important to produce downforce to improve traction and thus cornering abilities.
History
The frictional force of aerodynamic drag increases significantly with vehicle speed. As early as the 1920s, engineers began to consider automobile shape in reducing aerodynamic drag at higher speeds. By the 1950s, German and British automotive engineers were systematically analyzing the effects of automotive drag for higher-performance vehicles. By the late 1960s, scientists also became aware of the significant increase in sound levels emitted by automobiles at high speed. These effects were understood to increase sound levels for adjacent land uses at a non-linear rate. Soon highway engineers began to design roadways to account for the sound levels produced by aerodynamic drag at speed, and automobile manufacturers considered the same factors in vehicle design.
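The speed dependence described here follows from the drag equation $F_D = \tfrac{1}{2}\rho C_d A v^2$. The sketch below uses illustrative values for the drag coefficient, frontal area, and air density typical of a passenger car; none of these numbers come from the text.

```python
def drag_force(v_ms, cd=0.30, area_m2=2.2, rho=1.225):
    """Aerodynamic drag F = 0.5 * rho * Cd * A * v^2, in newtons
    (Cd, A, and rho are assumed example values)."""
    return 0.5 * rho * cd * area_m2 * v_ms**2

for kmh in (50, 100, 130):
    v = kmh / 3.6
    f = drag_force(v)
    print(f"{kmh} km/h: drag {f:5.0f} N, power {f * v / 1000:4.1f} kW")
# Drag force grows with the square of speed, so the power needed to
# overcome it (F * v) grows with the cube: going from 50 to 100 km/h
# quadruples the force and multiplies the power by eight.
```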
Strategies for reducing drag
The deletion of parts on a vehicle is an easy way for designers and vehicle owners to reduce parasitic and frontal drag of the vehicle with little cost and effort. Deletion can be as simple as removing an aftermarket part, or part that has been installed on the vehicle after production, or having to modify and remove an OEM part, meaning any part of the vehicle that was originally manufactured on the vehicle. Most production sports cars and high efficiency vehicles come standard with many of these deletions in order to be competitive in the automotive and race market, while others choose to keep these drag-increasing aspects of the vehicle for their visual aspects, or to fit the typical uses of their customer base.
Spoilers
A rear spoiler usually comes standard in most sports vehicles and resembles the shape of a raised wing in the rear of the vehicle. The main purpose of a rear spoiler in a vehicle's design is to counteract lift, thereby increasing stability at higher speeds. In order to achieve the lowest possible drag, air must flow around the streamlined body of the vehicle without coming into contact with any areas of possible turbulence. A rear spoiler design that stands off the rear deck lid will increase downforce, reducing lift at high speeds while incurring a drag penalty. Flat spoilers, possibly angled slightly downward may reduce turbulence and thereby reduce the coefficient of drag. Some cars now feature automatically adjustable rear spoilers, so at lower speed the effect on drag is reduced when the benefits of reduced lift are not required.
Mirrors
Side mirrors both increase the frontal area of the vehicle and increase the coefficient of drag, since they protrude from the side of the vehicle. In order to decrease the impact that side mirrors have on the drag of the vehicle, the side mirrors can be replaced with smaller mirrors or mirrors with a different shape. Several concept cars of the 2010s replaced mirrors with tiny cameras, but this option is not common for production cars because most countries require side mirrors. One of the first production passenger automobiles to swap out mirrors for cameras was the Honda e; in this case, the cameras are claimed by Honda to have decreased aerodynamic drag by "around 90% compared to conventional door mirrors", which contributed to an approximately 3.8% reduction in drag for the entire vehicle. It is estimated that two side mirrors are responsible for 2 to 7% of the total aerodynamic drag of a motor vehicle, and that removing them could improve fuel economy by 1.5–2 miles per US gallon.
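As a rough consistency check on the figures quoted above — a back-of-the-envelope estimate, not a sourced calculation:

```python
# If cameras cut mirror drag by ~90% and that lowered whole-vehicle drag
# by ~3.8%, the mirrors' implied share of total drag follows directly.
whole_car_drag_reduction = 0.038  # ~3.8% total-drag reduction (Honda e claim)
mirror_drag_reduction = 0.90      # cameras cut mirror drag by ~90%

implied_mirror_share = whole_car_drag_reduction / mirror_drag_reduction
print(f"implied mirror share of total drag: {implied_mirror_share:.1%}")
# -> ~4.2%, consistent with the 2-7% range cited for conventional mirrors.
```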
Radio antennas
While they do not have the biggest impact on the drag coefficient due to their small size, radio antennas commonly found protruding from the front of the vehicle can be relocated and changed in design to rid the car of this added drag. The most common replacement for the standard car antenna is the shark fin antenna found in most high efficiency vehicles.
Wheels
When air flows around the wheel wells it gets disturbed by the rims of the vehicles and forms an area of turbulence around the wheel. In order for the air to flow more smoothly around the wheel well, smooth wheel covers are often applied. Smooth wheel covers are hub caps with no holes in them for air to pass through. This design reduces drag; however, it may cause the brakes to heat up more quickly because the covers prevent airflow around the brake system. As a result, this modification is more commonly seen in high efficiency vehicles rather than sports cars or racing vehicles.
Air curtains
Air curtains divert air flow from slots in the body and guide it towards the outside edges of the wheel wells.
Partial grille blocks
The front grille of a vehicle is used to direct air through the radiator. In a streamlined design the air flows around the vehicle rather than through it; however, the grille of a vehicle redirects airflow from around the vehicle to through the vehicle, which then increases the drag. In order to reduce this impact a grille block is often used. A grille block covers up a portion of, or the entirety of, the front grille of a vehicle. In most high efficiency models or in vehicles with low drag coefficients, a very small grille will already be built into the vehicle's design, eliminating the need for a grille block. The grille in most production vehicles is generally designed to maximize air flow through the radiator where it exits into the engine compartment. This design can actually create too much airflow into the engine compartment, preventing the engine from warming up in a timely manner, and in such cases a grille block is used to increase engine performance and reduce vehicle drag simultaneously.
Under trays
The underside of a vehicle often traps air in various places and adds turbulence around the vehicle. In most racing vehicles this is eliminated by covering the entire underside of the vehicle in what is called an under tray. This tray prevents any air from becoming trapped under the vehicle and reduces drag.
Fender skirts
Fender skirts are often made as extensions of the body panels of the vehicles and cover the entire wheel wells. Much like smooth wheel covers this modification reduces the drag of the vehicle by preventing any air from becoming trapped in the wheel well and assists in streamlining the body of the vehicle. Fender skirts are more commonly found on the rear wheel wells of a vehicle because the rear tires do not pivot when steering. This is commonly seen in vehicles such as the first generation Honda Insight. Front fender skirts have the same effect on reducing drag as the rear wheel skirts, but must be further offset from the body in order to compensate for the tire sticking out from the body of the vehicle as turns are made.
Boattails and Kammbacks
A boattail can greatly reduce a vehicle's total drag. Boattails create a teardrop shape that will give the vehicle a more streamlined profile, reducing the occurrence of drag inducing flow separation. A kammback is a truncated boattail. It is created as an extension of the rear of the vehicle, moving the rear backward at a slight angle toward the bumper of the car. This can reduce drag as well but a boattail would reduce the vehicle's drag more. Nonetheless, for practical and style reasons, a kammback is more commonly seen in racing, high efficiency vehicles, and trucking.
Comparison with aircraft aerodynamics
Automotive aerodynamics differs from aircraft aerodynamics in several ways:
The characteristic shape of a road vehicle is much less streamlined compared to an aircraft.
The vehicle operates very close to the ground, rather than in free air.
The operating speeds are lower (and aerodynamic drag varies as the square of speed).
A ground vehicle has fewer degrees of freedom than an aircraft, and its motion is less affected by aerodynamic forces.
Passenger and commercial ground vehicles have very specific design constraints such as their intended purpose, high safety standards (requiring, for example, more 'dead' structural space to act as crumple zones), and certain regulations.
Methods of studying aerodynamics
Automotive aerodynamics is studied using both computer modelling and wind tunnel testing. For the most accurate results from a wind tunnel test, the tunnel is sometimes equipped with a rolling road. This is a movable floor for the working section, which moves at the same speed as the air flow. This prevents a boundary layer from forming on the floor of the working section and affecting the results.
Downforce
Downforce describes the downward force created by the aerodynamic characteristics of a car that allows it to travel faster through a corner by holding the car to the track or road surface. Some elements that increase vehicle downforce will also increase drag.
Producing adequate downforce is important because it directly affects the car's cornering speed and traction.
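As a minimal illustration of how downforce grows with the square of speed, the sketch below evaluates the lift equation F = ½ρv²ClA for a rear wing. The lift coefficient and wing area are assumed illustrative values, not taken from any particular car.

```python
# Downforce from a rear wing via the lift equation F = 0.5 * rho * v^2 * Cl * A.
# The wing numbers are assumptions chosen to show the quadratic growth with speed.

rho = 1.225   # air density, kg/m^3
cl = 1.1      # assumed lift coefficient of the wing (acting downward)
area = 0.5    # assumed wing planform area, m^2

for speed_kmh in (100, 200, 300):
    v = speed_kmh / 3.6
    downforce = 0.5 * rho * v**2 * cl * area
    print(f"{speed_kmh} km/h -> {downforce:.0f} N of downforce")
```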
See also
Aerodynamic stability
Drafting
Drag reduction system
Downforce
Flight dynamics
Fluid dynamics
Ground effect in cars
Slipstream
Wing
References
External links
One of the first cars to generate downforce - The Prevost analysed in CFD
Automotive engineering
Aerodynamics
Articles containing video clips
Vehicle dynamics | Automotive aerodynamics | [
"Chemistry",
"Engineering"
] | 1,790 | [
"Aerodynamics",
"Automotive engineering",
"Mechanical engineering by discipline",
"Aerospace engineering",
"Fluid dynamics"
] |
201,002 | https://en.wikipedia.org/wiki/Magnetostriction | Magnetostriction is a property of magnetic materials that causes them to change their shape or dimensions during the process of magnetization. The variation of materials' magnetization due to the applied magnetic field changes the magnetostrictive strain until reaching its saturation value, λ. The effect was first identified in 1842 by James Joule when observing a sample of iron.
Magnetostriction applies to magnetic fields, while electrostriction applies to electric fields.
Magnetostriction causes energy loss due to frictional heating in susceptible ferromagnetic cores, and is also responsible for the low-pitched humming sound that can be heard coming from transformers, where alternating currents produce a changing magnetic field.
Explanation
Internally, ferromagnetic materials have a structure that is divided into domains, each of which is a region of uniform magnetization. When a magnetic field is applied, the boundaries between the domains shift and the domains rotate; both of these effects cause a change in the material's dimensions. The reason that a change in the magnetic domains of a material results in a change in the material's dimensions is a consequence of magnetocrystalline anisotropy; it takes more energy to magnetize a crystalline material in one direction than in another. If a magnetic field is applied to the material at an angle to an easy axis of magnetization, the material will tend to rearrange its structure so that an easy axis is aligned with the field to minimize the free energy of the system. Since different crystal directions are associated with different lengths, this effect induces a strain in the material.
The reciprocal effect, the change of the magnetic susceptibility (response to an applied field) of a material when subjected to a mechanical stress, is called the Villari effect. Two other effects are related to magnetostriction: the Matteucci effect is the creation of a helical anisotropy of the susceptibility of a magnetostrictive material when subjected to a torque and the Wiedemann effect is the twisting of these materials when a helical magnetic field is applied to them.
The Villari reversal is the change in sign of the magnetostriction of iron from positive to negative when exposed to magnetic fields of approximately 40 kA/m.
On magnetization, a magnetic material undergoes changes in volume which are small: of the order of 10⁻⁶.
Magnetostrictive hysteresis loop
Like flux density, the magnetostriction also exhibits hysteresis versus the strength of the magnetizing field. The shape of this hysteresis loop (called "dragonfly loop") can be reproduced using the Jiles-Atherton model.
Magnetostrictive materials
Magnetostrictive materials can convert magnetic energy into kinetic energy, or the reverse, and are used to build actuators and sensors. The property can be quantified by the magnetostrictive coefficient, λ, which may be positive or negative and is defined as the fractional change in length as the magnetization of the material increases from zero to the saturation value. The effect is responsible for the familiar "electric hum" which can be heard near transformers and high power electrical devices.
Cobalt exhibits the largest room-temperature magnetostriction of a pure element at 60 microstrains. Among alloys, the highest known magnetostriction is exhibited by Terfenol-D (Ter for terbium, Fe for iron, NOL for Naval Ordnance Laboratory, and D for dysprosium). Terfenol-D exhibits about 2,000 microstrains in a field of 160 kA/m (2 kOe) at room temperature and is the most commonly used engineering magnetostrictive material. Galfenol (an iron–gallium alloy) and Alfer (an iron–aluminium alloy) are newer alloys that exhibit 200–400 microstrains at lower applied fields (~200 Oe) and have better mechanical properties than the brittle Terfenol-D. Both of these alloys have <100> easy axes for magnetostriction and demonstrate sufficient ductility for sensor and actuator applications.
Another very common magnetostrictive composite is the amorphous alloy sold under the trade name Metglas 2605SC. Favourable properties of this material are its high saturation-magnetostriction constant, λ, of about 20 microstrains and more, coupled with a low magnetic-anisotropy field strength, HA, of less than 1 kA/m (to reach magnetic saturation). Metglas 2605SC also exhibits a very strong ΔE-effect with reductions in the effective Young's modulus up to about 80% in bulk. This helps build energy-efficient magnetic MEMS.
Cobalt ferrite (CoO·Fe2O3) is also used mainly in magnetostrictive applications such as sensors and actuators, thanks to its high saturation magnetostriction (~200 parts per million). In the absence of rare-earth elements, it is a good substitute for Terfenol-D. Moreover, its magnetostrictive properties can be tuned by inducing a magnetic uniaxial anisotropy. This can be done by magnetic annealing, magnetic field assisted compaction, or reaction under uniaxial pressure. This last solution has the advantage of being ultrafast (20 min), thanks to the use of spark plasma sintering.
In early sonar transducers during World War II, nickel was used as a magnetostrictive material. To alleviate the shortage of nickel, the Japanese navy used an iron-aluminium alloy from the Alperm family.
Mechanical behaviors of magnetostrictive alloys
Effect of microstructure on elastic strain alloys
Single-crystal alloys exhibit superior microstrain, but are vulnerable to yielding due to the anisotropic mechanical properties of most metals. It has been observed that for polycrystalline alloys with a high area coverage of preferential grains for microstrain, the mechanical properties (ductility) of magnetostrictive alloys can be significantly improved. Targeted metallurgical processing steps promote abnormal grain growth of {011} grains in galfenol and alfenol thin sheets, which contain two easy axes for magnetic domain alignment during magnetostriction. This can be accomplished by adding particles such as boride species and niobium carbide (NbC) during initial chill casting of the ingot.
For a polycrystalline alloy, an established formula for the saturation magnetostriction, λs, from known directional microstrain measurements is:
λs = (2λ100 + 3λ111) / 5
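The sketch below evaluates this isotropic average in Python; the directional values λ100 and λ111 passed in are hypothetical placeholders, not measured data for any of the alloys named above.

```python
# Isotropic average of directional saturation magnetostrictions for a
# randomly oriented polycrystal: lambda_s = (2*lambda_100 + 3*lambda_111) / 5.

def lambda_s(lambda_100: float, lambda_111: float) -> float:
    """Polycrystalline saturation magnetostriction from directional values."""
    return (2.0 * lambda_100 + 3.0 * lambda_111) / 5.0

# Hypothetical directional microstrains (dimensionless strain):
print(lambda_s(lambda_100=200e-6, lambda_111=20e-6))  # -> 9.2e-05
```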
During subsequent hot rolling and recrystallization steps, particle strengthening occurs, in which the particles introduce a “pinning” force at grain boundaries that hinders normal (stochastic) grain growth during an atmosphere-assisted annealing step. Thus, a single-crystal-like texture (~90% {011} grain coverage) is attainable, reducing the interference with magnetic domain alignment and increasing the microstrain attainable for polycrystalline alloys as measured by semiconducting strain gauges. These surface textures can be visualized using electron backscatter diffraction (EBSD) or related diffraction techniques.
Compressive stress to induce domain alignment
For actuator applications, maximum rotation of magnetic moments leads to the highest possible magnetostriction output. This can be achieved by processing techniques such as stress annealing and field annealing. However, mechanical pre-stresses can also be applied to thin sheets to induce alignment perpendicular to actuation as long as the stress is below the buckling limit. For example, it has been demonstrated that applied compressive pre-stress of up to ~50 MPa can result in an increase of magnetostriction by ~90%. This is hypothesized to be due to a "jump" in initial alignment of domains perpendicular to applied stress and improved final alignment parallel to applied stress.
Constitutive behavior of magnetostrictive materials
These materials generally show non-linear behavior with a change in applied magnetic field or stress. For small magnetic fields, linear piezomagnetic constitutive behavior is enough. Non-linear magnetic behavior is captured using a classical macroscopic model such as the Preisach model and Jiles-Atherton model. For capturing magneto-mechanical behavior, Armstrong proposed an "energy average" approach. More recently, Wahi et al. have proposed a computationally efficient constitutive model wherein constitutive behavior is captured using a "locally linearizing" scheme.
Applications
Electronic article surveillance – using magnetostriction to prevent shoplifting
Magnetostrictive delay lines - an earlier form of computer memory
Magnetostrictive loudspeakers and headphones
See also
Electromagnetically induced acoustic noise and vibration
Inverse magnetostrictive effect
Wiedemann effect – a torsional force caused by magnetostriction
Magnetomechanical effects for a collection of similar effects
Magnetocaloric effect
Electrostriction
Piezoelectricity
Piezomagnetism
SoundBug
FeONIC – developer of audio products using magnetostriction
Terfenol-D
Galfenol
References
External links
Magnetostriction
Invisible Speakers from Feonic that use Magnetostriction
Magnetostrictive alloy maker: REMA-CN
Magnetic ordering | Magnetostriction | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,927 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
201,080 | https://en.wikipedia.org/wiki/Wiedemann%20effect | The twisting of a ferromagnetic rod through which an electric current is flowing when the rod is placed in a longitudinal magnetic field. It was discovered by the German physicist Gustav Wiedemann in 1858
. The Wiedemann effect is one of the manifestations of magnetostriction in a field formed by the combination of a longitudinal magnetic field and a circular magnetic field that is created by an electric current. If the electric current (or the magnetic field) is alternating, the rod will begin torsional oscillation.
In the linear approximation, the angle of rod torsion α does not depend on the rod's cross-sectional form and is defined only by the current density and the magnetoelastic properties of the rod:

α = βj / G,

where
j is the current density;
β is the magnetoelastic parameter, proportional to the longitudinal magnetic field value;
G is the shear modulus.
Applications
Magnetostrictive position sensors use the Wiedemann effect to excite an ultrasonic pulse. Typically a small magnet is used to mark a position along a magnetostrictive wire. The magnetic field from a short current pulse in the wire combined with that from the position magnet excites the ultrasonic pulse. The time required for this pulse to travel from the point of excitation to a pickup at the end of the wire gives the position. Reflections from the other end of the wire could lead to disturbances; in order to avoid this, the wire is connected to a mechanical damper at that end.
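The arithmetic behind such a sensor is a simple time-of-flight calculation, sketched below. The torsional wave speed is an assumed typical value for a magnetostrictive waveguide, and the function name is invented for illustration.

```python
# Toy time-of-flight calculation for a magnetostrictive position sensor:
# the magnet position is the torsional-wave speed in the waveguide times
# the measured travel time. The wave speed below is an assumed typical value.

WAVE_SPEED = 2800.0  # assumed torsional ultrasonic wave speed in the wire, m/s

def magnet_position(travel_time_us: float) -> float:
    """Return the magnet position in metres from a travel time in microseconds."""
    return WAVE_SPEED * travel_time_us * 1e-6

print(magnet_position(357.0))  # about 1.0 m from the pickup
```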
See also
Matteucci effect – the inverse effect
Magnetostriction
Magnetomechanical effects for a collection of similar effects
References
Magnetism
Magnetic ordering | Wiedemann effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 320 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
201,268 | https://en.wikipedia.org/wiki/RNA%20polymerase | In molecular biology, RNA polymerase (abbreviated RNAP or RNApol), or more specifically DNA-directed/dependent RNA polymerase (DdRP), is an enzyme that catalyzes the chemical reactions that synthesize RNA from a DNA template.
RNAP locally opens the double-stranded DNA (it possesses intrinsic helicase activity, so no separate helicase enzyme is required) so that one strand of the exposed nucleotides can be used as a template for the synthesis of RNA, a process called transcription. A transcription factor and its associated transcription mediator complex must be attached to a DNA binding site called a promoter region before RNAP can initiate the DNA unwinding at that position. RNAP not only initiates RNA transcription, it also guides the nucleotides into position, facilitates attachment and elongation, has intrinsic proofreading and replacement capabilities, and termination recognition capability. In eukaryotes, RNAP can build chains as long as 2.4 million nucleotides.
RNAP produces RNA that, functionally, is either for protein coding, i.e. messenger RNA (mRNA); or non-coding (so-called "RNA genes"). Examples of four functional types of RNA genes are:
Transfer RNA (tRNA) Transfers specific amino acids to growing polypeptide chains at the ribosomal site of protein synthesis during translation;
Ribosomal RNA (rRNA) Incorporates into ribosomes;
Micro RNA (miRNA) Regulates gene activity and RNA silencing; and,
Catalytic RNA (ribozyme) Functions as an enzymatically active RNA molecule.
RNA polymerase is essential to life, and is found in all living organisms and many viruses. Depending on the organism, a RNA polymerase can be a protein complex (multi-subunit RNAP) or only consist of one subunit (single-subunit RNAP, ssRNAP), each representing an independent lineage. The former is found in bacteria, archaea, and eukaryotes alike, sharing a similar core structure and mechanism. The latter is found in phages as well as eukaryotic chloroplasts and mitochondria, and is related to modern DNA polymerases. Eukaryotic and archaeal RNAPs have more subunits than bacterial ones do, and are controlled differently.
Bacteria and archaea only have one RNA polymerase. Eukaryotes have multiple types of nuclear RNAP, each responsible for synthesis of a distinct subset of RNA.
Structure
The 2006 Nobel Prize in Chemistry was awarded to Roger D. Kornberg for creating detailed molecular images of RNA polymerase during various stages of the transcription process.
In most prokaryotes, a single RNA polymerase species transcribes all types of RNA. RNA polymerase "core" from E. coli consists of five subunits: two alpha (α) subunits of 36 kDa, a beta (β) subunit of 150 kDa, a beta prime subunit (β′) of 155 kDa, and a small omega (ω) subunit. A sigma (σ) factor binds to the core, forming the holoenzyme. After transcription starts, the factor can unbind and let the core enzyme proceed with its work. The core RNA polymerase complex forms a "crab claw" or "clamp-jaw" structure with an internal channel running along the full length. Eukaryotic and archaeal RNA polymerases have a similar core structure and work in a similar manner, although they have many extra subunits.
All RNAPs contain metal cofactors, in particular zinc and magnesium cations which aid in the transcription process.
Function
Control of the process of gene transcription affects patterns of gene expression and, thereby, allows a cell to adapt to a changing environment, perform specialized roles within an organism, and maintain basic metabolic processes necessary for survival. Therefore, it is hardly surprising that the activity of RNAP is complex and highly regulated. In Escherichia coli bacteria, more than 100 transcription factors have been identified, which modify the activity of RNAP.
RNAP can initiate transcription at specific DNA sequences known as promoters. It then produces an RNA chain, which is complementary to the template DNA strand. The process of adding nucleotides to the RNA strand is known as elongation; in eukaryotes, RNAP can build chains as long as 2.4 million nucleotides (the full length of the dystrophin gene). RNAP will preferentially release its RNA transcript at specific DNA sequences encoded at the end of genes, which are known as terminators.
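A toy sketch of the template rule follows, in Python: the transcript is built as the complement of the DNA template strand (A→U, T→A, C→G, G→C), read 3′→5′ so the RNA grows 5′→3′. Promoters, factors, and termination are ignored, and the function name and example sequence are invented for illustration.

```python
# Toy model of transcription: build the RNA complement of a DNA template
# strand. This illustrates only the base-pairing rule, nothing else.

PAIRING = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_3_to_5: str) -> str:
    """Return the mRNA produced from a DNA template strand given 3'->5'."""
    return "".join(PAIRING[base] for base in template_3_to_5.upper())

print(transcribe("TACGGCAT"))  # -> AUGCCGUA
```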
Products of RNAP include:
Messenger RNA (mRNA)—template for the synthesis of proteins by ribosomes.
Non-coding RNA or "RNA genes"—a broad class of genes that encode RNA that is not translated into protein. The most prominent examples of RNA genes are transfer RNA (tRNA) and ribosomal RNA (rRNA), both of which are involved in the process of translation. However, since the late 1990s, many new RNA genes have been found, and thus RNA genes may play a much more significant role than previously thought.
Transfer RNA (tRNA)—transfers specific amino acids to growing polypeptide chains at the ribosomal site of protein synthesis during translation
Ribosomal RNA (rRNA)—a component of ribosomes
Micro RNA—regulates gene activity
Catalytic RNA (Ribozyme)—enzymatically active RNA molecules
RNAP accomplishes de novo synthesis. It is able to do this because specific interactions with the initiating nucleotide hold RNAP rigidly in place, facilitating chemical attack on the incoming nucleotide. Such specific interactions explain why RNAP prefers to start transcripts with ATP (followed by GTP, UTP, and then CTP). In contrast to DNA polymerase, RNAP includes helicase activity, therefore no separate enzyme is needed to unwind DNA.
Action
Initiation
RNA polymerase binding in bacteria involves the sigma factor recognizing the core promoter region containing the −35 and −10 elements (located before the beginning of sequence to be transcribed) and also, at some promoters, the α subunit C-terminal domain recognizing promoter upstream elements. There are multiple interchangeable sigma factors, each of which recognizes a distinct set of promoters. For example, in E. coli, σ70 is expressed under normal conditions and recognizes promoters for genes required under normal conditions ("housekeeping genes"), while σ32 recognizes promoters for genes required at high temperatures ("heat-shock genes"). In archaea and eukaryotes, the functions of the bacterial general transcription factor sigma are performed by multiple general transcription factors that work together. The RNA polymerase-promoter closed complex is usually referred to as the "transcription preinitiation complex."
After binding to the DNA, the RNA polymerase switches from a closed complex to an open complex. This change involves the separation of the DNA strands to form an unwound section of DNA of approximately 13 bp, referred to as the "transcription bubble". Supercoiling plays an important part in polymerase activity because of the unwinding and rewinding of DNA. Because regions of DNA in front of RNAP are unwound, there are compensatory positive supercoils. Regions behind RNAP are rewound and negative supercoils are present.
Promoter escape
RNA polymerase then starts to synthesize the initial DNA-RNA heteroduplex, with ribonucleotides base-paired to the template DNA strand according to Watson-Crick base-pairing interactions. As noted above, RNA polymerase makes contacts with the promoter region. However these stabilizing contacts inhibit the enzyme's ability to access DNA further downstream and thus the synthesis of the full-length product. In order to continue RNA synthesis, RNA polymerase must escape the promoter. It must maintain promoter contacts while unwinding more downstream DNA for synthesis, "scrunching" more downstream DNA into the initiation complex. During the promoter escape transition, RNA polymerase is considered a "stressed intermediate." Thermodynamically the stress accumulates from the DNA-unwinding and DNA-compaction activities. Once the DNA-RNA heteroduplex is long enough (~10 bp), RNA polymerase releases its upstream contacts and effectively achieves the promoter escape transition into the elongation phase. The heteroduplex at the active center stabilizes the elongation complex.
However, promoter escape is not the only outcome. RNA polymerase can also relieve the stress by releasing its downstream contacts, arresting transcription. The paused transcribing complex has two options: (1) release the nascent transcript and begin anew at the promoter or (2) reestablish a new 3′-OH on the nascent transcript at the active site via RNA polymerase's catalytic activity and recommence DNA scrunching to achieve promoter escape. Abortive initiation, the unproductive cycling of RNA polymerase before the promoter escape transition, results in short RNA fragments of around 9 bp in a process known as abortive transcription. The extent of abortive initiation depends on the presence of transcription factors and the strength of the promoter contacts.
Elongation
The 17-bp transcriptional complex has an 8-bp DNA-RNA hybrid, that is, 8 base-pairs involve the RNA transcript bound to the DNA template strand. As transcription progresses, ribonucleotides are added to the 3′ end of the RNA transcript and the RNAP complex moves along the DNA. The characteristic elongation rates in prokaryotes and eukaryotes are about 10–100 nts/sec.
Aspartyl (asp) residues in the RNAP will hold on to Mg2+ ions, which will, in turn, coordinate the phosphates of the ribonucleotides. The first Mg2+ will hold on to the α-phosphate of the NTP to be added. This allows the nucleophilic attack of the 3′-OH from the RNA transcript, adding another NTP to the chain. The second Mg2+ will hold on to the pyrophosphate of the NTP. The overall reaction equation is:
(NMP)n + NTP → (NMP)n+1 + PPi
Fidelity
Unlike the proofreading mechanisms of DNA polymerase, those of RNAP have only recently been investigated. Proofreading begins with separation of the mis-incorporated nucleotide from the DNA template. This pauses transcription. The polymerase then backtracks by one position and cleaves the dinucleotide that contains the mismatched nucleotide. In the RNA polymerase this occurs at the same active site used for polymerization and is therefore markedly different from the DNA polymerase, where proofreading occurs at a distinct nuclease active site.
The overall error rate is around 10⁻⁴ to 10⁻⁶.
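Given the quoted error rates, a one-line calculation shows the scale of misincorporation in a long transcript; the sketch below uses the 2.4-million-nucleotide dystrophin transcript mentioned earlier as the length.

```python
# Expected number of misincorporated nucleotides in a transcript:
# simply the error rate times the transcript length.

transcript_length = 2_400_000  # nucleotides (the dystrophin example above)

for error_rate in (1e-4, 1e-6):
    expected_errors = error_rate * transcript_length
    print(f"error rate {error_rate:.0e} -> ~{expected_errors:.0f} errors expected")
```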
Termination
In bacteria, termination of RNA transcription can be rho-dependent or rho-independent. The former relies on the rho factor, which destabilizes the DNA-RNA heteroduplex and causes RNA release. The latter, also known as intrinsic termination, relies on a palindromic region of DNA. Transcribing the region causes the formation of a "hairpin" structure from the RNA transcription looping and binding upon itself. This hairpin structure is often rich in G-C base-pairs, making it more stable than the DNA-RNA hybrid itself. As a result, the 8 bp DNA-RNA hybrid in the transcription complex shifts to a 4 bp hybrid. These last 4 base pairs are weak A-U base pairs, and the entire RNA transcript will fall off the DNA.
Transcription termination in eukaryotes is less well understood than in bacteria, but involves cleavage of the new transcript followed by template-independent addition of adenines at its new 3′ end, in a process called polyadenylation.
Other organisms
Given that DNA and RNA polymerases both carry out template-dependent nucleotide polymerization, it might be expected that the two types of enzymes would be structurally related. However, x-ray crystallographic studies of both types of enzymes reveal that, other than containing a critical Mg2+ ion at the catalytic site, they are virtually unrelated to each other; indeed template-dependent nucleotide polymerizing enzymes seem to have arisen independently twice during the early evolution of cells. One lineage led to the modern DNA polymerases and reverse transcriptases, as well as to a few single-subunit RNA polymerases (ssRNAP) from phages and organelles. The other multi-subunit RNAP lineage formed all of the modern cellular RNA polymerases.
Bacteria
In bacteria, the same enzyme catalyzes the synthesis of mRNA and non-coding RNA (ncRNA).
RNAP is a large molecule. The core enzyme has five subunits (~400 kDa):
β′ The β′ subunit is the largest subunit, and is encoded by the rpoC gene. The β′ subunit contains part of the active center responsible for RNA synthesis and contains some of the determinants for non-sequence-specific interactions with DNA and nascent RNA. It is split into two subunits in Cyanobacteria and chloroplasts.
β The β subunit is the second-largest subunit, and is encoded by the rpoB gene. The β subunit contains the rest of the active center responsible for RNA synthesis and contains the rest of the determinants for non-sequence-specific interactions with DNA and nascent RNA.
α (αI and αII) Two copies of the α subunit, being the third-largest subunit, are present in a molecule of RNAP: αI and αII (one and two). Each α subunit contains two domains: αNTD (N-terminal domain) and αCTD (C-terminal domain). αNTD contains determinants for assembly of RNAP. αCTD (C-terminal domain) contains determinants for interaction with promoter DNA, making non-sequence-non-specific interactions at most promoters and sequence-specific interactions at upstream-element-containing promoters, and contains determinants for interactions with regulatory factors.
ω The ω subunit is the smallest subunit. The ω subunit facilitates assembly of RNAP and stabilizes assembled RNAP.
In order to bind promoters, RNAP core associates with the transcription initiation factor sigma (σ) to form RNA polymerase holoenzyme. Sigma reduces the affinity of RNAP for nonspecific DNA while increasing specificity for promoters, allowing transcription to initiate at correct sites. The complete holoenzyme therefore has 6 subunits: β′, β, αI, αII, ω, and σ (~450 kDa).
Eukaryotes
Eukaryotes have multiple types of nuclear RNAP, each responsible for synthesis of a distinct subset of RNA. All are structurally and mechanistically related to each other and to bacterial RNAP:
Eukaryotic chloroplasts contain a multi-subunit RNAP ("PEP, plastid-encoded polymerase"). Due to its bacterial origin, the organization of PEP resembles that of current bacterial RNA polymerases: It is encoded by the RPOA, RPOB, RPOC1 and RPOC2 genes on the plastome, which as proteins form the core subunits of PEP, respectively named α, β, β′ and β″. Similar to the RNA polymerase in E. coli, PEP requires the presence of sigma (σ) factors for the recognition of its promoters, containing the -10 and -35 motifs. Despite the many commonalities between plant organellar and bacterial RNA polymerases and their structure, PEP additionally requires the association of a number of nuclear encoded proteins, termed PAPs (PEP-associated proteins), which form essential components that are closely associated with the PEP complex in plants. Initially, a group consisting of 10 PAPs was identified through biochemical methods, which was later extended to 12 PAPs.
Chloroplasts also contain a second, structurally and mechanistically unrelated, single-subunit RNAP ("nucleus-encoded polymerase", NEP). Eukaryotic mitochondria use POLRMT (human), a nucleus-encoded single-subunit RNAP. Such phage-like polymerases are referred to as RpoT in plants.
Archaea
Archaea have a single type of RNAP, responsible for the synthesis of all RNA. Archaeal RNAP is structurally and mechanistically similar to bacterial RNAP and eukaryotic nuclear RNAP I-V, and is especially closely structurally and mechanistically related to eukaryotic nuclear RNAP II.
The history of the discovery of the archaeal RNA polymerase is quite recent. The first analysis of the RNAP of an archaeon was performed in 1971, when the RNAP from the extreme halophile Halobacterium cutirubrum was isolated and purified. Crystal structures of RNAPs from Sulfolobus solfataricus and Sulfolobus shibatae set the total number of identified archaeal subunits at thirteen.
Archaea have the subunit corresponding to eukaryotic Rpb1 split into two. There is no homolog to eukaryotic Rpb9 (POLR2I) in the S. shibatae complex, although TFS (TFIIS homolog) has been proposed as one based on similarity. There is an additional subunit dubbed Rpo13; together with Rpo5 it occupies a space filled by an insertion found in bacterial β′ subunits (1,377–1,420 in Taq). An earlier, lower-resolution study on the S. solfataricus structure did not find Rpo13 and only assigned the space to Rpo5/Rpb5. Rpo3 is notable in that it is an iron–sulfur protein. The RNAP I/III subunit AC40 found in some eukaryotes shares similar sequences, but does not bind iron. This domain, in either case, serves a structural function.
Archaeal RNAP subunits previously used an "RpoX" nomenclature, where each subunit was assigned a letter in a way unrelated to any other system. In 2009, a new nomenclature based on the eukaryotic Pol II subunit "Rpb" numbering was proposed.
Viruses
Orthopoxviruses and some other nucleocytoplasmic large DNA viruses synthesize RNA using a virally encoded multi-subunit RNAP. They are most similar to eukaryotic RNAPs, with some subunits minified or removed. Exactly which RNAP they are most similar to is a topic of debate. Most other viruses that synthesize RNA use unrelated mechanics.
Many viruses use a single-subunit DNA-dependent RNAP (ssRNAP) that is structurally and mechanistically related to the single-subunit RNAP of eukaryotic chloroplasts (RpoT) and mitochondria (POLRMT) and, more distantly, to DNA polymerases and reverse transcriptases. Perhaps the most widely studied such single-subunit RNAP is bacteriophage T7 RNA polymerase. ssRNAPs cannot proofread.
B. subtilis prophage SPβ uses YonO, a homolog of the β+β′ subunits of msRNAPs to form a monomeric (both barrels on the same chain) RNAP distinct from the usual "right hand" ssRNAP. It probably diverged very long ago from the canonical five-unit msRNAP, before the time of the last universal common ancestor.
Other viruses use an RNA-dependent RNAP (an RNAP that employs RNA as a template instead of DNA). This occurs in negative strand RNA viruses and dsRNA viruses, both of which exist for a portion of their life cycle as double-stranded RNA. However, some positive strand RNA viruses, such as poliovirus, also contain RNA-dependent RNAP.
History
RNAP was discovered independently by Charles Loe, Audrey Stevens, and Jerard Hurwitz in 1960. By this time, one half of the 1959 Nobel Prize in Medicine had been awarded to Severo Ochoa for the discovery of what was believed to be RNAP, but instead turned out to be polynucleotide phosphorylase.
Purification
RNA polymerase can be isolated in the following ways:
By a phosphocellulose column.
By glycerol gradient centrifugation.
By a DNA column.
By an ion chromatography column.
And also combinations of the above techniques.
See also
Alpha-amanitin
Primase
References
External links
DNAi – DNA Interactive, including information and Flash clips on RNA Polymerase.
RNA Polymerase – Synthesis RNA from DNA Template
(Wayback Machine copy)
3D macromolecular structures of RNA Polymerase from the EM Data Bank(EMDB)
Gene expression
RNA
Enzymes
EC 2.7.7 | RNA polymerase | [
"Chemistry",
"Biology"
] | 4,369 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
201,479 | https://en.wikipedia.org/wiki/Syngas | Syngas, or synthesis gas, is a mixture of hydrogen and carbon monoxide, in various ratios. The gas often contains some carbon dioxide and methane. It is principally used for producing ammonia or methanol. Syngas is combustible and can be used as a fuel. Historically, it has been used as a replacement for gasoline, when gasoline supply has been limited; for example, wood gas was used to power cars in Europe during WWII (in Germany alone, half a million cars were built or rebuilt to run on wood gas).
Production
Syngas is produced by steam reforming or partial oxidation of natural gas or liquid hydrocarbons, or coal gasification.
Steam reforming of methane is an endothermic reaction requiring 206 kJ/mol of methane:

CH4 + H2O → CO + 3 H2
In principle, but rarely in practice, biomass and related hydrocarbon feedstocks could be used to generate biogas and biochar in waste-to-energy gasification facilities. The gas generated (mostly methane and carbon dioxide) is sometimes described as syngas but its composition differs from syngas. Generation of conventional syngas (mostly H2 and CO) from waste biomass has been explored.
Composition, pathway for formation, and thermochemistry
The chemical composition of syngas varies based on the raw materials and the processes. Syngas produced by coal gasification generally is a mixture of 30 to 60% carbon monoxide, 25 to 30% hydrogen, 5 to 15% carbon dioxide, and 0 to 5% methane. It also contains lesser amount of other gases. Syngas has less than half the energy density of natural gas.
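As a rough sanity check on the energy-density claim above, the sketch below estimates the volumetric lower heating value of a syngas blend from approximate per-gas heating values. Both the heating values and the blend composition are illustrative assumptions chosen within the ranges quoted in this article, not measured data.

```python
# Rough volumetric lower heating value of a syngas blend vs methane.
# Per-gas values are approximate textbook numbers in MJ per normal cubic metre.

LHV = {"CO": 12.6, "H2": 10.8, "CH4": 35.8, "CO2": 0.0}  # MJ/Nm^3, approximate

blend = {"CO": 0.45, "H2": 0.28, "CO2": 0.10, "CH4": 0.02}  # mole fractions
# The remaining 15% is assumed inert (e.g. nitrogen) and contributes nothing.

lhv_syngas = sum(LHV.get(gas, 0.0) * x for gas, x in blend.items())
print(f"Syngas LHV ~ {lhv_syngas:.1f} MJ/Nm^3 vs ~35.8 MJ/Nm^3 for methane")
```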
The first reaction, between incandescent coke and steam (C + H2O → CO + H2), is strongly endothermic, producing carbon monoxide (CO) and hydrogen (H2) (water gas in older terminology). When the coke bed has cooled to a temperature at which the endothermic reaction can no longer proceed, the steam is then replaced by a blast of air.
The second and third reactions then take place: the exothermic combustion of coke (C + O2 → CO2) initially forms carbon dioxide and raises the temperature of the coke bed, and is followed by the second, endothermic reaction (CO2 + C → 2 CO), in which that carbon dioxide is converted to carbon monoxide. The overall reaction is exothermic, forming "producer gas" (older terminology). Steam can then be re-injected, then air, etc., to give an endless series of cycles until the coke is finally consumed. Producer gas has a much lower energy value, relative to water gas, due primarily to dilution with atmospheric nitrogen. Pure oxygen can be substituted for air to avoid the dilution effect, producing gas of much higher calorific value.
In order to produce more hydrogen from this mixture, more steam is added and the water gas shift reaction is carried out:

CO + H2O → CO2 + H2

The hydrogen can be separated from the CO2 by pressure swing adsorption (PSA), amine scrubbing, and membrane reactors. A variety of alternative technologies have been investigated, but none are of commercial value. Some variations focus on new stoichiometries such as carbon dioxide plus methane or partial hydrogenation of carbon dioxide. Other research focuses on novel energy sources to drive the processes, including electrolysis, solar energy, microwaves, and electric arcs.
Electricity generated from renewable sources is also used to process carbon dioxide and water into syngas through high-temperature electrolysis. This is an attempt to maintain carbon neutrality in the generation process. Audi, in partnership with company named Sunfire, opened a pilot plant in November 2014 to generate e-diesel using this process.
Syngas that is not methanized typically has a lower heating value of 120 BTU/scf. Untreated syngas can be run in hybrid turbines that allow for greater efficiency because of their lower operating temperatures, and extended part lifetime.
Uses
Syngas is used as a source of hydrogen as well as a fuel. It is also used to directly reduce iron ore to sponge iron. Chemical uses include the production of methanol which is a precursor to acetic acid and many acetates; liquid fuels and lubricants via the Fischer–Tropsch process and previously the Mobil methanol to gasoline process; ammonia via the Haber process, which converts atmospheric nitrogen (N2) into ammonia which is used as a fertilizer; and oxo alcohols via an intermediate aldehyde.
See also
Boudouard reaction
Claus process
Coal gas
Industrial gas
Integrated gasification combined cycle
Partial oxidation
Reformer sponge iron cycle
Syngas fermentation
Underground coal gasification
References
External links
"Sewage treatment plant smells success in synthetic gas trial" ARENA, accessed December 6 2020
Fischer Tropsch archive
https://www.technologyreview.com/s/508051/a-cheap-trick-enables-energy-efficient-carbon-capture/
Coal technology
Fuel gas
Waste treatment technology
Synthetic fuels
Industrial gases
Synthetic fuel technologies
Petrochemicals | Syngas | [
"Chemistry",
"Engineering"
] | 998 | [
"Products of chemical industry",
"Water treatment",
"Petroleum technology",
"Industrial gases",
"Synthetic fuel technologies",
"Environmental engineering",
"Chemical process engineering",
"Waste treatment technology",
"Petrochemicals"
] |
18,961,138 | https://en.wikipedia.org/wiki/Tensor%20rank%20decomposition | In multilinear algebra, the tensor rank decomposition or rank-R decomposition is the decomposition of a tensor as a sum of R rank-1 tensors, where R is minimal. Computing this decomposition is an open problem.
Canonical polyadic decomposition (CPD) is a variant of the tensor rank decomposition, in which the tensor is approximated as a sum of K rank-1 tensors for a user-specified K. The CP decomposition has found some applications in linguistics and chemometrics. It was introduced by Frank Lauren Hitchcock in 1927 and later rediscovered several times, notably in psychometrics.
The CP decomposition is referred to as CANDECOMP, PARAFAC, or CANDECOMP/PARAFAC (CP). Note that the PARAFAC2 rank decomposition is a variation of the CP decomposition.
Another popular generalization of the matrix SVD known as the higher-order singular value decomposition computes orthonormal mode matrices and has found applications in econometrics, signal processing, computer vision, computer graphics, and psychometrics.
Notation
A scalar variable is denoted by a lower case italic letter, and an upper bound scalar is denoted by an upper case italic letter.
Indices are denoted by a combination of lower case and upper case italic letters, e.g. 1 ≤ i ≤ I. Multiple indices that one might encounter when referring to the multiple modes of a tensor are conveniently denoted by i_1, i_2, …, i_M, where 1 ≤ i_m ≤ I_m.
A vector is denoted by a lower case bold Times Roman letter, and a matrix is denoted by a bold upper case letter.
A higher order tensor is denoted by a calligraphic letter. An element of an M-order tensor is denoted by a_{i_1 i_2 … i_M}.
Definition
A data tensor is a collection of multivariate observations organized into an M-way array. Every tensor may be represented, for a suitably large R, as a linear combination of R rank-1 tensors:

𝒜 = Σ_{r=1}^{R} a_r^(1) ⊗ a_r^(2) ⊗ ⋯ ⊗ a_r^(M),

where a_r^(m) is a vector in F^{I_m} and ⊗ denotes the outer (tensor) product. When the number of terms R is minimal in the above expression, then R is called the rank of the tensor, and the decomposition is often referred to as a (tensor) rank decomposition, minimal CP decomposition, or Canonical Polyadic Decomposition (CPD). If the number of terms is not minimal, then the above decomposition is often referred to as CANDECOMP/PARAFAC or Polyadic decomposition.
Tensor rank
Contrary to the case of matrices, computing the rank of a tensor is NP-hard. The only notable well-understood case consists of tensors in F² ⊗ F^m ⊗ F^n, whose rank can be obtained from the Kronecker–Weierstrass normal form of the linear matrix pencil that the tensor represents. A simple polynomial-time algorithm exists for certifying that a tensor is of rank 1, namely the higher-order singular value decomposition.
The rank of the tensor of zeros is zero by convention. The rank of a tensor 𝒜 is one, provided that 𝒜 = a^(1) ⊗ a^(2) ⊗ ⋯ ⊗ a^(M) with nonzero vectors a^(m).
Field dependence
The rank of a tensor depends on the field over which the tensor is decomposed. It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of a real decomposition of the same tensor. As an example, consider the following real tensor

𝒜 = x ⊗ x ⊗ x − x ⊗ y ⊗ y − y ⊗ x ⊗ y − y ⊗ y ⊗ x,

where x and y are linearly independent real vectors. The rank of this tensor over the reals is known to be 3, while its complex rank is only 2, because it is the sum of a complex rank-1 tensor with its complex conjugate, namely

𝒜 = ½ (z ⊗ z ⊗ z + z̄ ⊗ z̄ ⊗ z̄),

where z = x + iy.
In contrast, the rank of real matrices will never decrease under a field extension to : real matrix rank and complex matrix rank coincide for real matrices.
Generic rank
The generic rank is defined as the least rank r such that the closure in the Zariski topology of the set of tensors of rank at most r is the entire space. In the case of complex tensors, tensors of rank at most the generic rank form a dense set S: every tensor in the space is either of rank less than the generic rank, or it is the limit in the Euclidean topology of a sequence of tensors from S. In the case of real tensors, the set of tensors of rank at most the generic rank only forms an open set of positive measure in the Euclidean topology. There may exist Euclidean-open sets of tensors of rank strictly higher than the generic rank. All ranks appearing on open sets in the Euclidean topology are called typical ranks. The smallest typical rank is called the generic rank; this definition applies to both complex and real tensors. The generic rank of tensor spaces was initially studied in 1983 by Volker Strassen.
As an illustration of the above concepts, it is known that both 2 and 3 are typical ranks of ℝ² ⊗ ℝ² ⊗ ℝ², while the generic rank of ℂ² ⊗ ℂ² ⊗ ℂ² is 2. Practically, this means that a randomly sampled real tensor (from a continuous probability measure on the space of tensors) of size 2 × 2 × 2 will be a rank-1 tensor with probability zero, a rank-2 tensor with positive probability, and rank-3 with positive probability. On the other hand, a randomly sampled complex tensor of the same size will be a rank-1 tensor with probability zero, a rank-2 tensor with probability one, and a rank-3 tensor with probability zero. It is even known that the generic rank-3 real tensor in ℝ² ⊗ ℝ² ⊗ ℝ² will be of complex rank equal to 2.
The generic rank of tensor spaces depends on the distinction between balanced and unbalanced tensor spaces. A tensor space F^{I_1} ⊗ F^{I_2} ⊗ ⋯ ⊗ F^{I_M}, where I_1 ≥ I_2 ≥ ⋯ ≥ I_M,
is called unbalanced whenever

I_1 > 1 + ∏_{m=2}^{M} I_m − Σ_{m=2}^{M} (I_m − 1),

and it is called balanced otherwise.
Unbalanced tensor spaces
When the first factor is very large with respect to the other factors in the tensor product, then the tensor space essentially behaves as a matrix space. The generic rank of tensors living in an unbalanced tensor space is known to equal

min( I_1 , ∏_{m=2}^{M} I_m )

almost everywhere. More precisely, the rank of every tensor in an unbalanced tensor space, outside of some closed set Z in the Zariski topology, equals the above value.
Balanced tensor spaces
The expected generic rank of tensors living in a balanced tensor space is equal to

R_E = ⌈ (∏_{m=1}^{M} I_m) / (1 + Σ_{m=1}^{M} (I_m − 1)) ⌉

almost everywhere for complex tensors and on a Euclidean-open set for real tensors.
More precisely, the rank of every tensor in the space, outside of some closed set Z in the Zariski topology, is expected to equal the above value. For real tensors, R_E is the least rank that is expected to occur on a set of positive Euclidean measure. The value R_E is often referred to as the expected generic rank of the tensor space because it is only conjecturally correct. It is known that the true generic rank always satisfies

R ≥ R_E.
The Abo–Ottaviani–Peterson (AOP) conjecture states that equality is expected, i.e., R = R_E, with a short list of exceptional cases.
In each of these exceptional cases, the generic rank is known to be R_E + 1. Note that while the set of tensors of rank 3 in ℂ² ⊗ ℂ² ⊗ ℂ² ⊗ ℂ² is defective (of dimension 13 and not the expected 14), the generic rank in that space is still the expected one, 4. Similarly, the set of tensors of rank 5 in ℂ⁴ ⊗ ℂ⁴ ⊗ ℂ³ is defective (of dimension 44 and not the expected 45), but the generic rank in that space is still the expected 6.
The AOP conjecture has been proved completely in a number of special cases. Lickteig showed already in 1985 that R = R_E for tensors in ℂⁿ ⊗ ℂⁿ ⊗ ℂⁿ, provided that n ≠ 3. In 2011, a major breakthrough was established by Catalisano, Geramita, and Gimigliano, who proved that the expected dimension of the set of rank-R tensors of binary format ℂ² ⊗ ⋯ ⊗ ℂ² is the expected one except for rank-3 tensors in the 4-factor case, yet the expected rank in that case is still 4. As a consequence, R = R_E for all binary tensors.
Maximum rank
The maximum rank that can be admitted by any of the tensors in a tensor space is unknown in general; even a conjecture about this maximum rank is missing. Presently, the best general upper bound states that the maximum rank R_max of F^{I_1} ⊗ ⋯ ⊗ F^{I_M}, where I_1 ≥ I_2 ≥ ⋯ ≥ I_M, satisfies

R_max ≤ 2R,

where R is the (least) generic rank of F^{I_1} ⊗ ⋯ ⊗ F^{I_M}.
It is well known that the foregoing inequality may be strict. For instance, the generic rank of tensors in ℂ² ⊗ ℂ² ⊗ ℂ² is two, so that the above bound yields R_max ≤ 4, while it is known that the maximum rank equals 3.
Border rank
A rank-R tensor 𝒜 is called a border tensor if there exists a sequence of tensors of rank at most R − 1 whose limit is 𝒜. If r is the least value for which such a convergent sequence exists, then r is called the border rank of 𝒜. For order-2 tensors, i.e., matrices, rank and border rank always coincide; however, for tensors of order 3 or higher they may differ. Border tensors were first studied in the context of fast approximate matrix multiplication algorithms by Bini, Lotti, and Romani in 1980.
A classic example of a border tensor is the rank-3 tensor

T = u ⊗ u ⊗ v + u ⊗ v ⊗ u + v ⊗ u ⊗ u, with u and v linearly independent.

It can be approximated arbitrarily well by the following sequence of rank-2 tensors

T_n = n (u + (1/n) v) ⊗ (u + (1/n) v) ⊗ (u + (1/n) v) − n u ⊗ u ⊗ u

as n → ∞. Therefore, its border rank is 2, which is strictly less than its rank. When the two vectors are orthogonal, this example is also known as a W state.
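The numerical sketch below (assuming numpy is available, with u and v taken as standard basis vectors) checks this convergence directly: the approximation error shrinks like 1/n while the norm of the individual rank-1 terms grows without bound, which foreshadows the ill-posedness discussion later in this article.

```python
# Numerical check that the rank-3 tensor above is a limit of rank-2 tensors.

import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

def outer3(a, b, c):
    return np.einsum("i,j,k->ijk", a, b, c)

target = outer3(u, u, v) + outer3(u, v, u) + outer3(v, u, u)

for n in (1, 10, 100, 1000):
    w = u + v / n
    approx = n * outer3(w, w, w) - n * outer3(u, u, u)  # a rank-2 tensor
    err = np.linalg.norm(approx - target)
    term_norm = n * np.linalg.norm(w) ** 3  # norm of the first rank-1 term
    print(f"n={n:5d}  error={err:.2e}  first-term norm={term_norm:.1e}")
```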
Properties
Identifiability
It follows from the definition of a pure tensor that a^(1) ⊗ a^(2) ⊗ ⋯ ⊗ a^(M) = b^(1) ⊗ b^(2) ⊗ ⋯ ⊗ b^(M) if and only if there exist scalars λ_m satisfying ∏_m λ_m = 1 such that b^(m) = λ_m a^(m) for all m. For this reason, the parameters of a rank-1 tensor are called identifiable or essentially unique. A rank-R tensor is called identifiable if every one of its tensor rank decompositions is the sum of the same set of R distinct rank-1 tensors. An identifiable rank-R tensor thus has only one essentially unique decomposition, and all tensor rank decompositions of it can be obtained by permuting the order of the summands. Observe that in a tensor rank decomposition all the rank-1 terms are distinct, for otherwise the rank of the tensor would be at most R − 1.
Generic identifiability
Order-2 tensors in F^{I_1} ⊗ F^{I_2}, i.e., matrices, are not identifiable for rank R ≥ 2. This follows essentially from the observation

Σ_{r=1}^{R} a_r ⊗ b_r = A Bᵀ = (A X)(B X⁻ᵀ)ᵀ,

where X is an invertible R × R matrix and A, B are the matrices whose columns are the a_r and b_r. It can be shown that for every such matrix outside of some closed set in the Zariski topology, the decomposition on the right-hand side is a sum of a different set of rank-1 tensors than the decomposition on the left-hand side, entailing that order-2 tensors of rank R ≥ 2 are generically not identifiable.
The situation changes completely for higher-order tensors in F^{I_1} ⊗ F^{I_2} ⊗ ⋯ ⊗ F^{I_M} with M ≥ 3 and all I_m ≥ 2. For simplicity in notation, assume without loss of generality that the factors are ordered such that I_1 ≥ I_2 ≥ ⋯ ≥ I_M. Let σ_R denote the set of tensors of rank bounded by R. Then, the following statement was proved to be correct using a computer-assisted proof for all spaces of small dimension, and it is conjectured to be valid in general:
There exists a closed set Z_R in the Zariski topology such that every tensor in σ_R ∖ Z_R is identifiable (σ_R is called generically identifiable in this case), unless either one of the following exceptional cases holds:
The rank is too large: R (1 + Σ_{m=1}^{M} (I_m − 1)) > ∏_{m=1}^{M} I_m;
The space is identifiability-unbalanced, i.e., I_1 > ∏_{m=2}^{M} I_m − Σ_{m=2}^{M} (I_m − 1), and the rank is too large: R ≥ ∏_{m=2}^{M} I_m − Σ_{m=2}^{M} (I_m − 1);
The space is the defective case F⁴ ⊗ F⁴ ⊗ F³ and the rank is R = 5;
The space is the defective case Fⁿ ⊗ Fⁿ ⊗ F² ⊗ F², where n ≥ 2, and the rank is R = 2n − 1;
The space is F⁴ ⊗ F⁴ ⊗ F⁴ and the rank is R = 6;
The space is F⁶ ⊗ F⁶ ⊗ F³ and the rank is R = 8; or
The space is F² ⊗ F² ⊗ F² ⊗ F² ⊗ F² and the rank is R = 5.
The space is perfect, i.e., R_E = ∏_{m=1}^{M} I_m / (1 + Σ_{m=1}^{M} (I_m − 1)) is an integer, and the rank is R = R_E.
In these exceptional cases, the generic (and also minimum) number of complex decompositions is
proved to be infinite in the first 4 cases;
proved to be two in case 5;
expected to be six in case 6;
proved to be two in case 7; and
expected to be at least two in case 8, with the exception of two identifiable cases.
In summary, the generic tensor of order M ≥ 3 and rank R that lives in a space that is not identifiability-unbalanced is expected to be identifiable (modulo the exceptional cases in small spaces).
Ill-posedness of the standard approximation problem
The rank approximation problem asks for the best rank-R approximation, in the usual Euclidean topology, of some tensor 𝒜 whose rank is higher than R. That is, one seeks to solve

min ‖𝒜 − ℬ‖_F over all tensors ℬ of rank at most R,

where ‖·‖_F is the Frobenius norm.
It was shown in a 2008 paper by de Silva and Lim that the above standard approximation problem may be ill-posed. A solution to the aforementioned problem may sometimes not exist because the set over which one optimizes is not closed. As such, a minimizer may not exist, even though an infimum would exist. In particular, it is known that certain so-called border tensors may be approximated arbitrarily well by a sequence of tensors of rank at most R, even though the sequence converges to a tensor of rank strictly higher than R. The rank-3 tensor

T = u ⊗ u ⊗ v + u ⊗ v ⊗ u + v ⊗ u ⊗ u

can be approximated arbitrarily well by the following sequence of rank-2 tensors

T_n = n (u + (1/n) v) ⊗ (u + (1/n) v) ⊗ (u + (1/n) v) − n u ⊗ u ⊗ u

as n → ∞. This example neatly illustrates the general principle that a sequence of rank-R tensors that converges to a tensor of strictly higher rank needs to admit at least two individual rank-1 terms whose norms become unbounded. Stated formally, whenever a sequence of rank-R tensors 𝒜_n converges (in the Euclidean topology) to a tensor 𝒜 of rank strictly higher than R as n → ∞, then there should exist at least two rank-1 terms in the decompositions of the 𝒜_n whose norms tend to infinity as n → ∞. This phenomenon is often encountered when attempting to approximate a tensor using numerical optimization algorithms. It is sometimes called the problem of diverging components. It was, in addition, shown that a random low-rank tensor over the reals may not admit a rank-2 approximation with positive probability, leading to the understanding that the ill-posedness problem is an important consideration when employing the tensor rank decomposition.
A common partial solution to the ill-posedness problem consists of imposing an additional inequality constraint that bounds the norm of the individual rank-1 terms by some constant. Other constraints that result in a closed set, and, thus, well-posed optimization problem, include imposing positivity or a bounded inner product strictly less than unity between the rank-1 terms appearing in the sought decomposition.
Calculating the CPD
Alternating algorithms:
alternating least squares (ALS; a minimal sketch of this method follows these lists)
alternating slice-wise diagonalisation (ASD)
Direct algorithms:
pencil-based algorithms
moment-based algorithms
General optimization algorithms:
simultaneous diagonalization (SD)
simultaneous generalized Schur decomposition (SGSD)
Levenberg–Marquardt (LM)
nonlinear conjugate gradient (NCG)
limited memory BFGS (L-BFGS)
General polynomial system solving algorithms:
homotopy continuation
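As referenced in the ALS entry above, here is a bare-bones sketch of the alternating least squares iteration for a third-order tensor, assuming numpy is available. It omits the factor normalization, convergence checks, and regularization a practical implementation would need; the function names are invented for illustration.

```python
# Bare-bones alternating least squares (ALS) for a rank-K CP approximation
# of a third-order numpy array.

import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (m x K) and Y (n x K)."""
    return np.einsum("ik,jk->ijk", X, Y).reshape(-1, X.shape[1])

def cp_als(T, K, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    I, J, L = T.shape
    A = rng.standard_normal((I, K))
    B = rng.standard_normal((J, K))
    C = rng.standard_normal((L, K))
    # Mode-m unfoldings of T, matched to the Khatri-Rao column ordering above.
    T1 = T.reshape(I, J * L)
    T2 = T.transpose(1, 0, 2).reshape(J, I * L)
    T3 = T.transpose(2, 0, 1).reshape(L, I * J)
    for _ in range(iters):
        # Each update is a linear least-squares solve with the others fixed.
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Example: fit a random rank-2 tensor; the residual is typically near zero.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
T = np.einsum("ik,jk,lk->ijl", A0, B0, C0)
A, B, C = cp_als(T, K=2)
print(np.linalg.norm(np.einsum("ik,jk,lk->ijl", A, B, C) - T))
```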
Applications
In machine learning, the CP-decomposition is the central ingredient in learning probabilistic latent variable models via the technique of moment-matching. For example, consider the multi-view model, which is a probabilistic latent variable model. In this model, the generation of samples is posited as follows: there exists a hidden random variable that is not observed directly, given which there are several conditionally independent random variables known as the different "views" of the hidden variable. For example, assume there are three views x_1, x_2, x_3 of a k-state categorical hidden variable h. Then the empirical third moment of this latent variable model is a third-order tensor of rank at most k and can be decomposed as:

T = Σ_{j=1}^{k} P(h = j) · E[x_1 | h = j] ⊗ E[x_2 | h = j] ⊗ E[x_3 | h = j].
In applications such as topic modeling, this can be interpreted as the co-occurrence of words in a document. Then the coefficients in the decomposition of this empirical moment tensor can be interpreted as the probability of choosing a specific topic and each column of the factor matrix corresponds to probabilities of words in the vocabulary in the corresponding topic.
See also
Latent class analysis
Multilinear subspace learning
Singular value decomposition
Tucker decomposition
Higher-order singular value decomposition
Tensor decomposition
References
Further reading
External links
PARAFAC Tutorial
Parallel Factor Analysis (PARAFAC)
FactoMineR (free exploratory multivariate data analysis software linked to R)
Multilinear algebra | Tensor rank decomposition | [
"Engineering"
] | 3,188 | [
"Tensors"
] |
18,963,166 | https://en.wikipedia.org/wiki/Midbody%20%28cell%20biology%29 | The midbody is a transient structure found in mammalian cells and is present near the end of cytokinesis just prior to the complete separation of the dividing cells. The structure was first described by Walther Flemming in 1891.
Structure
The midbody structure contains bundles of microtubules derived from the mitotic spindle which compacts during the final stages of cell division. It has a typical diameter of 1 micrometre and a length of 3 to 5 micrometres. Aside from microtubules it also contains various proteins involved in cytokinesis, asymmetric cell division, and chromosome segregation.
The midbody is important for completing the final stages of cytokinesis, a process called abscission. During symmetric abscission, the midbody is severed at each end and released into the cellular environment.
Role in intercellular signalling
It was long assumed that the midbody was simply a structural part of cytokinesis, and was totally degraded with the completion of mitosis. However, it is now understood that post-abscission, the midbody is converted into an endosome-like signalling structure, and can be internalised by nearby cells.
This endosome is marked by MKLP1, and can persist for up to 48 hours once internalised into another cell. It is coated in actin, which is slowly degraded by the internalising cell.
Related proteins
MKLP1
TEX14
CEP55
Aurora Kinase B
References
Organelles
Molecular biology
Cell biology | Midbody (cell biology) | [
"Chemistry",
"Biology"
] | 305 | [
"Biochemistry",
"Cell biology",
"Molecular biology"
] |
18,963,754 | https://en.wikipedia.org/wiki/Viscosity | Viscosity is a measure of a fluid's rate-dependent resistance to a change in shape or to movement of its neighboring portions relative to one another. For liquids, it corresponds to the informal concept of thickness; for example, syrup has a higher viscosity than water. Viscosity is defined scientifically as a force multiplied by a time divided by an area. Thus its SI units are newton-seconds per square meter, or pascal-seconds.
Viscosity quantifies the internal frictional force between adjacent layers of fluid that are in relative motion. For instance, when a viscous fluid is forced through a tube, it flows more quickly near the tube's center line than near its walls. Experiments show that some stress (such as a pressure difference between the two ends of the tube) is needed to sustain the flow. This is because a force is required to overcome the friction between the layers of the fluid which are in relative motion. For a tube with a constant rate of flow, the strength of the compensating force is proportional to the fluid's viscosity.
In general, viscosity depends on a fluid's state, such as its temperature, pressure, and rate of deformation. However, the dependence on some of these properties is negligible in certain cases. For example, the viscosity of a Newtonian fluid does not vary significantly with the rate of deformation.
Zero viscosity (no resistance to shear stress) is observed only at very low temperatures in superfluids; otherwise, the second law of thermodynamics requires all fluids to have positive viscosity. A fluid that has zero viscosity (non-viscous) is called ideal or inviscid.
For non-Newtonian fluids, the viscosity varies with flow conditions: pseudoplastic, plastic, and dilatant flows are time-independent, while thixotropic and rheopectic flows are time-dependent.
Etymology
The word "viscosity" is derived from the Latin ("mistletoe"). also referred to a viscous glue derived from mistletoe berries.
Definitions
Dynamic viscosity
In materials science and engineering, there is often interest in understanding the forces or stresses involved in the deformation of a material. For instance, if the material were a simple spring, the answer would be given by Hooke's law, which says that the force experienced by a spring is proportional to the distance displaced from equilibrium. Stresses which can be attributed to the deformation of a material from some rest state are called elastic stresses. In other materials, stresses are present which can be attributed to the deformation rate over time. These are called viscous stresses. For instance, in a fluid such as water the stresses which arise from shearing the fluid do not depend on the distance the fluid has been sheared; rather, they depend on how quickly the shearing occurs.
Viscosity is the material property which relates the viscous stresses in a material to the rate of change of a deformation (the strain rate). Although it applies to general flows, it is easy to visualize and define in a simple shearing flow, such as a planar Couette flow.
In the Couette flow, a fluid is trapped between two infinitely large plates, one fixed and one in parallel motion at constant speed $u$. If the speed of the top plate is low enough (to avoid turbulence), then in steady state the fluid particles move parallel to it, and their speed varies from $0$ at the bottom to $u$ at the top. Each layer of fluid moves faster than the one just below it, and friction between them gives rise to a force resisting their relative motion. In particular, the fluid applies on the top plate a force in the direction opposite to its motion, and an equal but opposite force on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed.
In many fluids, the flow velocity is observed to vary linearly from zero at the bottom to $u$ at the top. Moreover, the magnitude of the force, $F$, acting on the top plate is found to be proportional to the speed $u$ and the area $A$ of each plate, and inversely proportional to their separation $y$:

$F = \mu A \frac{u}{y}.$

The proportionality factor $\mu$ is the dynamic viscosity of the fluid, often simply referred to as the viscosity. It is denoted by the Greek letter mu ($\mu$). The dynamic viscosity has the dimensions $\mathrm{mass/(length \times time)}$, therefore resulting in the SI units and the derived units:

$[\mu] = \frac{\mathrm{kg}}{\mathrm{m{\cdot}s}} = \frac{\mathrm{N}}{\mathrm{m}^2}{\cdot}\mathrm{s} = \mathrm{Pa{\cdot}s}$ = pressure multiplied by time = energy per unit volume multiplied by time.
The aforementioned ratio $u/y$ is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction parallel to the normal vector of the plates. If the velocity does not vary linearly with $y$, then the appropriate generalization is:

$\tau = \mu \frac{\partial u}{\partial y},$

where $\tau = F/A$ and $\partial u/\partial y$ is the local shear velocity. This expression is referred to as Newton's law of viscosity. In shearing flows with planar symmetry, it is what defines $\mu$. It is a special case of the general definition of viscosity (see below), which can be expressed in coordinate-free form.
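A short numerical sketch of Newton's law of viscosity in the planar Couette geometry above; all values are illustrative assumptions.

```python
# Shear stress and plate force for planar Couette flow: tau = mu * u / y, F = tau * A.
mu = 1.0e-3    # dynamic viscosity, Pa*s (roughly water at 20 C)
u = 0.5        # speed of the moving plate, m/s
y = 2.0e-3     # plate separation, m
A = 0.1        # plate area, m^2

shear_rate = u / y       # 1/s
tau = mu * shear_rate    # shear stress, Pa
F = tau * A              # force needed to keep the top plate moving, N
print(f"shear rate = {shear_rate:.0f} 1/s, tau = {tau:.3f} Pa, F = {F:.4f} N")
```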
Use of the Greek letter mu ($\mu$) for the dynamic viscosity (sometimes also called the absolute viscosity) is common among mechanical and chemical engineers, as well as mathematicians and physicists. However, the Greek letter eta ($\eta$) is also used by chemists, physicists, and the IUPAC. The viscosity is sometimes also called the shear viscosity. However, at least one author discourages the use of this terminology, noting that $\mu$ can appear in non-shearing flows in addition to shearing flows.
Kinematic viscosity
In fluid dynamics, it is sometimes more appropriate to work in terms of kinematic viscosity (sometimes also called the momentum diffusivity), defined as the ratio of the dynamic viscosity ($\mu$) over the density of the fluid ($\rho$). It is usually denoted by the Greek letter nu ($\nu$):

$\nu = \frac{\mu}{\rho},$

and has the dimensions $\mathrm{length^2/time}$, therefore resulting in the SI units and the derived units:

$[\nu] = \frac{\mathrm{m}^2}{\mathrm{s}} = \frac{\mathrm{J}}{\mathrm{kg}}{\cdot}\mathrm{s}$ = specific energy multiplied by time = energy per unit mass multiplied by time.
General definition
In very general terms, the viscous stresses in a fluid are defined as those resulting from the relative velocity of different fluid particles. As such, the viscous stresses must depend on spatial gradients of the flow velocity. If the velocity gradients are small, then to a first approximation the viscous stresses depend only on the first derivatives of the velocity. (For Newtonian fluids, this is also a linear dependence.) In Cartesian coordinates, the general relationship can then be written as

$\tau_{ij} = \sum_{k}\sum_{l} \mu_{ijkl} \frac{\partial v_k}{\partial x_l},$

where $\mu_{ijkl}$ is a viscosity tensor that maps the velocity gradient tensor $\partial v_k / \partial x_l$ onto the viscous stress tensor $\tau_{ij}$. Since the indices in this expression can vary from 1 to 3, there are 81 "viscosity coefficients" in total. However, assuming that the rank-4 viscosity tensor is isotropic reduces these 81 coefficients to three independent parameters $\alpha$, $\beta$, $\gamma$:

$\mu_{ijkl} = \alpha \delta_{ij}\delta_{kl} + \beta \delta_{ik}\delta_{jl} + \gamma \delta_{il}\delta_{jk},$

and furthermore, it is assumed that no viscous forces may arise when the fluid is undergoing simple rigid-body rotation, thus $\beta = \gamma$, leaving only two independent parameters. The most usual decomposition is in terms of the standard (scalar) viscosity $\mu$ and the bulk viscosity $\kappa$ such that $\alpha = \kappa - \tfrac{2}{3}\mu$ and $\beta = \gamma = \mu$. In vector notation this appears as:

$\boldsymbol{\tau} = \mu \left[ \nabla \mathbf{v} + (\nabla \mathbf{v})^{\mathrm{T}} - \tfrac{2}{3} (\nabla \cdot \mathbf{v}) \boldsymbol{\delta} \right] + \kappa (\nabla \cdot \mathbf{v}) \boldsymbol{\delta},$

where $\boldsymbol{\delta}$ is the unit tensor. This equation can be thought of as a generalized form of Newton's law of viscosity.
The bulk viscosity (also called volume viscosity) expresses a type of internal friction that resists the shearless compression or expansion of a fluid. Knowledge of $\kappa$ is frequently not necessary in fluid dynamics problems. For example, an incompressible fluid satisfies $\nabla \cdot \mathbf{v} = 0$ and so the term containing $\kappa$ drops out. Moreover, $\kappa$ is often assumed to be negligible for gases since it is $0$ in a monatomic ideal gas. One situation in which $\kappa$ can be important is the calculation of energy loss in sound and shock waves, described by Stokes' law of sound attenuation, since these phenomena involve rapid expansions and compressions.
The defining equations for viscosity are not fundamental laws of nature, so their usefulness, as well as methods for measuring or calculating the viscosity, must be established using separate means. A potential issue is that viscosity depends, in principle, on the full microscopic state of the fluid, which encompasses the positions and momenta of every particle in the system. Such highly detailed information is typically not available in realistic systems. However, under certain conditions most of this information can be shown to be negligible. In particular, for Newtonian fluids near equilibrium and far from boundaries (bulk state), the viscosity depends only on space- and time-dependent macroscopic fields (such as temperature and density) defining local equilibrium.
Nevertheless, viscosity may still carry a non-negligible dependence on several system properties, such as temperature, pressure, and the amplitude and frequency of any external forcing. Therefore, precision measurements of viscosity are only defined
with respect to a specific fluid state. To standardize comparisons among experiments and theoretical models, viscosity data is sometimes extrapolated to ideal limiting cases, such as the zero shear limit, or (for gases) the zero density limit.
Momentum transport
Transport theory provides an alternative interpretation of viscosity in terms of momentum transport: viscosity is the material property which characterizes momentum transport within a fluid, just as thermal conductivity characterizes heat transport, and (mass) diffusivity characterizes mass transport. This perspective is implicit in Newton's law of viscosity, $\tau = \mu (\partial u / \partial y)$, because the shear stress $\tau$ has units equivalent to a momentum flux, i.e., momentum per unit time per unit area. Thus, $\tau$ can be interpreted as specifying the flow of momentum in the $y$ direction from one fluid layer to the next. Per Newton's law of viscosity, this momentum flow occurs across a velocity gradient, and the magnitude of the corresponding momentum flux is determined by the viscosity.
The analogy with heat and mass transfer can be made explicit. Just as heat flows from high temperature to low temperature and mass flows from high density to low density, momentum flows from high velocity to low velocity. These behaviors are all described by compact expressions, called constitutive relations, whose one-dimensional forms are given here:

$\mathbf{J} = -D \frac{\partial \rho}{\partial x}, \qquad \mathbf{q} = -k_t \frac{\partial T}{\partial x}, \qquad \tau = \mu \frac{\partial u}{\partial y},$

where $\rho$ is the density, $\mathbf{J}$ and $\mathbf{q}$ are the mass and heat fluxes, and $D$ and $k_t$ are the mass diffusivity and thermal conductivity. The fact that mass, momentum, and energy (heat) transport are among the most relevant processes in continuum mechanics is not a coincidence: these are among the few physical quantities that are conserved at the microscopic level in interparticle collisions. Thus, rather than being dictated by the fast and complex microscopic interaction timescale, their dynamics occurs on macroscopic timescales, as described by the various equations of transport theory and hydrodynamics.
Newtonian and non-Newtonian fluids
Newton's law of viscosity is not a fundamental law of nature, but rather a constitutive equation (like Hooke's law, Fick's law, and Ohm's law) which serves to define the viscosity $\mu$. Its form is motivated by experiments which show that for a wide range of fluids, $\mu$ is independent of strain rate. Such fluids are called Newtonian. Gases, water, and many common liquids can be considered Newtonian in ordinary conditions and contexts. However, there are many non-Newtonian fluids that significantly deviate from this behavior. For example:
Shear-thickening (dilatant) liquids, whose viscosity increases with the rate of shear strain.
Shear-thinning liquids, whose viscosity decreases with the rate of shear strain.
Thixotropic liquids, that become less viscous over time when shaken, agitated, or otherwise stressed.
Rheopectic liquids, that become more viscous over time when shaken, agitated, or otherwise stressed.
Bingham plastics that behave as a solid at low stresses but flow as a viscous fluid at high stresses.
Trouton's ratio is the ratio of extensional viscosity to shear viscosity. For a Newtonian fluid, the Trouton ratio is 3. Shear-thinning liquids are very commonly, but misleadingly, described as thixotropic.
Viscosity may also depend on the fluid's physical state (temperature and pressure) and other, external, factors. For gases and other compressible fluids, it depends on temperature and varies very slowly with pressure. The viscosity of some fluids may depend on other factors. A magnetorheological fluid, for example, becomes thicker when subjected to a magnetic field, possibly to the point of behaving like a solid.
In solids
The viscous forces that arise during fluid flow are distinct from the elastic forces that occur in a solid in response to shear, compression, or extension stresses. While in the latter the stress is proportional to the amount of shear deformation, in a fluid it is proportional to the rate of deformation over time. For this reason, James Clerk Maxwell used the term fugitive elasticity for fluid viscosity.
However, many liquids (including water) will briefly react like elastic solids when subjected to sudden stress. Conversely, many "solids" (even granite) will flow like liquids, albeit very slowly, even under arbitrarily small stress. Such materials are best described as viscoelastic—that is, possessing both elasticity (reaction to deformation) and viscosity (reaction to rate of deformation).
Viscoelastic solids may exhibit both shear viscosity and bulk viscosity. The extensional viscosity is a linear combination of the shear and bulk viscosities that describes the reaction of a solid elastic material to elongation. It is widely used for characterizing polymers.
In geology, earth materials that exhibit viscous deformation at least three orders of magnitude greater than their elastic deformation are sometimes called rheids.
Measurement
Viscosity is measured with various types of viscometers and rheometers. Close temperature control of the fluid is essential to obtain accurate measurements, particularly in materials like lubricants, whose viscosity can double with a change of only 5 °C. A rheometer is used for fluids that cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured than is the case for a viscometer.
For some fluids, the viscosity is constant over a wide range of shear rates (Newtonian fluids). The fluids without a constant viscosity (non-Newtonian fluids) cannot be described by a single number. Non-Newtonian fluids exhibit a variety of different correlations between shear stress and shear rate.
One of the most common instruments for measuring kinematic viscosity is the glass capillary viscometer.
In coating industries, viscosity may be measured with a cup in which the efflux time is measured. There are several sorts of cup—such as the Zahn cup and the Ford viscosity cup—with the usage of each type varying mainly according to the industry.
Also used in coatings, a Stormer viscometer employs load-based rotation to determine viscosity. The viscosity is reported in Krebs units (KU), which are unique to Stormer viscometers.
Vibrating viscometers can also be used to measure viscosity. Resonant, or vibrational viscometers work by creating shear waves within the liquid. In this method, the sensor is submerged in the fluid and is made to resonate at a specific frequency. As the surface of the sensor shears through the liquid, energy is lost due to its viscosity. This dissipated energy is then measured and converted into a viscosity reading. A higher viscosity causes a greater loss of energy.
Extensional viscosity can be measured with various rheometers that apply extensional stress.
Volume viscosity can be measured with an acoustic rheometer.
Apparent viscosity is a calculation derived from tests performed on drilling fluid used in oil or gas well development. These calculations and tests help engineers develop and maintain the properties of the drilling fluid to the specifications required.
Nanoviscosity (viscosity sensed by nanoprobes) can be measured by fluorescence correlation spectroscopy.
Units
The SI unit of dynamic viscosity is the newton-second per square meter (N·s/m2), also frequently expressed in the equivalent forms pascal-second (Pa·s), kilogram per meter per second (kg·m−1·s−1) and poiseuille (Pl). The CGS unit is the poise (P, or g·cm−1·s−1 = 0.1 Pa·s), named after Jean Léonard Marie Poiseuille. It is commonly expressed, particularly in ASTM standards, as centipoise (cP). The centipoise is convenient because the viscosity of water at 20 °C is about 1 cP, and one centipoise is equal to the SI millipascal second (mPa·s).
The SI unit of kinematic viscosity is square meter per second (m2/s), whereas the CGS unit for kinematic viscosity is the stokes (St, or cm2·s−1 = 0.0001 m2·s−1), named after Sir George Gabriel Stokes. In U.S. usage, stoke is sometimes used as the singular form. The submultiple centistokes (cSt) is often used instead: 1 cSt = 1 mm2·s−1 = 10−6 m2·s−1. One cSt equals 1 cP divided by 1000 kg/m3, which is close to the density of water. The kinematic viscosity of water at 20 °C is about 1 cSt.
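The definitional conversion factors above are easy to encode; this sketch uses water at 20 °C as a check value.

```python
def cP_to_Pa_s(mu_cP):
    """1 cP = 1 mPa*s, exactly."""
    return mu_cP * 1.0e-3

def cSt_to_m2_s(nu_cSt):
    """1 cSt = 1 mm^2/s, exactly."""
    return nu_cSt * 1.0e-6

def kinematic_from_dynamic(mu_Pa_s, rho_kg_m3):
    """nu = mu / rho."""
    return mu_Pa_s / rho_kg_m3

# Water at 20 C: ~1 cP and ~998 kg/m^3, hence ~1 cSt.
nu = kinematic_from_dynamic(cP_to_Pa_s(1.0), 998.0)
print(nu / 1.0e-6, "cSt")   # ~1.002
```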
The most frequently used systems of US customary, or Imperial, units are the British Gravitational (BG) and English Engineering (EE). In the BG system, dynamic viscosity has units of pound-seconds per square foot (lb·s/ft2), and in the EE system it has units of pound-force-seconds per square foot (lbf·s/ft2). The pound and pound-force are equivalent; the two systems differ only in how force and mass are defined. In the BG system the pound is a basic unit from which the unit of mass (the slug) is defined by Newton's Second Law, whereas in the EE system the units of force and mass (the pound-force and pound-mass respectively) are defined independently through the Second Law using the proportionality constant gc.
Kinematic viscosity has units of square feet per second (ft2/s) in both the BG and EE systems.
Nonstandard units include the reyn (lbf·s/in2), a British unit of dynamic viscosity. In the automotive industry the viscosity index is used to describe the change of viscosity with temperature.
The reciprocal of viscosity is fluidity, usually symbolized by $\phi$ ($= 1/\mu$) or $F$ ($= 1/\mu$), depending on the convention used, measured in reciprocal poise (P−1, or cm·s·g−1), sometimes called the rhe. Fluidity is seldom used in engineering practice.
At one time the petroleum industry relied on measuring kinematic viscosity by means of the Saybolt viscometer, and expressing kinematic viscosity in units of Saybolt universal seconds (SUS). Other abbreviations such as SSU (Saybolt seconds universal) or SUV (Saybolt universal viscosity) are sometimes used. Kinematic viscosity in centistokes can be converted from SUS according to the arithmetic and the reference table provided in ASTM D 2161.
Molecular origins
Momentum transport in gases is mediated by discrete molecular collisions, and in liquids by attractive forces that bind molecules close together. Because of this, the dynamic viscosities of liquids are typically much larger than those of gases. In addition, viscosity tends to increase with temperature in gases and decrease with temperature in liquids.
Above the liquid-gas critical point, the liquid and gas phases are replaced by a single supercritical phase. In this regime, the mechanisms of momentum transport interpolate between liquid-like and gas-like behavior.
For example, along a supercritical isobar (constant-pressure surface), the kinematic viscosity decreases at low temperature and increases at high temperature, with a minimum in between. A rough estimate for the value at the minimum is

$\nu_{\min} = \frac{1}{4\pi} \frac{\hbar}{\sqrt{m_e m}},$

where $\hbar$ is the reduced Planck constant, $m_e$ is the electron mass, and $m$ is the molecular mass.
In general, however, the viscosity of a system depends in detail on how the molecules constituting the system interact, and there are no simple but correct formulas for it. The simplest exact expressions are the Green–Kubo relations for the linear shear viscosity or the transient time correlation function expressions derived by Evans and Morriss in 1988. Although these expressions are each exact, calculating the viscosity of a dense fluid using these relations currently requires the use of molecular dynamics computer simulations. Somewhat more progress can be made for a dilute gas, as elementary assumptions about how gas molecules move and interact lead to a basic understanding of the molecular origins of viscosity. More sophisticated treatments can be constructed by systematically coarse-graining the equations of motion of the gas molecules. An example of such a treatment is Chapman–Enskog theory, which derives expressions for the viscosity of a dilute gas from the Boltzmann equation.
Pure gases
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
! Elementary calculation of viscosity for a dilute gas
|-
|
Consider a dilute gas moving parallel to the -axis with velocity that depends only on the coordinate. To simplify the discussion, the gas is assumed to have uniform temperature and density.
Under these assumptions, the velocity of a molecule passing through is equal to whatever velocity that molecule had when its mean free path began. Because is typically small compared with macroscopic scales, the average velocity of such a molecule has the form
where is a numerical constant on the order of . (Some authors estimate ; on the other hand, a more careful calculation for rigid elastic spheres gives .) Next, because half the molecules on either side are moving towards , and doing so on average with half the average molecular speed , the momentum flux from either side is
The net momentum flux at is the difference of the two:
According to the definition of viscosity, this momentum flux should be equal to , which leads to
|}
Viscosity in gases arises principally from the molecular diffusion that transports momentum between layers of flow. An elementary calculation for a dilute gas at temperature $T$ and density $\rho$ gives

$\mu = \alpha \rho \lambda \sqrt{\frac{2 k_B T}{\pi m}},$

where $k_B$ is the Boltzmann constant, $m$ the molecular mass, and $\alpha$ a numerical constant on the order of $1$. The quantity $\lambda$, the mean free path, measures the average distance a molecule travels between collisions. Even without a priori knowledge of $\alpha$, this expression has nontrivial implications. In particular, since $\lambda$ is typically inversely proportional to density, while the thermal speed increases with temperature, $\mu$ itself should increase with temperature and be independent of density at fixed temperature. In fact, both of these predictions persist in more sophisticated treatments, and accurately describe experimental observations. By contrast, liquid viscosity typically decreases with temperature.
For rigid elastic spheres of diameter $\sigma$, $\lambda$ can be computed, giving

$\mu = \frac{\alpha}{\pi^{3/2}} \frac{\sqrt{m k_B T}}{\sigma^2}.$

In this case $\lambda$ is independent of temperature, so $\mu \propto T^{1/2}$. For more complicated molecular models, however, $\lambda$ depends on temperature in a non-trivial way, and simple kinetic arguments as used here are inadequate. More fundamentally, the notion of a mean free path becomes imprecise for particles that interact over a finite range, which limits the usefulness of the concept for describing real-world gases.
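A minimal sketch evaluating the rigid-sphere result with the Chapman–Enskog value 5/16 in place of the generic order-one constant; the argon diameter used is an assumed textbook-style value, so the output is only an order-of-magnitude check.

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K

def mu_hard_sphere(T, m, sigma):
    """Hard-sphere gas viscosity, mu = (5/16) * sqrt(m*kB*T/pi) / sigma**2,
    independent of density."""
    return (5.0 / 16.0) * np.sqrt(m * kB * T / np.pi) / sigma**2

# Argon treated as a hard sphere; sigma ~ 3.4 angstrom is an assumption.
m_Ar = 39.948 * 1.66054e-27   # kg
print(mu_hard_sphere(300.0, m_Ar, 3.4e-10))   # ~2.5e-5 Pa*s (measured ~2.3e-5)
```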
Chapman–Enskog theory
A technique developed by Sydney Chapman and David Enskog in the early 1900s allows a more refined calculation of $\mu$. It is based on the Boltzmann equation, which provides a statistical description of a dilute gas in terms of intermolecular interactions. The technique allows accurate calculation of $\mu$ for molecular models that are more realistic than rigid elastic spheres, such as those incorporating intermolecular attractions. Doing so is necessary to reproduce the correct temperature dependence of $\mu$, which experiments show increases more rapidly than the $T^{1/2}$ trend predicted for rigid elastic spheres. Indeed, the Chapman–Enskog analysis shows that the predicted temperature dependence can be tuned by varying the parameters in various molecular models. A simple example is the Sutherland model, which describes rigid elastic spheres with weak mutual attraction. In such a case, the attractive force can be treated perturbatively, which leads to a simple expression for $\mu$:

$\mu = \frac{5}{16 \sigma^2} \left( \frac{k_B T m}{\pi} \right)^{1/2} \left( 1 + \frac{S}{T} \right)^{-1},$

where $S$ is independent of temperature, being determined only by the parameters of the intermolecular attraction. To connect with experiment, it is convenient to rewrite as

$\mu = \mu_0 \frac{T_0 + S}{T + S} \left( \frac{T}{T_0} \right)^{3/2},$

where $\mu_0$ is the viscosity at temperature $T_0$. This expression is usually named Sutherland's formula. If $\mu$ is known from experiments at $T_0$ and at least one other temperature, then $S$ can be calculated. Expressions for $\mu$ obtained in this way are qualitatively accurate for a number of simple gases. Slightly more sophisticated models, such as the Lennard-Jones potential, or the more flexible Mie potential, may provide better agreement with experiments, but only at the cost of a more opaque dependence on temperature. A further advantage of these more complex interaction potentials is that they can be used to develop accurate models for a wide variety of properties using the same potential parameters. In situations where little experimental data is available, this makes it possible to obtain model parameters from fitting to properties such as pure-fluid vapour-liquid equilibria, before using the parameters thus obtained to predict the viscosities of interest with reasonable accuracy.
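A short sketch evaluating Sutherland's formula; the air parameters in the comment are commonly quoted reference values, assumed here rather than taken from this article.

```python
def sutherland(T, mu0, T0, S):
    """Sutherland's formula: mu = mu0 * (T/T0)**1.5 * (T0 + S) / (T + S)."""
    return mu0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

# Commonly quoted values for air: mu0 = 1.716e-5 Pa*s at T0 = 273.15 K, S = 110.4 K.
print(sutherland(300.0, 1.716e-5, 273.15, 110.4))   # ~1.85e-5 Pa*s
```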
In some systems, the assumption of spherical symmetry must be abandoned, as is the case for vapors with highly polar molecules like H2O. In these cases, the Chapman–Enskog analysis is significantly more complicated.
Bulk viscosity
In the kinetic-molecular picture, a non-zero bulk viscosity arises in gases whenever there are non-negligible relaxational timescales governing the exchange of energy between the translational energy of molecules and their internal energy, e.g. rotational and vibrational. As such, the bulk viscosity is $0$ for a monatomic ideal gas, in which the internal energy of molecules is negligible, but is nonzero for a gas like carbon dioxide, whose molecules possess both rotational and vibrational energy.
Pure liquids
In contrast with gases, there is no simple yet accurate picture for the molecular origins of viscosity in liquids.
At the simplest level of description, the relative motion of adjacent layers in a liquid is opposed primarily by attractive molecular forces
acting across the layer boundary. In this picture, one (correctly) expects viscosity to decrease with increasing temperature. This is because
increasing temperature increases the random thermal motion of the molecules, which makes it easier for them to overcome their attractive interactions.
Building on this visualization, a simple theory can be constructed in analogy with the discrete structure of a solid: groups of molecules in a liquid
are visualized as forming "cages" which surround and enclose single molecules. These cages can be occupied or unoccupied, and
stronger molecular attraction corresponds to stronger cages.
Due to random thermal motion, a molecule "hops" between cages at a rate which varies inversely with the strength of molecular attractions. In equilibrium these "hops" are not biased in any direction.
On the other hand, in order for two adjacent layers to move relative to each other, the "hops" must be biased in the direction
of the relative motion. The force required to sustain this directed motion can be estimated for a given shear rate, leading to

$\mu \approx \frac{N_A h}{V} \exp\left( 3.8 \frac{T_b}{T} \right), \qquad (1)$

where $N_A$ is the Avogadro constant, $h$ is the Planck constant, $V$ is the volume of a mole of liquid, and $T_b$ is the normal boiling point. This result has the same form as the well-known empirical relation

$\mu = A e^{B/T}, \qquad (2)$

where $A$ and $B$ are constants fit from data. On the other hand, several authors express caution with respect to this model.
Errors as large as 30% can be encountered using equation (1), compared with fitting equation (2) to experimental data. More fundamentally, the physical assumptions underlying equation (1) have been criticized. It has also been argued that the exponential dependence in equation (1) does not necessarily describe experimental observations more accurately than simpler, non-exponential expressions.
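Because equation (2) contains only two constants, it is fixed by two measurements; this sketch fits A and B to two illustrative water data points (assumed values, not from the article) and interpolates.

```python
import numpy as np

T1, mu1 = 293.15, 1.002e-3   # K, Pa*s (illustrative)
T2, mu2 = 333.15, 0.467e-3   # K, Pa*s (illustrative)

# mu = A * exp(B / T)  =>  ln(mu1/mu2) = B * (1/T1 - 1/T2)
B = np.log(mu1 / mu2) / (1.0 / T1 - 1.0 / T2)
A = mu1 * np.exp(-B / T1)
print(A * np.exp(B / 313.15))   # interpolated value at 40 C, ~0.67e-3 Pa*s
```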
In light of these shortcomings, the development of a less ad hoc model is a matter of practical interest. Foregoing simplicity in favor of precision, it is possible to write rigorous expressions for viscosity starting from the fundamental equations of motion for molecules. A classic example of this approach is Irving–Kirkwood theory. On the other hand, such expressions are given as averages over multiparticle correlation functions and are therefore difficult to apply in practice.
In general, empirically derived expressions (based on existing viscosity measurements) appear to be the only consistently reliable means of calculating viscosity in liquids.
Local atomic structure changes observed in undercooled liquids on cooling below the equilibrium melting temperature, either in terms of the radial distribution function g(r) or the structure factor S(Q), are found to be directly responsible for liquid fragility: the deviation of the temperature dependence of the viscosity of the undercooled liquid from the Arrhenius equation (2), through modification of the activation energy for viscous flow. At the same time, equilibrium liquids follow the Arrhenius equation.
Mixtures and blends
Gaseous mixtures
The same molecular-kinetic picture of a single component gas can also be applied to a gaseous mixture. For instance, in the Chapman–Enskog approach the viscosity $\mu_{\text{mix}}$ of a binary mixture of gases can be written in terms of the individual component viscosities $\mu_{1,2}$, their respective volume fractions, and the intermolecular interactions.
As for the single-component gas, the dependence of $\mu_{\text{mix}}$ on the parameters of the intermolecular interactions enters through various collisional integrals which may not be expressible in closed form. To obtain usable expressions for $\mu_{\text{mix}}$ which reasonably match experimental data, the collisional integrals may be computed numerically or from correlations. In some cases, the collision integrals are regarded as fitting parameters, and are fitted directly to experimental data. This is a common approach in the development of reference equations for gas-phase viscosities. An example of such a procedure is the Sutherland approach for the single-component gas, discussed above.
For gas mixtures consisting of simple molecules, Revised Enskog Theory has been shown to accurately represent both the density- and temperature dependence of the viscosity over a wide range of conditions.
Blends of liquids
As for pure liquids, the viscosity of a blend of liquids is difficult to predict from molecular principles. One method is to extend the molecular "cage" theory presented above for a pure liquid. This can be done with varying levels of sophistication. One expression resulting from such an analysis is the Lederer–Roegiers equation for a binary mixture:

$\ln \mu_{\text{blend}} = \frac{x_1}{x_1 + \alpha x_2} \ln \mu_1 + \frac{\alpha x_2}{x_1 + \alpha x_2} \ln \mu_2,$

where $\alpha$ is an empirical parameter, and $x_1, x_2$ and $\mu_1, \mu_2$ are the respective mole fractions and viscosities of the component liquids.
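A sketch of the mixing rule as written above; the compositions and component viscosities are hypothetical.

```python
import numpy as np

def lederer_roegiers(x1, x2, mu1, mu2, alpha=1.0):
    """Logarithmic mixing with an empirical weighting parameter alpha;
    alpha = 1 reduces to a simple log-linear (Arrhenius-type) mixing rule."""
    w1 = x1 / (x1 + alpha * x2)
    w2 = alpha * x2 / (x1 + alpha * x2)
    return np.exp(w1 * np.log(mu1) + w2 * np.log(mu2))

# Hypothetical 60/40 blend of two oils with viscosities 0.10 and 0.02 Pa*s.
print(lederer_roegiers(0.6, 0.4, 0.10, 0.02, alpha=1.2))
```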
Since blending is an important process in the lubricating and oil industries, a variety of empirical and proprietary equations exist for predicting the viscosity of a blend.
Solutions and suspensions
Aqueous solutions
Depending on the solute and range of concentration, an aqueous electrolyte solution can have either a larger or smaller viscosity compared with pure water at the same temperature and pressure. For instance, a 20% saline (sodium chloride) solution has viscosity over 1.5 times that of pure water, whereas a 20% potassium iodide solution has viscosity about 0.91 times that of pure water.
An idealized model of dilute electrolytic solutions leads to the following prediction for the viscosity $\mu_s$ of a solution:

$\frac{\mu_s}{\mu_0} = 1 + A \sqrt{c},$

where $\mu_0$ is the viscosity of the solvent, $c$ is the concentration, and $A$ is a positive constant which depends on both solvent and solute properties. However, this expression is only valid for very dilute solutions, having $c$ less than 0.1 mol/L. For higher concentrations, additional terms are necessary which account for higher-order molecular correlations:

$\frac{\mu_s}{\mu_0} = 1 + A \sqrt{c} + B c + C c^2,$

where $B$ and $C$ are fit from data. In particular, a negative value of $B$ is able to account for the decrease in viscosity observed in some solutions. Estimated values of these constants are shown below for sodium chloride and potassium iodide at temperature 25 °C (mol = mole, L = liter).
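A sketch evaluating the concentration expansion above; the coefficients are hypothetical placeholders (the fitted constants referred to in the text are not reproduced here), with a negative B illustrating a viscosity-lowering solute.

```python
import numpy as np

def electrolyte_viscosity(c, mu0, A, B, C=0.0):
    """mu/mu0 = 1 + A*sqrt(c) + B*c + C*c**2; the last two terms matter
    above roughly 0.1 mol/L."""
    return mu0 * (1.0 + A * np.sqrt(c) + B * c + C * c * c)

# Hypothetical coefficients for illustration only.
print(electrolyte_viscosity(0.5, 8.9e-4, A=0.006, B=-0.075))
```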
Suspensions
In a suspension of solid particles (e.g. micron-size spheres suspended in oil), an effective viscosity $\mu_{\text{eff}}$ can be defined in terms of stress and strain components which are averaged over a volume large compared with the distance between the suspended particles, but small with respect to macroscopic dimensions. Such suspensions generally exhibit non-Newtonian behavior. However, for dilute systems in steady flows, the behavior is Newtonian and expressions for $\mu_{\text{eff}}$ can be derived directly from the particle dynamics. In a very dilute system, with volume fraction $\phi \lesssim 0.02$, interactions between the suspended particles can be ignored. In such a case one can explicitly calculate the flow field around each particle independently, and combine the results to obtain $\mu_{\text{eff}}$. For spheres, this results in Einstein's effective viscosity formula:

$\mu_{\text{eff}} = \mu_0 \left( 1 + \frac{5}{2} \phi \right),$

where $\mu_0$ is the viscosity of the suspending liquid. The linear dependence on $\phi$ is a consequence of neglecting interparticle interactions. For dilute systems in general, one expects $\mu_{\text{eff}}$ to take the form

$\mu_{\text{eff}} = \mu_0 (1 + B \phi),$

where the coefficient $B$ may depend on the particle shape (e.g. spheres, rods, disks). Experimental determination of the precise value of $B$ is difficult, however: even the prediction $B = 5/2$ for spheres has not been conclusively validated, with various experiments finding values in the range $1.5 \lesssim B \lesssim 5$. This deficiency has been attributed to difficulty in controlling experimental conditions.
In denser suspensions, $\mu_{\text{eff}}$ acquires a nonlinear dependence on $\phi$, which indicates the importance of interparticle interactions. Various analytical and semi-empirical schemes exist for capturing this regime. At the most basic level, a term quadratic in $\phi$ is added to $\mu_{\text{eff}}$:

$\mu_{\text{eff}} = \mu_0 (1 + B \phi + B_1 \phi^2),$

and the coefficient $B_1$ is fit from experimental data or approximated from the microscopic theory. However, some authors advise caution in applying such simple formulas since non-Newtonian behavior appears in dense suspensions ($\phi \gtrsim 0.25$ for spheres), or in suspensions of elongated or flexible particles.
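A sketch of the dilute-suspension formulas above, with Einstein's coefficient B = 5/2 as the default and a purely illustrative quadratic coefficient.

```python
def suspension_viscosity(mu0, phi, B=2.5, B1=0.0):
    """mu_eff = mu0 * (1 + B*phi + B1*phi**2); B = 5/2 is Einstein's
    prediction for rigid spheres, B1 must be fitted or taken from theory."""
    return mu0 * (1.0 + B * phi + B1 * phi * phi)

# 5% rigid spheres by volume in a 1 mPa*s liquid.
print(suspension_viscosity(1.0e-3, 0.05))            # Einstein term only
print(suspension_viscosity(1.0e-3, 0.05, B1=6.2))    # with an illustrative quadratic term
```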
There is a distinction between a suspension of solid particles, described above, and an emulsion. The latter is a suspension of tiny droplets, which themselves may exhibit internal circulation. The presence of internal circulation can decrease the observed effective viscosity, and different theoretical or semi-empirical models must be used.
Amorphous materials
In the high and low temperature limits, viscous flow in amorphous materials (e.g. in glasses and melts) has the Arrhenius form:

$\mu = A e^{Q/(RT)},$

where $Q$ is a relevant activation energy, given in terms of molecular parameters; $T$ is temperature; $R$ is the molar gas constant; and $A$ is approximately a constant. The activation energy takes a different value depending on whether the high or low temperature limit is being considered: it changes from a high value at low temperatures (in the glassy state) to a low value at high temperatures (in the liquid state).
For intermediate temperatures, $Q$ varies nontrivially with temperature and the simple Arrhenius form fails. On the other hand, the two-exponential equation

$\mu = A T \exp\left( \frac{B}{T} \right) \left[ 1 + C \exp\left( \frac{D}{T} \right) \right],$

where $A$, $B$, $C$, $D$ are all constants, provides a good fit to experimental data over the entire range of temperatures, while at the same time reducing to the correct Arrhenius form in the low and high temperature limits. This expression, also known as the Douglas-Doremus-Ojovan model, can be motivated from various theoretical models of amorphous materials at the atomic level.
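A sketch evaluating the two-exponential form; for a real material the four constants must be fitted to data, so none are supplied here.

```python
import numpy as np

def two_exponential_viscosity(T, A, B, C, D):
    """mu = A * T * exp(B/T) * (1 + C * exp(D/T)); reduces to Arrhenius
    behaviour with a low activation energy at high T and a high one at low T."""
    return A * T * np.exp(B / T) * (1.0 + C * np.exp(D / T))
```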
A two-exponential equation for the viscosity can be derived within the Dyre shoving model of supercooled liquids, where the Arrhenius energy barrier is identified with the high-frequency shear modulus times a characteristic shoving volume. Upon specifying the temperature dependence of the shear modulus via thermal expansion and via the repulsive part of the intermolecular potential, another two-exponential equation is retrieved:

$\mu = \mu_0 \exp\left[ \frac{V_c\, G_{T_g}}{k_B T} \exp\big( (2 + \lambda)\, \alpha_T\, (T_g - T) \big) \right],$

where $G_{T_g}$ denotes the high-frequency shear modulus of the material evaluated at a temperature equal to the glass transition temperature $T_g$, and $V_c$ is the so-called shoving volume, i.e. the characteristic volume of the group of atoms involved in the shoving event by which an atom/molecule escapes from the cage of nearest-neighbours, typically on the order of the volume occupied by a few atoms. Furthermore, $\alpha_T$ is the thermal expansion coefficient of the material, $\lambda$ is a parameter which measures the steepness of the power-law rise of the ascending flank of the first peak of the radial distribution function, and is quantitatively related to the repulsive part of the interatomic potential. Finally, $k_B$ denotes the Boltzmann constant.
Eddy viscosity
In the study of turbulence in fluids, a common practical strategy is to ignore the small-scale vortices (or eddies) in the motion and to calculate a large-scale motion with an effective viscosity, called the "eddy viscosity", which characterizes the transport and dissipation of energy in the smaller-scale flow (see large eddy simulation). In contrast to the viscosity of the fluid itself, which must be positive by the second law of thermodynamics, the eddy viscosity can be negative.
Prediction
Because viscosity depends continuously on temperature and pressure, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available at the temperatures and pressures of interest. This capability is important for thermophysical simulations,
in which the temperature and pressure of a fluid can vary continuously with space and time. A similar situation is encountered for mixtures of pure fluids, where the viscosity depends continuously on the concentration ratios of the constituent fluids.
For the simplest fluids, such as dilute monatomic gases and their mixtures, ab initio quantum mechanical computations can accurately predict viscosity in terms of fundamental atomic constants, i.e., without reference to existing viscosity measurements. For the special case of dilute helium, uncertainties in the ab initio calculated viscosity are two orders of magnitude smaller than uncertainties in experimental values.
For slightly more complex fluids and mixtures at moderate densities (i.e. sub-critical densities) Revised Enskog Theory can be used to predict viscosities with some accuracy. Revised Enskog Theory is predictive in the sense that predictions for viscosity can be obtained using parameters fitted to other, pure-fluid thermodynamic properties or transport properties, thus requiring no a priori experimental viscosity measurements.
For most fluids, high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing viscosity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures and pressures, then it is called a "reference correlation" for that fluid. Reference correlations have been published for many pure fluids; a few examples are water, carbon dioxide, ammonia, benzene, and xenon. Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases.
Thermophysical modeling software often relies on reference correlations for predicting viscosity at user-specified temperature and pressure.
These correlations may be proprietary. Examples are REFPROP (proprietary) and CoolProp
(open-source).
Viscosity can also be computed using formulas that express it in terms of the statistics of individual particle
trajectories. These formulas include the Green–Kubo relations for the linear shear viscosity and the transient time correlation function expressions derived by Evans and Morriss in 1988.
The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics.
An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules.
Selected substances
Observed values of viscosity vary over several orders of magnitude, even for common substances (see the order of magnitude table below). For instance, a 70% sucrose (sugar) solution has a viscosity over 400 times that of water, and 26,000 times that of air. More dramatically, pitch has been estimated to have a viscosity 230 billion times that of water.
Water
The dynamic viscosity of water is about 0.89 mPa·s at room temperature (25 °C). As a function of temperature in kelvins, the viscosity can be estimated using the semi-empirical Vogel-Fulcher-Tammann equation:

$\mu = A \exp\left( \frac{B}{T - C} \right),$

where A = 0.02939 mPa·s, B = 507.88 K, and C = 149.3 K. Experimentally determined values of the viscosity are also given in the table below. The values at 20 °C are a useful reference: there, the dynamic viscosity is about 1 cP and the kinematic viscosity is about 1 cSt.
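A sketch evaluating the Vogel-Fulcher-Tammann fit with the constants quoted above.

```python
import numpy as np

def water_viscosity_mPa_s(T_K):
    """mu = A * exp(B / (T - C)) with A = 0.02939 mPa*s, B = 507.88 K, C = 149.3 K."""
    A, B, C = 0.02939, 507.88, 149.3
    return A * np.exp(B / (T_K - C))

print(water_viscosity_mPa_s(298.15))   # ~0.89 mPa*s at 25 C
```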
Air
Under standard atmospheric conditions (25 °C and pressure of 1 bar), the dynamic viscosity of air is 18.5 μPa·s, roughly 50 times smaller than the viscosity of water at the same temperature. Except at very high pressure, the viscosity of air depends mostly on the temperature. Among the many possible approximate formulas for the temperature dependence (see Temperature dependence of viscosity), one is:

$\mu_{\text{air}} = 2.791 \times 10^{-7} \times T^{0.7355}\ \mathrm{Pa{\cdot}s},$

which is accurate in the range −20 °C to 400 °C. For this formula to be valid, the temperature must be given in kelvins; $\mu_{\text{air}}$ then corresponds to the viscosity in Pa·s.
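A sketch evaluating the power-law fit above (temperature in kelvins, result in Pa·s).

```python
def air_viscosity_Pa_s(T_K):
    """mu = 2.791e-7 * T**0.7355, valid roughly from -20 C to 400 C."""
    return 2.791e-7 * T_K ** 0.7355

print(air_viscosity_Pa_s(298.15))   # ~1.84e-5 Pa*s
```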
Other common substances
Order of magnitude estimates
The following table illustrates the range of viscosity values observed in common substances. Unless otherwise noted, a temperature of 25 °C and a pressure of 1 atmosphere are assumed.
The values listed are representative estimates only, as they do not account for measurement uncertainties, variability in material definitions, or non-Newtonian behavior.
See also
References
Footnotes
Citations
Sources
External links
Viscosity - The Feynman Lectures on Physics
Fluid properties – high accuracy calculation of viscosity for frequently encountered pure liquids and gases
Fluid Characteristics Chart – a table of viscosities and vapor pressures for various fluids
Gas Dynamics Toolbox – calculate coefficient of viscosity for mixtures of gases
Glass Viscosity Measurement – viscosity measurement, viscosity units and fixpoints, glass viscosity calculation
Kinematic Viscosity – conversion between kinematic and dynamic viscosity
Physical Characteristics of Water – a table of water viscosity as a function of temperature
Calculation of temperature-dependent dynamic viscosities for some common components
Artificial viscosity
Viscosity of Air, Dynamic and Kinematic, Engineers Edge
Articles containing video clips
Aerodynamics
Fluid dynamics | Viscosity | [
"Physics",
"Chemistry",
"Engineering"
] | 8,913 | [
"Physical phenomena",
"Physical quantities",
"Chemical engineering",
"Aerodynamics",
"Aerospace engineering",
"Piping",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties",
"Fluid dynamics"
] |
18,963,787 | https://en.wikipedia.org/wiki/Ion | An ion () is an atom or molecule with a net electrical charge. The charge of an electron is considered to be negative by convention and this charge is equal and opposite to the charge of a proton, which is considered to be positive by convention. The net charge of an ion is not zero because its total number of electrons is unequal to its total number of protons.
A cation is a positively charged ion with fewer electrons than protons (e.g. K+ (potassium ion)) while an anion is a negatively charged ion with more electrons than protons. (e.g. Cl− (chloride ion) and OH− (hydroxide ion)). Opposite electric charges are pulled towards one another by electrostatic force, so cations and anions attract each other and readily form ionic compounds.
If only a + or - is present, it indicates a +1 or -1 charge. To indicate a charge of greater magnitude, the number of additional or missing electrons is supplied, as seen in O22- (doubly negative charge, peroxide) and He2+ (doubly positive charge, alpha particle).
Ions consisting of only a single atom are termed atomic or monatomic ions, while two or more atoms form molecular ions or polyatomic ions. In the case of physical ionization in a fluid (gas or liquid), "ion pairs" are created by spontaneous molecule collisions, where each generated pair consists of a free electron and a positive ion. Ions are also created by chemical interactions, such as the dissolution of a salt in liquids, or by other means, such as passing a direct current through a conducting solution, dissolving an anode via ionization.
History of discovery
The word ion was coined from the neuter present participle of Greek ἰέναι (ienai), meaning "to go". A cation is something that moves down (κάτω, kátō, meaning "down") and an anion is something that moves up (ἄνω, ánō, meaning "up"). They are so called because ions move toward the electrode of opposite charge. This term was introduced (after a suggestion by the English polymath William Whewell) by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday did not know the nature of these species, but he knew that, since metals dissolved into a solution at one electrode and new metal came forth from a solution at the other electrode, some kind of substance must have moved through the solution in a current, conveying matter from one place to the other.
In correspondence with Faraday, Whewell also coined the words anode and cathode, as well as anion and cation as ions that are attracted to the respective electrodes.
Svante Arrhenius put forth, in his 1884 dissertation, the explanation of the fact that solid crystalline salts dissociate into paired charged particles when dissolved, for which he would win the 1903 Nobel Prize in Chemistry. Arrhenius' explanation was that in forming a solution, the salt dissociates into Faraday's ions; he proposed that ions formed even in the absence of an electric current.
Characteristics
Ions in their gas-like state are highly reactive and will rapidly interact with ions of opposite charge to give neutral molecules or ionic salts. Ions are also produced in the liquid or solid state when salts interact with solvents (for example, water) to produce solvated ions, which are more stable, for reasons involving a combination of energy and entropy changes as the ions move away from each other to interact with the liquid. These stabilized species are more commonly found in the environment at low temperatures. A common example is the ions present in seawater, which are derived from dissolved salts.
As charged objects, ions are attracted to opposite electric charges (positive to negative, and vice versa) and repelled by like charges. When they move, their trajectories can be deflected by a magnetic field.
Electrons, due to their smaller mass and thus larger space-filling properties as matter waves, determine the size of atoms and molecules that possess any electrons at all. Thus, anions (negatively charged ions) are larger than the parent molecule or atom, as the excess electron(s) repel each other and add to the physical size of the ion, because its size is determined by its electron cloud. Cations are smaller than the corresponding parent atom or molecule due to the smaller size of the electron cloud. One particular cation (that of hydrogen) contains no electrons, and thus consists of a single proton – much smaller than the parent hydrogen atom.
Anions and cations
Anion (−) and cation (+) indicate the net electric charge on an ion. An ion that has more electrons than protons, giving it a net negative charge, is named an anion, and a minus indication "Anion (−)" indicates the negative charge. With a cation it is just the opposite: it has fewer electrons than protons, giving it a net positive charge, hence the indication "Cation (+)".
Since the electric charge on a proton is equal in magnitude to the charge on an electron, the net electric charge on an ion is equal to the number of protons in the ion minus the number of electrons.
An anion (−) (from the Greek word ἄνω (ánō), meaning "up") is an ion with more electrons than protons, giving it a net negative charge (since electrons are negatively charged and protons are positively charged).
A cation (+) (from the Greek word κάτω (kátō), meaning "down") is an ion with fewer electrons than protons, giving it a positive charge.
There are additional names used for ions with multiple charges. For example, an ion with a −2 charge is known as a dianion and an ion with a +2 charge is known as a dication. A zwitterion is a neutral molecule with positive and negative charges at different locations within that molecule.
Cations and anions are measured by their ionic radius and they differ in relative size: "Cations are small, most of them less than 10−10 m (10−8 cm) in radius. But most anions are large, as is the most common Earth anion, oxygen. From this fact it is apparent that most of the space of a crystal is occupied by the anion and that the cations fit into the spaces between them."
The terms anion and cation (for ions that respectively travel to the anode and cathode during electrolysis) were introduced by Michael Faraday in 1834 following his consultation with William Whewell.
Natural occurrences
Ions are ubiquitous in nature and are responsible for diverse phenomena from the luminescence of the Sun to the existence of the Earth's ionosphere. Atoms in their ionic state may have a different color from neutral atoms, and thus light absorption by metal ions gives the color of gemstones. In both inorganic and organic chemistry (including biochemistry), the interaction of water and ions is often relevant for understanding properties of systems; an example of their importance is in the breakdown of adenosine triphosphate (ATP), which provides the energy for many reactions in biological systems.
Related technology
Ions can be non-chemically prepared using various ion sources, usually involving high voltage or temperature. These are used in a multitude of devices such as mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters, and ion engines.
As reactive charged particles, they are also used in air purification by disrupting microbes, and in household items such as smoke detectors.
As signalling and metabolism in organisms are controlled by a precise ionic gradient across membranes, the disruption of this gradient contributes to cell death. This is a common mechanism exploited by natural and artificial biocides, including the ion channels gramicidin and amphotericin (a fungicide).
Inorganic dissolved ions are a component of total dissolved solids, a widely known indicator of water quality.
Detection of ionizing radiation
The ionizing effect of radiation on a gas is extensively used for the detection of radiation such as alpha, beta, gamma, and X-rays. The original ionization event in these instruments results in the formation of an "ion pair"; a positive ion and a free electron, by ion impact by the radiation on the gas molecules. The ionization chamber is the simplest of these detectors, and collects all the charges created by direct ionization within the gas through the application of an electric field.
The Geiger–Müller tube and the proportional counter both use a phenomenon known as a Townsend avalanche to multiply the effect of the original ionizing event by means of a cascade effect whereby the free electrons are given sufficient energy by the electric field to release further electrons by ion impact.
Chemistry
Denoting the charged state
When writing the chemical formula for an ion, its net charge is written in superscript immediately after the chemical structure for the molecule/atom. The net charge is written with the magnitude before the sign; that is, a doubly charged cation is indicated as 2+ instead of +2. However, the magnitude of the charge is omitted for singly charged molecules/atoms; for example, the sodium cation is indicated as Na+ and not Na1+.
An alternative (and acceptable) way of showing a molecule/atom with multiple charges is by drawing out the signs multiple times; this is often seen with transition metals. Chemists sometimes circle the sign; this is merely ornamental and does not alter the chemical meaning. All three representations of Fe2+ (the superscript numeral, repeated signs, and circled signs) are thus equivalent.
Monatomic ions are sometimes also denoted with Roman numerals, particularly in spectroscopy; for example, the positively doubly charged example seen above, Fe2+, is referred to as Fe III (Fe I for a neutral Fe atom, Fe II for a singly ionized Fe+ ion). The Roman numeral designates the formal oxidation state of an element, whereas the superscripted Indo-Arabic numerals denote the net charge. The two notations are, therefore, exchangeable for monatomic ions, but the Roman numerals cannot be applied to polyatomic ions. However, it is possible to mix the notations for the individual metal centre with a polyatomic complex, as shown by the uranyl ion example.
Sub-classes
If an ion contains unpaired electrons, it is called a radical ion. Just like uncharged radicals, radical ions are very reactive. Polyatomic ions containing oxygen, such as carbonate and sulfate, are called oxyanions. Molecular ions that contain at least one carbon to hydrogen bond are called organic ions. If the charge in an organic ion is formally centred on a carbon, it is termed a carbocation (if positively charged) or carbanion (if negatively charged).
Formation
Formation of monatomic ions
Monatomic ions are formed by the gain or loss of electrons to the valence shell (the outer-most electron shell) in an atom. The inner shells of an atom are filled with electrons that are tightly bound to the positively charged atomic nucleus, and so do not participate in this kind of chemical interaction. The process of gaining or losing electrons from a neutral atom or molecule is called ionization.
Atoms can be ionized by bombardment with radiation, but the more usual process of ionization encountered in chemistry is the transfer of electrons between atoms or molecules. This transfer is usually driven by the attaining of stable ("closed shell") electronic configurations. Atoms will gain or lose electrons depending on which action takes the least energy.
For example, a sodium atom, Na, has a single electron in its valence shell, surrounding 2 stable, filled inner shells of 2 and 8 electrons. Since these filled shells are very stable, a sodium atom tends to lose its extra electron and attain this stable configuration, becoming a sodium cation in the process
Na -> Na+ + e-
On the other hand, a chlorine atom, Cl, has 7 electrons in its valence shell, which is one short of the stable, filled shell with 8 electrons. Thus, a chlorine atom tends to gain an extra electron and attain a stable 8-electron configuration, becoming a chloride anion in the process:
Cl + e- -> Cl-
This driving force is what causes sodium and chlorine to undergo a chemical reaction, wherein the "extra" electron is transferred from sodium to chlorine, forming sodium cations and chloride anions. Being oppositely charged, these cations and anions form ionic bonds and combine to form sodium chloride, NaCl, more commonly known as table salt.
Na+ + Cl- -> NaCl
Formation of polyatomic and molecular ions
Polyatomic and molecular ions are often formed by the gaining or losing of elemental ions such as a proton, H+, in neutral molecules. For example, when ammonia, NH3, accepts a proton, H+ (a process called protonation), it forms the ammonium ion, NH4+. Ammonia and ammonium have the same number of electrons in essentially the same electronic configuration, but ammonium has an extra proton that gives it a net positive charge.
Ammonia can also lose an electron to gain a positive charge, forming the ion NH3+. However, this ion is unstable, because it has an incomplete valence shell around the nitrogen atom, making it a very reactive radical ion.
Due to the instability of radical ions, polyatomic and molecular ions are usually formed by gaining or losing elemental ions such as H+, rather than gaining or losing electrons. This allows the molecule to preserve its stable electronic configuration while acquiring an electrical charge.
Ionization potential
The energy required to detach an electron in its lowest energy state from an atom or molecule of a gas with less net electric charge is called the ionization potential, or ionization energy. The nth ionization energy of an atom is the energy required to detach its nth electron after the first n − 1 electrons have already been detached.
Each successive ionization energy is markedly greater than the last. Particularly great increases occur after any given block of atomic orbitals is exhausted of electrons. For this reason, ions tend to form in ways that leave them with full orbital blocks. For example, sodium has one valence electron in its outermost shell, so in ionized form it is commonly found with one lost electron, as Na+. On the other side of the periodic table, chlorine has seven valence electrons, so in ionized form it is commonly found with one gained electron, as Cl−. Caesium has the lowest measured ionization energy of all the elements and helium has the greatest. In general, the ionization energy of metals is much lower than the ionization energy of nonmetals, which is why, in general, metals will lose electrons to form positively charged ions and nonmetals will gain electrons to form negatively charged ions.
Ionic bonding
Ionic bonding is a kind of chemical bonding that arises from the mutual attraction of oppositely charged ions. Ions of like charge repel each other, and ions of opposite charge attract each other. Therefore, ions do not usually exist on their own, but will bind with ions of opposite charge to form a crystal lattice. The resulting compound is called an ionic compound, and is said to be held together by ionic bonding. In ionic compounds there arise characteristic distances between ion neighbours from which the spatial extension and the ionic radius of individual ions may be derived.
The most common type of ionic bonding is seen in compounds of metals and nonmetals (except noble gases, which rarely form chemical compounds). Metals are characterized by having a small number of electrons in excess of a stable, closed-shell electronic configuration. As such, they have the tendency to lose these extra electrons in order to attain a stable configuration. This property is known as electropositivity. Non-metals, on the other hand, are characterized by having an electron configuration just a few electrons short of a stable configuration. As such, they have the tendency to gain more electrons in order to achieve a stable configuration. This tendency is known as electronegativity. When a highly electropositive metal is combined with a highly electronegative nonmetal, the extra electrons from the metal atoms are transferred to the electron-deficient nonmetal atoms. This reaction produces metal cations and nonmetal anions, which are attracted to each other to form a salt.
Common ions
See also
Air ioniser
Aurora
Electrolyte
Gaseous ionization detector
Ioliomics
Ion beam
Ion exchange
Ionizing radiation
Stopping power of radiation particles
References
Physical chemistry
Charge carriers | Ion | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,422 | [
"Matter",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Charge carriers",
"Electrical phenomena",
"Condensed matter physics",
"nan",
"Physical chemistry",
"Ions"
] |
18,964,485 | https://en.wikipedia.org/wiki/Pressure-retarded%20osmosis | Pressure retarded osmosis (PRO) is a technique to separate a solvent (for example, fresh water) from a solution that is more concentrated (e.g. sea water) and also pressurized. A semipermeable membrane allows the solvent to pass to the concentrated solution side by osmosis. The technique can be used to generate power from the salinity gradient energy resulting from the difference in the salt concentration between sea and river water.
History
This method of generating power was invented by Prof. Sidney Loeb in 1973 at the Ben-Gurion University of the Negev, Beersheba, Israel.
Richard Norman submitted a manuscript describing the concept to Science in May 1974. In that manuscript, Norman clearly indicated that he was unaware of any prior work on the topic. Loeb submitted a comment on Norman's cost analysis to Science in January 1975. In that publication, Loeb proposed the term "pressure retarded osmosis". He further wrote "To facilitate examination of the concept in some detail, the United States-Israel Binational Science Foundation awarded a grant (No. 337) to our Research Authority in May 1974."
Scientific and technical background
The ideal power production formula, which applies to an idealized situation, predicts that the optimal hydraulic pressure difference, Δp, is one-half the osmotic pressure difference, Δπ, between the saline and pure water streams. For a seawater to fresh water PRO system, the ideal case corresponds to an optimal power pressure of 26 bars. This pressure is equivalent to a column of water (hydraulic head) 270 meters high.
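A short derivation sketch of this result, under the usual idealized assumptions (the membrane water permeability A and the symbols Δp and Δπ are notation assumed here): the power density is the product of the water flux and the hydraulic pressure,

    W = A (Δπ − Δp) Δp.

Setting dW/dΔp = A (Δπ − 2Δp) = 0 gives the optimum Δp = Δπ/2 and a maximum power density of A Δπ²/4. Since one bar corresponds to roughly 10.2 m of water column, a 26 bar operating pressure translates into the hydraulic head of about 270 m quoted above.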
In a real-world system, both the hydraulic pressure and the osmotic pressure will vary through the PRO system as a result of friction, water removal, and salt build up near the membranes. These factors reduce the achievable power below the ideal limit. The amount of membrane area that can be used is limited by cost and other practical considerations, and this factor limits achievable power production. A significant portion of the electrical power generated by PRO must be used by the pumps that circulate water through the plant. Appropriate membranes are also necessary. All these factors have limited the economic viability of PRO.
PRO has the potential to extract osmotic power from waste streams, such as desalination plant brine discharge or treated wastewater effluent. The potential power output is proportional to the salinity difference between the fresh and saline water streams. Desalination yields very salty brine, while treated municipal wastewater has relatively little salt. Combining those streams could produce energy to power both facilities. However, powering an existing wastewater treatment plant in a mid-size city by mixing its treated wastewater with seawater could require a membrane area of 2.5 million square meters.
Process
PRO uses a water-permeable membrane and an osmotic pressure difference to drive water flux from a low-concentration "diluate" stream into a pressurized, higher-concentration draw stream. An energy recovery device on the pressurized stream provides the energy output, which must exceed the pumping energy input for net power production.
Testing
The world's first osmotic plant with capacity of 10 kW was opened by Statkraft, a state-owned hydropower company, on 24 November 2009 in Tofte, Norway.
It had been estimated that PRO could generate 12 TWh annually in Norway, sufficient to meet 10% of Norway's total demand for electricity.
In January 2014, Statkraft terminated their pressure-retarded osmosis pilot project due to economic feasibility concerns.
Starting in 2021, SaltPower is building another commercial osmotic power plant in Denmark using very high salinity brine from a geothermal power plant.
See also
Electrodialysis reversal (EDR)
Forward osmosis
Green energy
Osmotic power
Osmotic pressure
Renewable energy
Reverse electrodialysis (RED)
Reverse osmosis
Semipermeable membrane
Van 't Hoff factor
References
Further reading
Sustainable technologies
Sustainable energy
Energy conversion
Israeli inventions
Membrane technology | Pressure-retarded osmosis | [
"Chemistry"
] | 821 | [
"Membrane technology",
"Separation processes"
] |
18,964,603 | https://en.wikipedia.org/wiki/Bioreporter | Bioreporters are intact, living microbial cells that have been genetically engineered to produce a measurable signal in response to a specific chemical or physical agent in their environment. Bioreporters contain two essential genetic elements, a promoter gene and a reporter gene. The promoter gene is turned on (transcribed) when the target agent is present in the cell’s environment. The promoter gene in a normal bacterial cell is linked to other genes that are then likewise transcribed and then translated into proteins that help the cell in either combating or adapting to the agent to which it has been exposed. In the case of a bioreporter, these genes, or portions thereof, have been removed and replaced with a reporter gene. As a result, turning on the promoter gene also turns on the reporter gene, leading to the production of reporter proteins that output a detectable signal. The presence of a signal indicates that the bioreporter has sensed a particular agent in its environment.
Originally developed for fundamental analysis of factors affecting gene expression, bioreporters were early on applied for the detection of environmental contaminants and have since evolved into fields as diverse as medical diagnostics, precision agriculture, food safety assurance, process monitoring and control, and bio-microelectronic computing. Their versatility stems from the fact that there exist a large number of reporter gene systems that are capable of generating a variety of signals. Additionally, reporter genes can be genetically inserted into bacterial, yeast, plant, and mammalian cells, thereby providing considerable functionality over a wide range of host vectors.
Reporter gene systems
Several types of reporter genes are available for use in the construction of bioreporter organisms, and the signals they generate can usually be categorized as either colorimetric, fluorescent, luminescent, chemiluminescent or electrochemical. Although each functions differently, their end product always remains the same – a measurable signal that is proportional to the concentration of the unique chemical or physical agent to which they have been exposed. In some instances, the signal only occurs when a secondary substrate is added to the bioassay (luxAB, Luc, and aequorin). For other bioreporters, the signal must be activated by an external light source (GFP and UMT), and for a select few bioreporters, the signal is completely self-induced, with no exogenous substrate or external activation being required (luxCDABE). The following sections outline in brief some of the reporter gene systems available and their existing applications.
Bacterial luciferase (Lux)
Luciferase is a generic name for an enzyme that catalyzes a light-emitting reaction. Luciferases can be found in bacteria, algae, fungi, jellyfish, insects, shrimp, and squid, and the resulting light that these organisms produce is termed bioluminescence. In bacteria, the genes responsible for the light-emitting reaction (the lux genes) have been isolated and used extensively in the construction of bioreporters that emit a blue-green light with a maximum intensity at 490 nm. Three variants of lux are available, one that functions at < 30°C, another at < 37°C, and a third at < 45°C. The lux genetic system consists of five genes, luxA, luxB, luxC, luxD, and luxE. Depending on the combination of these genes used, several different types of bioluminescent bioreporters can be constructed.
luxAB Bioreporters
luxAB bioreporters contain only the luxA and luxB genes, which together are responsible for generating the light signal. However, to fully complete the light-emitting reaction, a substrate must be supplied to the cell. Typically, this occurs through the addition of the chemical decanal at some point during the bioassay procedure. Numerous luxAB bioreporters have been constructed within bacterial, yeast, insect, nematode, plant, and mammalian cell systems.
luxCDABE Bioreporters
Instead of containing only the luxA and luxB genes, bioreporters can contain all five genes of the lux cassette, thereby allowing for a completely independent light generating system that requires no extraneous additions of substrate nor any excitation by an external light source. In this bioassay, the bioreporter is simply exposed to a target analyte and a quantitative increase in bioluminescence results, often within less than one hour. Their rapidity and ease of use, along with the ability to perform the bioassay repetitively in real time and on-line, make luxCDABE bioreporters extremely attractive. Consequently, they have been incorporated into a diverse array of detection methodologies ranging from the sensing of environmental contaminants to the real-time monitoring of pathogen infections in living mice.
Nonspecific lux Bioreporters
Nonspecific lux bioreporters are typically used for the detection of chemical toxins. They are usually designed to continuously bioluminesce. Upon exposure to a chemical toxin, either the cell dies or its metabolic activity is retarded, leading to a decrease in bioluminescent light levels. Their most familiar application is in the Microtox assay where, following a short exposure to several concentrations of the sample, the decreased bioluminescence can be correlated to relative levels of toxicity.
Firefly luciferase (Luc)
Firefly luciferase catalyzes a reaction that produces visible light in the 550 to 575 nm range. A click-beetle luciferase is also available that produces light at a peak closer to 595 nm. Both luciferases require the addition of an exogenous substrate (luciferin) for the light reaction to occur. Numerous luc-based bioreporters have been constructed for the detection of a wide array of inorganic and organic compounds of environmental concern. The most promising applications, however, probably rely on introducing the genetic code of the firefly luciferase into other eukaryotic cells and tissues.
Medical diagnostics
Insertion of the luc genes into a human cervical carcinoma cell line (HeLa) illustrated that tumor-cell clearance could be visualized within a living mouse by simply scanning with a charge-coupled device camera, allowing for chemotherapy treatment to rapidly be monitored on-line and in real-time. In another example, the luc genes were inserted into human breast cancer cell lines to develop a bioassay for the detection and measurement of substances with potential estrogenic and antiestrogenic activity.
Research on gene regulation
Particular promoters can be placed upstream of the luc gene; that is, the luc sequence can be fused to the promoter sequence at the DNA level. If such a construct is not too large, it can simply be introduced into eukaryotic cells using plasmids. This approach is widely used to study the activity of a given promoter in a given cell or tissue type, since the amount of light produced by the luciferase is directly proportional to the promoter activity. In addition to studying promoters, firefly luciferase assays offer the option of studying transcriptional activators: in these experiments the GAL4/UAS system is typically used, with its Gal4 upstream activating DNA sequence (UAS) placed upstream of the luc gene, while the different activators, or different variants or fragments of the same activator, are fused to the GAL4 DNA-binding module at the protein level. This way the transcriptional activity of the different GAL4 fusion proteins can be directly compared using light as a readout.
Aequorin
Aequorin is a photoprotein isolated from the bioluminescent jellyfish Aequorea victoria. Upon addition of calcium ions (Ca2+) and coelenterazine, a reaction occurs whose result is the generation of blue light in the 460 to 470 nm range. Aequorin has been incorporated into human B cell lines for the detection of pathogenic bacteria and viruses in what is referred to as the Cell CANARY assay (Cellular Analysis and Notification of Antigen Risks and Yields). The B cells are genetically engineered to produce aequorin. Upon exposure to antigens of different pathogens, the recombinant B cells emit light as a result of activation of an intracellular signaling cascade that releases calcium ions inside the cell.
Green fluorescent protein (GFP)
Green fluorescent protein (GFP) is also a photoprotein isolated and cloned from the jellyfish Aequorea victoria. Variants have also been isolated from the sea pansy Renilla reniformis. GFP produces a green fluorescent signal without the required addition of an exogenous substrate. All that is required is an ultraviolet light source to activate the fluorescent properties of the photoprotein. This ability to autofluoresce makes GFP highly desirable in biosensing assays since it can be used on-line and in situ to monitor intact, living cells. Additionally, the ability to alter GFP to produce light emissions besides green (i.e., cyan, red, and yellow) allows it to be used as a multianalyte detector. Consequently, GFP has been used extensively in bioreporter constructs within bacterial, yeast, nematode, plant, and mammalian hosts.
Uroporphyrinogen (urogen) III methyltransferase (UMT)
Uroporphyrinogen (urogen) III methyltransferase (UMT) catalyzes a reaction that yields two fluorescent products which produce a red-orange fluorescence in the 590 to 770 nm range when illuminated with ultraviolet light. So as with GFP, no addition of exogenous substrates is required. UMT has been used as a bioreporter for the selection of recombinant plasmids, as a marker for gene transcription in bacterial, yeast, and mammalian cells, and for the detection of toxic salts such as arsenite and antimonite.
References
Environmental science
Genetic engineering | Bioreporter | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 2,053 | [
"Biological engineering",
"Genetic engineering",
"nan",
"Molecular biology"
] |
18,966,100 | https://en.wikipedia.org/wiki/Desmocollin | Desmocollins are a subfamily of desmosomal cadherins, the transmembrane constituents of desmosomes. They are co-expressed with desmogleins to link adjacent cells by extracellular adhesion. There are seven desmosomal cadherins in humans, three desmocollins and four desmogleins. Desmosomal cadherins allow desmosomes to contribute to the integrity of tissue structure in multicellular living organisms.
Structure
Three isoforms of desmocollin proteins have been identified.
Desmocollin-1, coded by the DSC1 gene
Desmocollin-2, coded by the DSC2 gene
Desmocollin-3, coded by the DSC3 gene
Each desmocollin gene encodes a pair of proteins: a longer 'a' form and a shorter 'b' form. The 'a' and 'b' forms differ in the length of their C-terminus tails. The protein pair is generated by alternative splicing.
Desmocollin has four cadherin-like extracellular domains, an extracellular anchor domain, and an intracellular anchor domain. Additionally, the 'a' form has an intracellular cadherin-like sequence domain, which provides binding sites for other desmosomal proteins such as plakoglobin.
Expression
The desmosomal cadherins are expressed in tissue-specific patterns. Desmocollin-2 and desmoglein-2 are found in all desmosome-containing tissues such as colon and cardiac muscle tissues, while other desmosomal cadherins are restricted to stratified epithelial tissues.
All seven desmosomal cadherins are expressed in epidermis, but in a differentiation-specific manner. The '2' and '3' isoforms of desmocollin and desmoglein are expressed in the lower epidermal layers, and the '1' proteins and desmoglein-4 are expressed in the upper epidermal layers. Different isoforms are located in the same individual cells, and single desmosomes contain more than one isoform of both desmocollin and desmoglein.
It is unclear why there are multiple desmosomal cadherin isoforms. It is thought that they may have different adhesive properties that are required at different levels in stratified epithelia or that they have specific functions in epithelial differentiation.
Disorders
Desmosomes are involved in cell-cell adhesion, and are particularly important for the integrity of heart and skin tissue. Because of this, desmocollin gene mutations can affect the adhesion of cells that undergo mechanical stress, notably cardiomyocytes and keratinocytes. Genetic disorders associated with desmocollin gene mutations include Carvajal syndrome, striate palmoplantar keratoderma, Naxos disease, and arrhythmogenic right ventricular cardiomyopathy.
There is also evidence that autoimmunity against desmosomal cadherins contributes to cardiac inflammation associated with arrhythmogenic right ventricular cardiomyopathy, and that anti-desmosomal cadherin antibodies may represent new therapeutic targets.
See also
Desmoglein, a subfamily of desmosomal cadherins
Armadillo protein family, instrumental in the cytoplasmic anchoring of cadherins
Plakoglobin
Plakophilin
Plakin protein family, associates intercellular bridges to cytoskeleton
Desmoplakin
List of target antigens in pemphigus
List of conditions caused by problems with junctional proteins
References
Proteins | Desmocollin | [
"Chemistry"
] | 765 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
18,967,136 | https://en.wikipedia.org/wiki/Seismoelectrical%20method | The seismoelectrical method (which is different from the electroseismic physical principle) is based on the generation of electromagnetic fields in soils and rocks by seismic waves. This technique is still under development and in the future it may have applications like detecting and characterizing fluids in the underground by their electrical properties, among others, usually related to fluids (porosity, transmissivity, physical properties).
Operation
When a seismic wave encounters an interface, it creates a charge separation at the interface forming an electric dipole. This dipole radiates an electromagnetic wave that can be detected by antennae on the ground surface.
As the seismic (P or compression) waves stress earth materials, four geophysical phenomena occur:
The resistivity of the earth materials is modulated by the seismic wave;
Electrokinetic effects analogous to streaming potentials are created by the seismic wave;
Piezoelectric effects are created by the seismic wave; and
Audio-frequency and radio-frequency impulsive responses are generated in sulfide minerals (sometimes referred to as RPE).
The dominant application of the electroseismic method is to measure the electrokinetic effect or streaming potential (item 2, above). Electrokinetic effects are initiated by sound waves (typically P-waves) passing through a porous rock inducing relative motion of the rock matrix and fluid. Motion of the ionic fluid through the capillaries in the rock occurs with cations (or less commonly, anions) preferentially adhering to the capillary walls, so that applied pressure and resulting fluid flow relative to the rock matrix produces an electric dipole. In a non-homogeneous formation, the seismic wave generates an oscillating flow of fluid and a corresponding oscillating electrical and EM field. The resulting EM wave can be detected by electrode pairs placed on the ground surface.
However, P-waves moving through a solid that contains some moisture also generate an electrical phenomenon called coseismic waves. The coseismic waves travel with the P-waves and are not sensitive to the electrical properties of the subsurface. The dipole antenna cannot distinguish the electrokinetic signal from the coseismic signal, so it records both, and the coseismic waves must be removed during processing of the field data in order to interpret the electrokinetic effect.
There is at present no routine field procedure, but in scientific studies an array of several dipole antennas is placed along a straight line to record seismoelectric waves, with an array of geophones placed between the dipole antennas to record seismic wave arrivals. The geophones are necessary for suppressing coseismic waves in the seismoelectric signal, so that the electrokinetic effect can be separated and studied.
Limitations
The electroseismic method is very susceptible to electrical cultural noise and shares the noise sources of the reflection seismic method, including ground roll, multiples, and random noise. The seismoelectrical method also has a very low signal-to-noise ratio, because the electromagnetic field radiated from a converting interface falls off as 1/r^3, theoretically limiting the depth of exploration to about three hundred meters. Typical electroseismic signals are at the microvolt level. The electroseismic signal is proportional to the pressure of the seismic wave, so it is possible to increase the signal by using stronger seismic sources.
The electrokinetic effect is produced by several kinds of contrasts between layers, including contrasts in porosity, electrical potential, viscosity, and fluid saturation. The causes of electrokinetic conversion between layers are still a matter of study, and with present knowledge and technology it is hard to determine, without further data (such as borehole logs or other geophysical data from the location), what produced a given electrokinetic conversion; further studies will have to be carried out before electrokinetic data can be interpreted reliably. Despite this, the electrokinetic effect has a promising future in near-surface and borehole geophysics.
Examples of successful field studies
The propagation of seismic waves in porous rocks is associated with a small transient deformation of rock matrix and pore space which can cause electromagnetic fields of observable amplitude if the pores are saturated. Seismoelectric field measurements are expected to help localize permeable layers in porous rocks and provide information about anelastic properties. This theoretical potential for hydrogeological applications, however, is so far confirmed only by a very limited number of successful field studies. As a consequence, the seismoelectric method is still far from being routinely used.
See also
Seismo-electromagnetics
References
Further reading
A Description of Seismo-electric and Electro-seismic Coupling
Geophysics
Economic geology | Seismoelectrical method | [
"Physics"
] | 1,038 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
18,967,255 | https://en.wikipedia.org/wiki/Mathematical%20economics | Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. Often, these applied methods are beyond simple geometry, and may include differential and integral calculus, difference and differential equations, matrix algebra, mathematical programming, or other computational methods. Proponents of this approach claim that it allows the formulation of theoretical relationships with rigor, generality, and simplicity.
Mathematics allows economists to form meaningful, testable propositions about wide-ranging and complex subjects which could less easily be expressed informally. Further, the language of mathematics allows economists to make specific, positive claims about controversial or contentious subjects that would be impossible without mathematics. Much of economic theory is currently presented in terms of mathematical economic models, a set of stylized and simplified mathematical relationships asserted to clarify assumptions and implications.
Broad applications include:
optimization problems as to goal equilibrium, whether of a household, business firm, or policy maker
static (or equilibrium) analysis in which the economic unit (such as a household) or economic system (such as a market or the economy) is modeled as not changing
comparative statics as to a change from one equilibrium to another induced by a change in one or more factors
dynamic analysis, tracing changes in an economic system over time, for example from economic growth.
Formal economic modeling began in the 19th century with the use of differential calculus to represent and explain economic behavior, such as utility maximization, an early economic application of mathematical optimization. Economics became more mathematical as a discipline throughout the first half of the 20th century, but introduction of new and generalized techniques in the period around the Second World War, as in game theory, would greatly broaden the use of mathematical formulations in economics.
This rapid systematizing of economics alarmed critics of the discipline as well as some noted economists. John Maynard Keynes, Robert Heilbroner, Friedrich Hayek and others have criticized the broad use of mathematical models for human behavior, arguing that some human choices are irreducible to mathematics.
History
The use of mathematics in the service of social and economic analysis dates back to the 17th century. Then, mainly in German universities, a style of instruction emerged which dealt specifically with detailed presentation of data as it related to public administration. Gottfried Achenwall lectured in this fashion, coining the term statistics. At the same time, a small group of professors in England established a method of "reasoning by figures upon things relating to government" and referred to this practice as Political Arithmetick. Sir William Petty wrote at length on issues that would later concern economists, such as taxation, velocity of money and national income, but while his analysis was numerical, he rejected abstract mathematical methodology. Petty's use of detailed numerical data (along with John Graunt) would influence statisticians and economists for some time, even though Petty's works were largely ignored by English scholars.
The mathematization of economics began in earnest in the 19th century. Most of the economic analysis of the time was what would later be called classical economics. Subjects were discussed and dispensed with through algebraic means, but calculus was not used. More importantly, until Johann Heinrich von Thünen's The Isolated State in 1826, economists did not develop explicit and abstract models for behavior in order to apply the tools of mathematics. Thünen's model of farmland use represents the first example of marginal analysis. Thünen's work was largely theoretical, but he also mined empirical data in order to attempt to support his generalizations. In comparison to his contemporaries, Thünen built economic models and tools, rather than applying previous tools to new problems.
Meanwhile, a new cohort of scholars trained in the mathematical methods of the physical sciences gravitated to economics, advocating and applying those methods to their subject, and described today as moving from geometry to mechanics.
These included W.S. Jevons, who presented a paper on a "general mathematical theory of political economy" in 1862, providing an outline for use of the theory of marginal utility in political economy. In 1871, he published The Theory of Political Economy, declaring that the subject as science "must be mathematical simply because it deals with quantities". Jevons expected that only the collection of statistics for prices and quantities would permit the subject as presented to become an exact science. Others preceded and followed in expanding mathematical representations of economic problems.
Marginalists and the roots of neoclassical economics
Augustin Cournot and Léon Walras built the tools of the discipline axiomatically around utility, arguing that individuals sought to maximize their utility across choices in a way that could be described mathematically. At the time, it was thought that utility was quantifiable, in units known as utils. Cournot, Walras and Francis Ysidro Edgeworth are considered the precursors to modern mathematical economics.
Augustin Cournot
Cournot, a professor of mathematics, developed a mathematical treatment in 1838 for duopoly—a market condition defined by competition between two sellers. This treatment of competition, first published in Researches into the Mathematical Principles of Wealth, is referred to as Cournot duopoly. It is assumed that both sellers had equal access to the market and could produce their goods without cost. Further, it assumed that both goods were homogeneous. Each seller would vary her output based on the output of the other and the market price would be determined by the total quantity supplied. The profit for each firm would be determined by multiplying their output by the per unit market price. Differentiating the profit function with respect to quantity supplied for each firm left a system of linear equations, the simultaneous solution of which gave the equilibrium quantity, price and profits. Cournot's contributions to the mathematization of economics would be neglected for decades, but eventually influenced many of the marginalists. Cournot's models of duopoly and oligopoly also represent one of the first formulations of non-cooperative games. Today the solution can be given as a Nash equilibrium but Cournot's work preceded modern game theory by over 100 years.
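An illustrative sketch of Cournot's calculation, under assumptions chosen here for concreteness (a linear inverse demand curve p = a − b(q1 + q2) and zero production cost, matching his frictionless setting): firm i earns profit πi = qi · p, and the first-order conditions

    ∂π1/∂q1 = a − 2b·q1 − b·q2 = 0,   ∂π2/∂q2 = a − b·q1 − 2b·q2 = 0

form the system of linear equations mentioned above. Its simultaneous solution is the symmetric equilibrium q1 = q2 = a/(3b), giving the market price p = a/3 and a profit of a²/(9b) for each seller.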
Léon Walras
While Cournot provided a solution for what would later be called partial equilibrium, Léon Walras attempted to formalize discussion of the economy as a whole through a theory of general competitive equilibrium. The behavior of every economic actor would be considered on both the production and consumption side. Walras originally presented four separate models of exchange, each recursively included in the next. The solution of the resulting system of equations (both linear and non-linear) is the general equilibrium. At the time, no general solution could be expressed for a system of arbitrarily many equations, but Walras's attempts produced two famous results in economics. The first is Walras' law and the second is the principle of tâtonnement. Walras' method was considered highly mathematical for the time and Edgeworth commented at length about this fact in his review of Éléments d'économie politique pure (Elements of Pure Economics).
Walras' law was introduced as a theoretical answer to the problem of determining the solutions in general equilibrium. His notation is different from modern notation but can be constructed using more modern summation notation. Walras assumed that in equilibrium, all money would be spent on all goods: every good would be sold at the market price for that good and every buyer would expend their last dollar on a basket of goods. Starting from this assumption, Walras could then show that if there were n markets and n-1 markets cleared (reached equilibrium conditions) that the nth market would clear as well. This is easiest to visualize with two markets (considered in most texts as a market for goods and a market for money). If one of two markets has reached an equilibrium state, no additional goods (or conversely, money) can enter or exit the second market, so it must be in a state of equilibrium as well. Walras used this statement to move toward a proof of existence of solutions to general equilibrium but it is commonly used today to illustrate market clearing in money markets at the undergraduate level.
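In modern summation notation (assumed here; Walras's own notation differed), the law states that the value of aggregate excess demand is identically zero:

    Σ pi·[Di(p) − Si(p)] = 0,   summing over the n goods,

where Di and Si are the demand and supply of good i at prices p. If the first n − 1 terms vanish because those markets clear, the remaining term pn·[Dn(p) − Sn(p)] must vanish as well, which is Walras's conclusion that the nth market clears.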
Tâtonnement (roughly, French for groping toward) was meant to serve as the practical expression of Walrasian general equilibrium. Walras abstracted the marketplace as an auction of goods where the auctioneer would call out prices and market participants would wait until they could each satisfy their personal reservation prices for the quantity desired (remembering here that this is an auction on all goods, so everyone has a reservation price for their desired basket of goods).
Only when all buyers are satisfied with the given market price would transactions occur. The market would "clear" at that price—no surplus or shortage would exist. The word tâtonnement is used to describe the directions the market takes in groping toward equilibrium, settling high or low prices on different goods until a price is agreed upon for all goods. While the process appears dynamic, Walras only presented a static model, as no transactions would occur until all markets were in equilibrium. In practice, very few markets operate in this manner.
Francis Ysidro Edgeworth
Edgeworth introduced mathematical elements to Economics explicitly in Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences, published in 1881. He adopted Jeremy Bentham's felicific calculus to economic behavior, allowing the outcome of each decision to be converted into a change in utility. Using this assumption, Edgeworth built a model of exchange on three assumptions: individuals are self-interested, individuals act to maximize utility, and individuals are "free to recontract with another independently of...any third party".
Given two individuals, the set of solutions where both individuals can maximize utility is described by the contract curve on what is now known as an Edgeworth Box. Technically, the construction of the two-person solution to Edgeworth's problem was not developed graphically until 1924 by Arthur Lyon Bowley. The contract curve of the Edgeworth box (or more generally on any set of solutions to Edgeworth's problem for more actors) is referred to as the core of an economy.
Edgeworth devoted considerable effort to insisting that mathematical proofs were appropriate for all schools of thought in economics. While at the helm of The Economic Journal, he published several articles criticizing the mathematical rigor of rival researchers, including Edwin Robert Anderson Seligman, a noted skeptic of mathematical economics. The articles focused on a back and forth over tax incidence and responses by producers. Edgeworth noticed that a monopoly producing a good that had jointness of supply but not jointness of demand (such as first class and economy on an airplane, if the plane flies, both sets of seats fly with it) might actually lower the price seen by the consumer for one of the two commodities if a tax were applied. Common sense and more traditional, numerical analysis seemed to indicate that this was preposterous. Seligman insisted that the results Edgeworth achieved were a quirk of his mathematical formulation. He suggested that the assumption of a continuous demand function and an infinitesimal change in the tax resulted in the paradoxical predictions. Harold Hotelling later showed that Edgeworth was correct and that the same result (a "diminution of price as a result of the tax") could occur with a discontinuous demand function and large changes in the tax rate.
Modern mathematical economics
From the later-1930s, an array of new mathematical tools from differential calculus and differential equations, convex sets, and graph theory were deployed to advance economic theory in a way similar to new mathematical methods earlier applied to physics. The process was later described as moving from mechanics to axiomatics.
Differential calculus
Vilfredo Pareto analyzed microeconomics by treating decisions by economic actors as attempts to change a given allotment of goods to another, more preferred allotment. Sets of allocations could then be treated as Pareto efficient (Pareto optimal is an equivalent term) when no exchanges could occur between actors that could make at least one individual better off without making any other individual worse off. Pareto's proof is commonly conflated with Walrasian equilibrium or informally ascribed to Adam Smith's Invisible hand hypothesis. Rather, Pareto's statement was the first formal assertion of what would be known as the first fundamental theorem of welfare economics. These models lacked the inequalities of the next generation of mathematical economics.
In the landmark treatise Foundations of Economic Analysis (1947), Paul Samuelson identified a common paradigm and mathematical structure across multiple fields in the subject, building on previous work by Alfred Marshall. Foundations took mathematical concepts from physics and applied them to economic problems. This broad view (for example, comparing Le Chatelier's principle to tâtonnement) drives the fundamental premise of mathematical economics: systems of economic actors may be modeled and their behavior described much like any other system. This extension followed on the work of the marginalists in the previous century and extended it significantly. Samuelson approached the problems of applying individual utility maximization over aggregate groups with comparative statics, which compares two different equilibrium states after an exogenous change in a variable. This and other methods in the book provided the foundation for mathematical economics in the 20th century.
Linear models
Restricted models of general equilibrium were formulated by John von Neumann in 1937. Unlike earlier versions, the models of von Neumann had inequality constraints. For his model of an expanding economy, von Neumann proved the existence and uniqueness of an equilibrium using his generalization of Brouwer's fixed point theorem. Von Neumann's model of an expanding economy considered the matrix pencil A - λ B with nonnegative matrices A and B; von Neumann sought probability vectors p and q and a positive number λ that would solve the complementarity equation
p^T (A − λ B) q = 0,
along with two inequality systems expressing economic efficiency. In this model, the (transposed) probability vector p represents the prices of the goods while the probability vector q represents the "intensity" at which the production process would run. The unique solution λ represents the rate of growth of the economy, which equals the interest rate. Proving the existence of a positive growth rate and proving that the growth rate equals the interest rate were remarkable achievements, even for von Neumann. Von Neumann's results have been viewed as a special case of linear programming, where von Neumann's model uses only nonnegative matrices. The study of von Neumann's model of an expanding economy continues to interest mathematical economists with interests in computational economics.
Input-output economics
In 1936, the Russian–born economist Wassily Leontief built his model of input-output analysis from the 'material balance' tables constructed by Soviet economists, which themselves followed earlier work by the physiocrats. With his model, which described a system of production and demand processes, Leontief described how changes in demand in one economic sector would influence production in another. In practice, Leontief estimated the coefficients of his simple models, to address economically interesting questions. In production economics, "Leontief technologies" produce outputs using constant proportions of inputs, regardless of the price of inputs, reducing the value of Leontief models for understanding economies but allowing their parameters to be estimated relatively easily. In contrast, the von Neumann model of an expanding economy allows for choice of techniques, but the coefficients must be estimated for each technology.
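A minimal computational sketch of this mechanism (the two-sector coefficient matrix and demand figures below are hypothetical, chosen only for illustration): if A holds the input-output coefficients and d the final demand, the gross outputs x must satisfy x = Ax + d, so x = (I − A)^(−1) d.

    import numpy as np

    # Hypothetical 2-sector technology: A[i, j] is the amount of good i
    # used up to produce one unit of good j (illustrative values only).
    A = np.array([[0.2, 0.3],
                  [0.4, 0.1]])
    d = np.array([100.0, 50.0])  # final demand for each sector's output

    # Gross output must cover intermediate use plus final demand:
    # x = A x + d, i.e. (I - A) x = d.
    x = np.linalg.solve(np.eye(2) - A, d)
    print(x)  # outputs each sector must produce, here [175., 133.33...]

A change in the demand vector d can then be traced through the same system, which is exactly the question about cross-sector influence that Leontief's model was built to answer.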
Mathematical optimization
In mathematics, mathematical optimization (or optimization or mathematical programming) refers to the selection of a best element from some set of available alternatives. In the simplest case, an optimization problem involves maximizing or minimizing a real function by selecting input values of the function and computing the corresponding values of the function. The solution process includes satisfying general necessary and sufficient conditions for optimality. For optimization problems, specialized notation may be used as to the function and its input(s). More generally, optimization includes finding the best available element of some function given a defined domain and may use a variety of different computational optimization techniques.
Economics is closely enough linked to optimization by agents in an economy that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" with alternative uses. Optimization problems run through modern economics, many with explicit economic or technical constraints. In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem for a given level of utility, are economic optimization problems. Theory posits that consumers maximize their utility, subject to their budget constraints and that firms maximize their profits, subject to their production functions, input costs, and market demand.
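A standard textbook instance of the utility maximization problem (the Cobb–Douglas functional form is an assumption chosen here for illustration): maximize u(x, y) = x^α · y^(1−α) subject to the budget constraint px·x + py·y = m. The Lagrangian first-order conditions give the demand functions

    x* = α·m/px,   y* = (1 − α)·m/py,

so the consumer spends the fixed budget shares α and 1 − α on the two goods; the dual expenditure minimization problem recovers the same bundle at the minimal income needed for that utility level.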
Economic equilibrium is studied in optimization theory as a key ingredient of economic theorems that in principle could be tested against empirical data. Newer developments have occurred in dynamic programming and modeling optimization with risk and uncertainty, including applications to portfolio theory, the economics of information, and search theory.
Optimality properties for an entire market system may be stated in mathematical terms, as in formulation of the two fundamental theorems of welfare economics and in the Arrow–Debreu model of general equilibrium (also discussed below). More concretely, many problems are amenable to analytical (formulaic) solution. Many others may be sufficiently complex to require numerical methods of solution, aided by software. Still others are complex but tractable enough to allow computable methods of solution, in particular computable general equilibrium models for the entire economy.
Linear and nonlinear programming have profoundly affected microeconomics, which had earlier considered only equality constraints. Many of the mathematical economists who received Nobel Prizes in Economics had conducted notable research using linear programming: Leonid Kantorovich, Leonid Hurwicz, Tjalling Koopmans, Kenneth J. Arrow, Robert Dorfman, Paul Samuelson and Robert Solow. Both Kantorovich and Koopmans acknowledged that George B. Dantzig deserved to share their Nobel Prize for linear programming. Economists who conducted research in nonlinear programming also have won the Nobel prize, notably Ragnar Frisch in addition to Kantorovich, Hurwicz, Koopmans, Arrow, and Samuelson.
Linear optimization
Linear programming was developed to aid the allocation of resources in firms and in industries during the 1930s in Russia and during the 1940s in the United States. During the Berlin airlift (1948), linear programming was used to plan the shipment of supplies to prevent Berlin from starving after the Soviet blockade.
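A minimal sketch of such an allocation problem in code (the profit coefficients and resource limits below are hypothetical, and since scipy's linprog minimizes, the profit vector is negated):

    import numpy as np
    from scipy.optimize import linprog

    # Maximize profit 3*x1 + 5*x2 subject to hypothetical resource limits:
    #   x1          <= 4    (capacity of plant 1)
    #         2*x2  <= 12   (capacity of plant 2)
    #   3*x1 + 2*x2 <= 18   (shared labor hours)
    c = np.array([-3.0, -5.0])  # negated: linprog minimizes c @ x
    A_ub = np.array([[1.0, 0.0],
                     [0.0, 2.0],
                     [3.0, 2.0]])
    b_ub = np.array([4.0, 12.0, 18.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)  # optimal plan (2, 6) with profit 36

The same structure, scaled up to many more variables and constraints, is the kind of problem the wartime and airlift-era planners faced.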
Nonlinear programming
Extensions to nonlinear optimization with inequality constraints were achieved in 1951 by Albert W. Tucker and Harold Kuhn, who considered the nonlinear optimization problem:
Minimize f(x) subject to gi(x) ≤ 0 and hj(x) = 0, where
f(x) is the function to be minimized,
gi(x) are the functions of the inequality constraints, where i = 1, ..., m, and
hj(x) are the functions of the equality constraints, where j = 1, ..., l.
In allowing inequality constraints, the Kuhn–Tucker approach generalized the classic method of Lagrange multipliers, which (until then) had allowed only equality constraints. The Kuhn–Tucker approach inspired further research on Lagrangian duality, including the treatment of inequality constraints. The duality theory of nonlinear programming is particularly satisfactory when applied to convex minimization problems, which enjoy the convex-analytic duality theory of Fenchel and Rockafellar; this convex duality is particularly strong for polyhedral convex functions, such as those arising in linear programming. Lagrangian duality and convex analysis are used daily in operations research, in the scheduling of power plants, the planning of production schedules for factories, and the routing of airlines (routes, flights, planes, crews).
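For reference, the resulting first-order (Karush–Kuhn–Tucker) conditions for the problem above can be written, with multipliers μi ≥ 0 on the inequality constraints and λj on the equality constraints (notation assumed here):

    ∇f(x*) + Σ μi·∇gi(x*) + Σ λj·∇hj(x*) = 0,
    gi(x*) ≤ 0,   hj(x*) = 0,   μi ≥ 0,   μi·gi(x*) = 0 for all i.

The last condition, complementary slackness, says a multiplier can be positive only when its constraint binds; dropping the inequality constraints reduces the system to the classical Lagrange conditions.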
Variational calculus and optimal control
Economic dynamics allows for changes in economic variables over time, including in dynamic systems. The problem of finding optimal functions for such changes is studied in variational calculus and in optimal control theory. Before the Second World War, Frank Ramsey and Harold Hotelling used the calculus of variations to that end.
Following Richard Bellman's work on dynamic programming and the 1962 English translation of L. Pontryagin et al.'s earlier work, optimal control theory was used more extensively in economics in addressing dynamic problems, especially as to economic growth equilibrium and stability of economic systems, of which a textbook example is optimal consumption and saving. A crucial distinction is between deterministic and stochastic control models. Other applications of optimal control theory include those in finance, inventories, and production for example.
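The optimal consumption and saving problem mentioned above is commonly summarized by a Bellman equation; a minimal deterministic sketch (the utility function u, discount factor β, and production function f are notation assumed here) is

    V(k) = max over c of { u(c) + β·V(f(k) − c) },

where k is the capital stock, c is consumption, and V(k) is the maximal discounted utility attainable from state k. The first-order and envelope conditions then deliver the Euler equation u′(c(t)) = β·u′(c(t+1))·f′(k(t+1)), the workhorse optimality condition of growth theory.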
Functional analysis
It was in the course of proving of the existence of an optimal equilibrium in his 1937 model of economic growth that John von Neumann introduced functional analytic methods to include topology in economic theory, in particular, fixed-point theory through his generalization of Brouwer's fixed-point theorem. Following von Neumann's program, Kenneth Arrow and Gérard Debreu formulated abstract models of economic equilibria using convex sets and fixed–point theory. In introducing the Arrow–Debreu model in 1954, they proved the existence (but not the uniqueness) of an equilibrium and also proved that every Walras equilibrium is Pareto efficient; in general, equilibria need not be unique. In their models, the ("primal") vector space represented quantities while the "dual" vector space represented prices.
In Russia, the mathematician Leonid Kantorovich developed economic models in partially ordered vector spaces, that emphasized the duality between quantities and prices. Kantorovich renamed prices as "objectively determined valuations" which were abbreviated in Russian as "o. o. o.", alluding to the difficulty of discussing prices in the Soviet Union.
Even in finite dimensions, the concepts of functional analysis have illuminated economic theory, particularly in clarifying the role of prices as normal vectors to a hyperplane supporting a convex set, representing production or consumption possibilities. However, problems of describing optimization over time or under uncertainty require the use of infinite–dimensional function spaces, because agents are choosing among functions or stochastic processes.
Differential decline and rise
John von Neumann's work on functional analysis and topology broke new ground in mathematics and economic theory. It also left advanced mathematical economics with fewer applications of differential calculus. In particular, general equilibrium theorists used general topology, convex geometry, and optimization theory more than differential calculus, because the approach of differential calculus had failed to establish the existence of an equilibrium.
However, the decline of differential calculus should not be exaggerated, because differential calculus has always been used in graduate training and in applications. Moreover, differential calculus has returned to the highest levels of mathematical economics, general equilibrium theory (GET), as practiced by the "GET-set" (the humorous designation due to Jacques H. Drèze). In the 1960s and 1970s, however, Gérard Debreu and Stephen Smale led a revival of the use of differential calculus in mathematical economics. In particular, they were able to prove the existence of a general equilibrium, where earlier writers had failed, because of their novel mathematics: Baire category from general topology and Sard's lemma from differential topology. Other economists associated with the use of differential analysis include Egbert Dierker, Andreu Mas-Colell, and Yves Balasko. These advances have changed the traditional narrative of the history of mathematical economics, following von Neumann, which celebrated the abandonment of differential calculus.
Game theory
John von Neumann, working with Oskar Morgenstern on the theory of games, broke new mathematical ground in 1944 by extending functional analytic methods related to convex sets and topological fixed-point theory to economic analysis. Their work thereby avoided the traditional differential calculus, for which the maximum–operator did not apply to non-differentiable functions. Continuing von Neumann's work in cooperative game theory, game theorists Lloyd S. Shapley, Martin Shubik, Hervé Moulin, Nimrod Megiddo, and Bezalel Peleg influenced economic research in politics and economics. For example, research on fair prices in cooperative games and fair values for voting games led to changed rules for voting in legislatures and for accounting for the costs in public–works projects. Likewise, cooperative game theory was used in designing the water distribution system of Southern Sweden and for setting rates for dedicated telephone lines in the US.
Earlier neoclassical theory had bounded only the range of bargaining outcomes and in special cases, for example bilateral monopoly or along the contract curve of the Edgeworth box. Von Neumann and Morgenstern's results were similarly weak. Following von Neumann's program, however, John Nash used fixed–point theory to prove conditions under which the bargaining problem and noncooperative games can generate a unique equilibrium solution. Noncooperative game theory has been adopted as a fundamental aspect of experimental economics, behavioral economics, information economics, industrial organization, and political economy. It has also given rise to the subject of mechanism design (sometimes called reverse game theory), which has private and public-policy applications as to ways of improving economic efficiency through incentives for information sharing.
In 1994, Nash, John Harsanyi, and Reinhard Selten received the Nobel Memorial Prize in Economic Sciences for their work on non-cooperative games. Harsanyi and Selten were awarded for their work on repeated games. Later work extended their results to computational methods of modeling.
Agent-based computational economics
Agent-based computational economics (ACE) as a named field is relatively recent, dating from about the 1990s as to published work. It studies economic processes, including whole economies, as dynamic systems of interacting agents over time. As such, it falls in the paradigm of complex adaptive systems. In corresponding agent-based models, agents are not real people but "computational objects modeled as interacting according to rules" ... "whose micro-level interactions create emergent patterns" in space and time. The rules are formulated to predict behavior and social interactions based on incentives and information. The theoretical assumption of mathematical optimization by agents in markets is replaced by the less restrictive postulate of agents with bounded rationality adapting to market forces.
ACE models apply numerical methods of analysis to computer-based simulations of complex dynamic problems for which more conventional methods, such as theorem formulation, may not find ready use. Starting from specified initial conditions, the computational economic system is modeled as evolving over time as its constituent agents repeatedly interact with each other. In these respects, ACE has been characterized as a bottom-up culture-dish approach to the study of the economy. In contrast to other standard modeling methods, ACE events are driven solely by initial conditions, whether or not equilibria exist or are computationally tractable. ACE modeling, however, includes agent adaptation, autonomy, and learning. It has a similarity to, and overlap with, game theory as an agent-based method for modeling social interactions. Other dimensions of the approach include such standard economic subjects as competition and collaboration, market structure and industrial organization, transaction costs, welfare economics and mechanism design, information and uncertainty, and macroeconomics.
The method is said to benefit from continuing improvements in modeling techniques of computer science and increased computer capabilities. Issues include those common to experimental economics in general and by comparison and to development of a common framework for empirical validation and resolving open questions in agent-based modeling. The ultimate scientific objective of the method has been described as "test[ing] theoretical findings against real-world data in ways that permit empirically supported theories to cumulate over time, with each researcher's work building appropriately on the work that has gone before".
Mathematicization of economics
Over the course of the 20th century, articles in "core journals" in economics have been almost exclusively written by economists in academia. As a result, much of the material transmitted in those journals relates to economic theory, and "economic theory itself has been continuously more abstract and mathematical." A subjective assessment of mathematical techniques employed in these core journals showed a decrease in articles that use neither geometric representations nor mathematical notation from 95% in 1892 to 5.3% in 1990. A 2007 survey of ten of the top economic journals finds that only 5.8% of the articles published in 2003 and 2004 both lacked statistical analysis of data and lacked displayed mathematical expressions that were indexed with numbers at the margin of the page.
Econometrics
Between the world wars, advances in mathematical statistics and a cadre of mathematically trained economists led to econometrics, which was the name proposed for the discipline of advancing economics by using mathematics and statistics. Within economics, "econometrics" has often been used for statistical methods in economics, rather than mathematical economics. Statistical econometrics features the application of linear regression and time series analysis to economic data.
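A minimal sketch of the regression step on synthetic data (all numbers below are hypothetical, generated for illustration), estimating a simple consumption function by ordinary least squares:

    import numpy as np

    rng = np.random.default_rng(0)
    income = rng.uniform(20, 100, size=200)  # hypothetical regressor
    consumption = 5.0 + 0.8 * income + rng.normal(0, 4, size=200)

    # OLS: stack a constant column and solve the least-squares problem
    # for (intercept, marginal propensity to consume).
    X = np.column_stack([np.ones_like(income), income])
    beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
    print(beta)  # estimates close to the true values (5.0, 0.8)

Econometric practice adds to this fitting step formal tests of whether the estimated coefficients are consistent with the underlying economic theory.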
Ragnar Frisch coined the word "econometrics" and helped to found both the Econometric Society in 1930 and the journal Econometrica in 1933. A student of Frisch's, Trygve Haavelmo, published The Probability Approach in Econometrics in 1944, where he asserted that precise statistical analysis could be used as a tool to validate mathematical theories about economic actors with data from complex sources. This linking of statistical analysis of systems to economic theory was also promulgated by the Cowles Commission (now the Cowles Foundation) throughout the 1930s and 1940s.
The roots of modern econometrics can be traced to the American economist Henry L. Moore. Moore studied agricultural productivity and attempted to fit changing values of productivity for plots of corn and other crops to a curve using different values of elasticity. Moore made several errors in his work, some from his choice of models and some from limitations in his use of mathematics. The accuracy of Moore's models also was limited by the poor data for national accounts in the United States at the time. While his first models of production were static, in 1925 he published a dynamic "moving equilibrium" model designed to explain business cycles—this periodic variation from over-correction in supply and demand curves is now known as the cobweb model. A more formal derivation of this model was made later by Nicholas Kaldor, who is largely credited for its exposition.
Application
Much of classical economics can be presented in simple geometric terms or elementary mathematical notation. Mathematical economics, however, conventionally makes use of calculus and matrix algebra in economic analysis in order to make powerful claims that would be more difficult without such mathematical tools. These tools are prerequisites for formal study, not only in mathematical economics but in contemporary economic theory in general. Economic problems often involve so many variables that mathematics is the only practical way of attacking and solving them. Alfred Marshall argued that every economic problem which can be quantified, analytically expressed and solved, should be treated by means of mathematical work.
Economics has become increasingly dependent upon mathematical methods and the mathematical tools it employs have become more sophisticated. As a result, mathematics has become considerably more important to professionals in economics and finance. Graduate programs in both economics and finance require strong undergraduate preparation in mathematics for admission and, for this reason, attract an increasingly high number of mathematicians. Applied mathematicians apply mathematical principles to practical problems, such as economic analysis and other economics-related issues, and many economic problems are often defined as integrated into the scope of applied mathematics.
This integration results from the formulation of economic problems as stylized models with clear assumptions and falsifiable predictions. This modeling may be informal or prosaic, as it was in Adam Smith's The Wealth of Nations, or it may be formal, rigorous and mathematical.
Broadly speaking, formal economic models may be classified as stochastic or deterministic and as discrete or continuous. At a practical level, quantitative modeling is applied to many areas of economics and several methodologies have evolved more or less independently of each other.
Stochastic models are formulated using stochastic processes. They model economically observable values over time. Most of econometrics is based on statistics to formulate and test hypotheses about these processes or estimate parameters for them. Between the World Wars, Herman Wold developed a representation of stationary stochastic processes in terms of autoregressive models and a deterministic trend. Wold and Jan Tinbergen applied time-series analysis to economic data. Contemporary research on time series statistics considers additional formulations of stationary processes, such as autoregressive moving average models. More general models include autoregressive conditional heteroskedasticity (ARCH) models and generalized ARCH (GARCH) models.
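A minimal sketch of the autoregressive representation just described: simulate an AR(1) disturbance around a deterministic trend, then recover the autoregressive parameter by least squares. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n, phi, trend = 500, 0.7, 0.05

# Simulate y_t = trend*t + u_t, with AR(1) disturbances u_t = phi*u_{t-1} + e_t.
u = np.zeros(n)
for t in range(1, n):
    u[t] = phi * u[t - 1] + rng.normal()
y = trend * np.arange(n) + u

# Detrend by OLS, then estimate phi by regressing u_t on u_{t-1}.
t_idx = np.arange(n)
X = np.column_stack([np.ones(n), t_idx])
u_hat = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
phi_hat = (u_hat[1:] @ u_hat[:-1]) / (u_hat[:-1] @ u_hat[:-1])
print(f"true phi = {phi}, estimated phi = {phi_hat:.3f}")
```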
Non-stochastic mathematical models may be purely qualitative (for example, models involved in some aspect of social choice theory) or quantitative (involving rationalization of financial variables, for example with hyperbolic coordinates, and/or specific forms of functional relationships between variables). In some cases economic predictions of a model merely assert the direction of movement of economic variables, and so the functional relationships are used only in a qualitative sense: for example, if the price of an item increases, then the demand for that item will decrease. For such models, economists often use two-dimensional graphs instead of functions.
Qualitative models are occasionally used. One example is qualitative scenario planning in which possible future events are played out. Another example is non-numerical decision tree analysis. Qualitative models often suffer from lack of precision.
Example: The effect of a corporate tax cut on wages
The great appeal of mathematical economics is that it brings a degree of rigor to economic thinking, particularly around charged political topics. For example, during the discussion of the efficacy of a corporate tax cut for increasing the wages of workers, a simple mathematical model proved beneficial to understanding the issues at hand.
As an intellectual exercise, the following problem was posed by Prof. Greg Mankiw of Harvard University:

An open economy has the production function $y = f(k)$, where $y$ is output per worker and $k$ is capital per worker. The capital stock adjusts so that the after-tax marginal product of capital equals the exogenously given world interest rate $r$ ... How much will the tax cut increase wages?

To answer this question, we follow John H. Cochrane of the Hoover Institution. Suppose an open economy has the production function:

$$Y = F(K, L).$$

Where the variables in this equation are:
$Y$ is the total output
$F(K, L)$ is the production function
$K$ is the total capital stock
$L$ is the total labor stock
The standard choice for the production function is the Cobb-Douglas production function:

$$Y = A K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1,$$

where $A$ is the factor of productivity, assumed to be a constant. A corporate tax cut in this model is equivalent to a tax on capital. With taxes, firms look to maximize:

$$\max_{K, L}\; (1-\tau)\left[F(K, L) - wL\right] - rK,$$

where $\tau$ is the capital tax rate, $w$ is wages per worker, and $r$ is the exogenous interest rate. Then the first-order optimality conditions become:

$$(1-\tau) F_K(K, L) = r, \qquad F_L(K, L) = w.$$

Therefore, writing $k = K/L$ and $f(k) = F(k, 1)$, the optimality conditions imply that:

$$r = (1-\tau) f'(k), \qquad w = f(k) - k f'(k).$$

Define total taxes $X = \tau\left[F(K, L) - wL\right]$. This implies that taxes per worker are:

$$x = \tau\left[f(k) - w\right] = \tau\, k f'(k).$$

Then the change in taxes per worker, given the tax rate, is:

$$\frac{dx}{d\tau} = k f'(k) + \tau\left[f'(k) + k f''(k)\right]\frac{dk}{d\tau}.$$

To find the change in wages, we differentiate the second optimality condition for the per-worker wages to obtain:

$$\frac{dw}{d\tau} = -k f''(k)\,\frac{dk}{d\tau}.$$

Assuming that the interest rate is fixed at $r$, so that $dr/d\tau = 0$, we may differentiate the first optimality condition for the interest rate to find:

$$\frac{dk}{d\tau} = \frac{f'(k)}{(1-\tau) f''(k)}.$$

For the moment, let us focus only on the static effect of a capital tax cut, so that $dx/d\tau = k f'(k)$. If we substitute the expression for $dk/d\tau$ into the equation for wage changes with respect to the tax rate, then we find that:

$$\frac{dw}{d\tau} = -\frac{k f'(k)}{1-\tau}.$$

Therefore, the static effect of a capital tax cut on wages is:

$$\frac{dw}{dx} = -\frac{1}{1-\tau}.$$

Based on the model, it seems possible that we may achieve a rise in the wage of a worker greater than the amount of the tax cut. But that only considers the static effect, and we know that the dynamic effect must be accounted for. In the dynamic model, we may rewrite the equation for changes in taxes per worker with respect to the tax rate as:

$$\frac{dx}{d\tau} = k f'(k) + \tau\left[f'(k) + k f''(k)\right]\frac{dk}{d\tau}.$$

Recalling that $dk/d\tau = f'(k)/\left[(1-\tau) f''(k)\right]$, we have that:

$$\frac{dx}{d\tau} = k f'(k) + \frac{\tau f'(k)\left[f'(k) + k f''(k)\right]}{(1-\tau) f''(k)}.$$

Using the Cobb-Douglas production function, for which $f'(k) + k f''(k) = \alpha f'(k)$ and $k f''(k) = -(1-\alpha) f'(k)$, we have that:

$$\frac{dx}{d\tau} = k f'(k)\left[1 - \frac{\tau}{1-\tau}\,\frac{\alpha}{1-\alpha}\right].$$

Therefore, the dynamic effect of a capital tax cut on wages is:

$$\frac{dw}{dx} = \frac{dw/d\tau}{dx/d\tau} = -\frac{1}{1-\tau}\cdot\frac{1}{1 - \frac{\tau}{1-\tau}\,\frac{\alpha}{1-\alpha}}.$$

If we take $0 < \tau < 1-\alpha$, so that the denominator stays positive, then the dynamic effect of lowering capital taxes on wages will be even larger than the static effect. Moreover, if there are positive externalities to capital accumulation, the effect of the tax cut on wages would be larger than in the model we just derived. It is important to note that the result is a combination of:
The standard result that in a small open economy labor bears 100% of a small capital income tax
The fact that, starting at a positive tax rate, the burden of a tax increase exceeds revenue collection due to the first-order deadweight loss
This result showing that, under certain assumptions, a corporate tax cut can boost the wages of workers by more than the lost revenue does not imply that the magnitude is correct. Rather, it suggests a basis for policy analysis that is not grounded in handwaving. If the assumptions are reasonable, then the model is an acceptable approximation of reality; if they are not, then better models should be developed.
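The algebra above is straightforward to verify numerically. The sketch below computes the Cobb-Douglas equilibrium directly, estimates $dw/dx$ by a finite difference, and compares it with the closed-form static and dynamic expressions; the parameter values are illustrative.

```python
import numpy as np

A, alpha, r = 1.0, 0.3, 0.05

f = lambda k: A * k**alpha
fp = lambda k: alpha * A * k**(alpha - 1)

def equilibrium(tau):
    # Capital adjusts until the after-tax marginal product equals r:
    # (1 - tau) f'(k) = r  =>  k = ((1 - tau) alpha A / r)^(1/(1 - alpha))
    k = ((1 - tau) * alpha * A / r) ** (1 / (1 - alpha))
    w = f(k) - k * fp(k)   # wage = marginal product of labor
    x = tau * k * fp(k)    # tax revenue per worker
    return k, w, x

tau, h = 0.3, 1e-6
_, w0, x0 = equilibrium(tau)
_, w1, x1 = equilibrium(tau + h)

dw_dx_dynamic = (w1 - w0) / (x1 - x0)
predicted = -(1 / (1 - tau)) / (1 - (tau / (1 - tau)) * (alpha / (1 - alpha)))
print(f"finite-difference dw/dx  = {dw_dx_dynamic:.4f}")
print(f"closed-form dynamic dw/dx = {predicted:.4f}")
print(f"static effect -1/(1-tau)  = {-1 / (1 - tau):.4f}")
```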
CES production function
Now let us assume that instead of the Cobb-Douglas production function we have a more general constant elasticity of substitution (CES) production function:

$$f(k) = A\left[\alpha k^{\rho} + (1-\alpha)\right]^{1/\rho},$$

where $\rho = \frac{\sigma - 1}{\sigma}$; $\sigma$ is the elasticity of substitution between capital and labor. The relevant quantity we want to calculate is $k f''(k)/f'(k)$, which may be derived as:

$$\frac{k f''(k)}{f'(k)} = -\frac{1 - \alpha_K}{\sigma}, \qquad \alpha_K \equiv \frac{\alpha k^{\rho}}{\alpha k^{\rho} + (1-\alpha)} = \frac{k f'(k)}{f(k)},$$

where $\alpha_K$ is the capital share of output. Therefore, we may use this to find that:

$$\frac{f'(k) + k f''(k)}{k f''(k)} = 1 - \frac{\sigma}{1 - \alpha_K}.$$

Therefore, under a general CES model, the dynamic effect of a capital tax cut on wages is:

$$\frac{dw}{dx} = -\frac{1}{1-\tau}\cdot\frac{1}{1 + \frac{\tau}{1-\tau}\left(1 - \frac{\sigma}{1-\alpha_K}\right)}.$$

We recover the Cobb-Douglas solution when $\sigma = 1$. When $\sigma \to \infty$, which is the case when perfect substitutes exist, we find that $\frac{dw}{dx} = 0$ - there is no effect of changes in capital taxes on wages. And when $\sigma \to 0$, which is the case when perfect complements exist, we find that $\frac{dw}{dx} = -1$ - a cut in capital taxes increases wages by exactly one dollar.
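A quick numerical sweep over $\sigma$ confirms the three limiting cases of the CES formula just derived; the parameter values ($\tau = 0.3$, capital share $\alpha_K = 0.3$) are illustrative.

```python
def dw_dx(sigma, tau=0.3, alpha_k=0.3):
    # Dynamic effect under CES:
    # dw/dx = -(1/(1-tau)) / (1 + (tau/(1-tau)) * (1 - sigma/(1 - alpha_k)))
    denom = 1 + (tau / (1 - tau)) * (1 - sigma / (1 - alpha_k))
    return -(1 / (1 - tau)) / denom

for sigma in [1e-6, 0.5, 1.0, 2.0, 1e6]:
    print(f"sigma = {sigma:>8g}:  dw/dx = {dw_dx(sigma):+.4f}")
# sigma -> 0 gives -1 (perfect complements); sigma = 1 matches the
# Cobb-Douglas value; sigma -> infinity gives 0 (perfect substitutes).
```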
Criticisms and defences
Adequacy of mathematics for qualitative and complicated economics
The Austrian school — while making many of the same normative economic arguments as mainstream economists from marginalist traditions, such as the Chicago school — differs methodologically from mainstream neoclassical schools of economics, in particular in their sharp critiques of the mathematization of economics. Friedrich Hayek contended that the use of formal techniques projects a scientific exactness that does not appropriately account for informational limitations faced by real economic agents.
In an interview in 1999, the economic historian Robert Heilbroner stated that "some/much of economics is not naturally quantitative and therefore does not lend itself to mathematical exposition."
Testing predictions of mathematical economics
Philosopher Karl Popper discussed the scientific standing of economics in the 1940s and 1950s. He argued that mathematical economics suffered from being tautological. In other words, insofar as economics became a mathematical theory, mathematical economics ceased to rely on empirical refutation but rather relied on mathematical proofs and disproof. According to Popper, falsifiable assumptions can be tested by experiment and observation while unfalsifiable assumptions can be explored mathematically for their consequences and for their consistency with other assumptions.
Sharing Popper's concerns about assumptions in economics generally, and not just mathematical economics, Milton Friedman declared that "all assumptions are unrealistic". Friedman proposed judging economic models by their predictive performance rather than by the match between their assumptions and reality.
Mathematical economics as a form of pure mathematics
Considering mathematical economics, J. M. Keynes wrote in The General Theory that symbolic pseudo-mathematical methods of formalising economic analysis expressly assume strict independence between the factors involved, and allow the author to "lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols".
Defense of mathematical economics
In response to these criticisms, Paul Samuelson argued that mathematics is a language, repeating a thesis of Josiah Willard Gibbs. In economics, the language of mathematics is sometimes necessary for representing substantive problems. Moreover, mathematical economics has led to conceptual advances in economics. In particular, Samuelson gave the example of microeconomics, writing that "few people are ingenious enough to grasp [its] more complex parts... without resorting to the language of mathematics, while most ordinary individuals can do so fairly easily with the aid of mathematics."
Some economists state that mathematical economics deserves support just like other forms of mathematics, particularly its neighbors in mathematical optimization and mathematical statistics and increasingly in theoretical computer science. Mathematical economics and other mathematical sciences have a history in which theoretical advances have regularly contributed to the reform of the more applied branches of economics. In particular, following the program of John von Neumann, game theory now provides the foundations for describing much of applied economics, from statistical decision theory (as "games against nature") and econometrics to general equilibrium theory and industrial organization. In the last decade, with the rise of the internet, mathematical economists, optimization experts and computer scientists have worked on problems of pricing for online services; their contributions use mathematics from cooperative game theory, nondifferentiable optimization, and combinatorial games.
Robert M. Solow concluded that mathematical economics was the core "infrastructure" of contemporary economics:
Economics is no longer a fit conversation piece for ladies and gentlemen. It has become a technical subject. Like any technical subject it attracts some people who are more interested in the technique than the subject. That is too bad, but it may be inevitable. In any case, do not kid yourself: the technical core of economics is indispensable infrastructure for the political economy. That is why, if you consult [a reference in contemporary economics] looking for enlightenment about the world today, you will be led to technical economics, or history, or nothing at all.
Mathematical economists
Prominent mathematical economists include the following.
19th century
Enrico Barone
Antoine Augustin Cournot
Francis Ysidro Edgeworth
Irving Fisher
William Stanley Jevons
Vilfredo Pareto
Léon Walras
20th century
Charalambos D. Aliprantis
R. G. D. Allen
Maurice Allais
Kenneth J. Arrow
Robert J. Aumann
Yves Balasko
David Blackwell
Lawrence E. Blume
Graciela Chichilnisky
George B. Dantzig
Gérard Debreu
Mario Draghi
Jacques H. Drèze
David Gale
Nicholas Georgescu-Roegen
Roger Guesnerie
Frank Hahn
John C. Harsanyi
John R. Hicks
Werner Hildenbrand
Harold Hotelling
Leonid Hurwicz
Leonid Kantorovich
Tjalling Koopmans
David M. Kreps
Harold W. Kuhn
Edmond Malinvaud
Andreu Mas-Colell
Eric Maskin
Nimrod Megiddo
Jean-François Mertens
James Mirrlees
Roger Myerson
John Forbes Nash, Jr.
John von Neumann
Vladimir Pokrovskii
Edward C. Prescott
Roy Radner
Frank Ramsey
Donald John Roberts
Paul Samuelson
Yuliy Sannikov
Thomas Sargent
Leonard J. Savage
Herbert Scarf
Reinhard Selten
Amartya Sen
Lloyd S. Shapley
Stephen Smale
Robert Solow
Hugo F. Sonnenschein
Nancy L. Stokey
Albert W. Tucker
Hirofumi Uzawa
Robert B. Wilson
Abraham Wald
Hermann Wold
Nicholas C. Yannelis
See also
Econophysics
Mathematical finance
References
Further reading
Alpha C. Chiang and Kevin Wainwright, [1967] 2005. Fundamental Methods of Mathematical Economics, McGraw-Hill Irwin. Contents.
E. Roy Weintraub, 1982. Mathematics for Economists, Cambridge. Contents.
Stephen Glaister, 1984. Mathematical Methods for Economists, 3rd ed., Blackwell. Contents.
Akira Takayama, 1985. Mathematical Economics, 2nd ed. Cambridge. Contents.
Nancy L. Stokey and Robert E. Lucas with Edward Prescott, 1989. Recursive Methods in Economic Dynamics, Harvard University Press. Description and chapter-preview links.
A. K. Dixit, [1976] 1990. Optimization in Economic Theory, 2nd ed., Oxford. Description and contents preview.
Kenneth L. Judd, 1998. Numerical Methods in Economics, MIT Press. Description and chapter-preview links.
Michael Carter, 2001. Foundations of Mathematical Economics, MIT Press. Contents.
Ferenc Szidarovszky and Sándor Molnár, 2002. Introduction to Matrix Theory: With Applications to Business and Economics, World Scientific Publishing. Description and preview.
D. Wade Hands, 2004. Introductory Mathematical Economics, 2nd ed. Oxford. Contents.
Vladimir Pokrovskii, 2018. Econodynamics. The Theory of Social Production, 3rd ed., Springer.
Giancarlo Gandolfo, [1997] 2009. Economic Dynamics, 4th ed., Springer. Description and preview.
John Stachurski, 2009. Economic Dynamics: Theory and Computation, MIT Press. Description and preview.
External links
Journal of Mathematical Economics Aims & Scope
Erasmus Mundus Master QEM - Models and Methods of Quantitative Economics, The Models and Methods of Quantitative Economics - QEM
Mathematical and quantitative methods (economics) | Mathematical economics | [
"Mathematics"
] | 9,011 | [
"Applied mathematics",
"Mathematical economics"
] |
18,968,032 | https://en.wikipedia.org/wiki/IntervalZero | IntervalZero, Inc. develops hard real-time software and its symmetric multiprocessing (SMP) enabled RTX and RTX64 software transform the Microsoft Windows general-purpose operating system (GPOS) into a real-time operating system (RTOS).
IntervalZero and its engineering group regularly release new software (cf. the History section below).
Its most recent product, RTX64, focuses on 64-bit and symmetric multiprocessing (SMP) to replace dedicated hardware based systems such as digital signal processors (DSPs) or field-programmable gate arrays (FPGAs) with multicore PCs.
For instance, an audio mixing surface manufacturer which largely deployed DSP based systems, switched to personal computer (PC) based systems, dedicating multi-core processors for the real time audio processing.
Founded in July 2008 by a group of former Ardence executives, IntervalZero is headed by CEO Jeffrey D. Hibbard. The firm has offices in Waltham, MA; Nice, France; Munich, Germany; and Taiwan, ROC.
This global presence is important because these solutions are deployed worldwide, primarily in industrial automation, military, aerospace, medical devices, digital media, and test and simulation software.
The corporate name, IntervalZero, comes from the technical definition of the optimal experience between a system command and execution.
History
IntervalZero's lineage traces back to 1980, when a group of Massachusetts Institute of Technology engineers started VenturCom and began to develop expertise in embedded technology. It was during this time that Venix was developed and marketed.
Their first innovation was to focus on Windows NT 4.0 as a possible real-time solution for industry, releasing RTX in 1995. Since then, many industrial controllers have been PC- and Windows-based.
Their second innovation came as a second product, Component Integrator, which made Windows NT 4.0 usable as an embedded OS. It was licensed by Microsoft a few years later and became the origin of Windows NT Embedded.
In 2004, VenturCom was renamed Ardence.
In December 2006, Citrix Systems announced an agreement to acquire Ardence's enterprise and embedded software businesses. It integrated the software streaming products into the Citrix portfolio in 2007 and early 2008.
In 2008, a group of former Ardence executives founded IntervalZero and acquired the Ardence embedded software business from Citrix Systems Inc. Citrix retained a minority ownership of the firm.
On July 28, 2008, IntervalZero announced that it had acquired the Ardence embedded software division from Citrix Systems Inc.
Products
IntervalZero develops RTX and RTX64, hard real-time software that transforms Microsoft Windows into a real-time operating system (RTOS).
Executive Officers
Jeffrey D. Hibbard, Chief Executive Officer
Mark Van Vranken, Chief Financial Officer
Brian Calder, Vice President, North America Sales & Marketing
Daron Underwood, Vice President, CTO
Brian Carter, Vice President, Strategic Communications
Bryan Levey, Vice President, Engineering
References
Software companies based in Massachusetts
Real-time operating systems
Software companies established in 2008
2008 establishments in Massachusetts
Companies formed by management buyout | IntervalZero | [
"Technology"
] | 641 | [
"Real-time computing",
"Real-time operating systems"
] |
2,357,705 | https://en.wikipedia.org/wiki/Borel%20summation | In mathematics, Borel summation is a summation method for divergent series, introduced by . It is particularly useful for summing divergent asymptotic series, and in some sense gives the best possible sum for such series. There are several variations of this method that are also called Borel summation, and a generalization of it called Mittag-Leffler summation.
Definition
There are (at least) three slightly different methods called Borel summation. They differ in which series they can sum, but are consistent, meaning that if two of the methods sum the same series they give the same answer.
Throughout, let $A(z)$ denote a formal power series

$$A(z) = \sum_{k=0}^{\infty} a_k z^k,$$

and define the Borel transform of $A$ to be its corresponding exponential series

$$\mathcal{B}A(t) \equiv \sum_{k=0}^{\infty} \frac{a_k}{k!}\, t^k.$$
Borel's exponential summation method
Let $A_n(z)$ denote the partial sum

$$A_n(z) = \sum_{k=0}^{n} a_k z^k.$$

A weak form of Borel's summation method defines the Borel sum of $A$ to be

$$\lim_{t \to \infty} e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}\, A_n(z).$$

If this converges at $z \in \mathbb{C}$ to some function $a(z)$, we say that the weak Borel sum of $A$ converges at $z$, and write $\sum a_k z^k = a(z) \; (\mathbf{wB})$.
Borel's integral summation method
Suppose that the Borel transform converges for all positive real numbers to a function growing sufficiently slowly that the following integral is well defined (as an improper integral); the Borel sum of $A$ is then given by

$$\int_0^{\infty} e^{-t}\, \mathcal{B}A(tz)\, dt,$$

a Laplace transform of the Borel transform $\mathcal{B}A$.

If the integral converges at $z \in \mathbb{C}$ to some $a(z)$, we say that the Borel sum of $A$ converges at $z$, and write $\sum a_k z^k = a(z) \; (\mathbf{B})$.
Borel's integral summation method with analytic continuation
This is similar to Borel's integral summation method, except that the Borel transform need not converge for all $t > 0$, but converges to an analytic function of $t$ near 0 that can be analytically continued along the positive real axis.
Basic properties
Regularity
The methods $(\mathbf{B})$ and $(\mathbf{wB})$ are both regular summation methods, meaning that whenever $A(z)$ converges (in the standard sense), the Borel sum and weak Borel sum also converge, and do so to the same value, i.e.

$$\sum_{k=0}^{\infty} a_k z^k = A(z) < \infty \quad \Rightarrow \quad A(z) = a(z) \; (\mathbf{B}) = a(z) \; (\mathbf{wB}).$$

Regularity of $(\mathbf{B})$ is easily seen by a change in order of integration, which is valid due to absolute convergence: if $A$ is convergent at $z$, then

$$A(z) = \sum_{k=0}^{\infty} a_k z^k = \sum_{k=0}^{\infty} a_k \left(\int_0^{\infty} e^{-t} t^k\, dt\right) \frac{z^k}{k!} = \int_0^{\infty} e^{-t} \sum_{k=0}^{\infty} \frac{a_k (tz)^k}{k!}\, dt,$$

where the rightmost expression is exactly the Borel sum at $z$.

Regularity of $(\mathbf{B})$ and $(\mathbf{wB})$ implies that these methods provide analytic extensions to $A(z)$.
Nonequivalence of Borel and weak Borel summation
Any series that is weak Borel summable at $z \in \mathbb{C}$ is also Borel summable at $z$. However, one can construct examples of series which are divergent under weak Borel summation, but which are Borel summable. The following theorem characterises the equivalence of the two methods.
Theorem.
Let $A(z)$ be a formal power series, and fix $z \in \mathbb{C}$; then:
If $A(z) = a(z) \; (\mathbf{wB})$, then $A(z) = a(z) \; (\mathbf{B})$.
If $A(z) = a(z) \; (\mathbf{B})$, and $\lim_{t\to\infty} e^{-t}\, \mathcal{B}A(zt) = 0$, then $A(z) = a(z) \; (\mathbf{wB})$.
Relationship to other summation methods
$(\mathbf{B})$ is the special case of Mittag-Leffler summation with $\alpha = 1$.
$(\mathbf{wB})$ can be seen as the limiting case of the generalized Euler summation method $(\mathrm{E}, q)$, in the sense that as $q \to \infty$ the domain of convergence of the $(\mathrm{E}, q)$ method converges up to the domain of convergence for $(\mathbf{B})$.
Uniqueness theorems
There are always many different functions with any given asymptotic expansion. However, there is sometimes a best possible function, in the sense that the errors in the finite-dimensional approximations are as small as possible in some region. Watson's theorem and Carleman's theorem show that Borel summation produces such a best possible sum of the series.
Watson's theorem
Watson's theorem gives conditions for a function to be the Borel sum of its asymptotic series. Suppose that $f$ is a function satisfying the following conditions:

$f$ is holomorphic in some region $|z| < R$, $|\arg(z)| < \frac{\pi}{2} + \varepsilon$, for some positive $R$ and $\varepsilon$.

In this region $f$ has an asymptotic series $a_0 + a_1 z + a_2 z^2 + \cdots$ with the property that the error

$$\left|f(z) - a_0 - a_1 z - \cdots - a_{n-1} z^{n-1}\right|$$

is bounded by

$$C^{n+1}\, n!\, |z|^n$$

for all $z$ in the region (for some positive constant $C$).

Then Watson's theorem says that in this region $f$ is given by the Borel sum of its asymptotic series. More precisely, the series for the Borel transform converges in a neighborhood of the origin, can be analytically continued to the positive real axis, and the integral defining the Borel sum converges to $f(z)$ for $z$ in the region above.
Carleman's theorem
Carleman's theorem shows that a function is uniquely determined by an asymptotic series in a sector provided the errors in the finite order approximations do not grow too fast. More precisely it states that if $f$ is analytic in the interior of the sector $|z| < C$, $\operatorname{Re}(z) > 0$, and $|f(z)| \le |b_n z|^n$ in this region for all $n$, then $f$ is zero provided that the series $\sum_n 1/b_n$ diverges.
Carleman's theorem gives a summation method for any asymptotic series whose terms do not grow too fast, as the sum can be defined to be the unique function with this asymptotic series in a suitable sector if it exists. Borel summation is slightly weaker than the special case of this when $b_n = cn$ for some constant $c$. More generally one can define summation methods slightly stronger than Borel's by taking the numbers $b_n$ to be slightly larger, for example $b_n = cn\log n$ or $b_n = cn\log n\log\log n$. In practice this generalization is of little use, as there are almost no natural examples of series summable by this method that cannot also be summed by Borel's method.
Example
The function $f(z) = e^{-1/z}$ has the asymptotic series $0 + 0z + 0z^2 + \cdots$ with an error bound of the form above in the region $|\arg(z)| < \theta$, $|z| < r$ for any $\theta < \pi/2$, but is not given by the Borel sum of its asymptotic series. This shows that the number $\pi/2$ in Watson's theorem cannot be replaced by any smaller number (unless the bound on the error is made smaller).
Examples
The geometric series
Consider the geometric series

$$A(z) = \sum_{k=0}^{\infty} z^k,$$

which converges (in the standard sense) to $\frac{1}{1-z}$ for $|z| < 1$. The Borel transform is

$$\mathcal{B}A(t) \equiv \sum_{k=0}^{\infty} \frac{t^k}{k!} = e^t,$$

from which we obtain the Borel sum

$$\int_0^{\infty} e^{-t}\, e^{tz}\, dt = \frac{1}{1-z},$$

which converges in the larger region $\operatorname{Re}(z) < 1$, giving an analytic continuation of the original series.

Considering instead the weak Borel transform, the partial sums are given by $A_N(z) = \frac{1 - z^{N+1}}{1-z}$, and so the weak Borel sum is

$$\lim_{t\to\infty} e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}\,\frac{1 - z^{n+1}}{1-z} = \lim_{t\to\infty} \frac{e^{-t}}{1-z}\left(e^t - z\, e^{zt}\right) = \frac{1}{1-z},$$

where, again, convergence is on $\operatorname{Re}(z) < 1$. Alternatively this can be seen by appealing to part 2 of the equivalence theorem, since for $\operatorname{Re}(z) < 1$,

$$\lim_{t\to\infty} e^{-t}\, \mathcal{B}A(zt) = \lim_{t\to\infty} e^{(z-1)t} = 0.$$
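These closed forms are easy to check numerically. The sketch below evaluates the Borel integral for the geometric series at $z = -2$, which lies outside the disc of convergence but inside $\operatorname{Re}(z) < 1$ (scipy is used for the quadrature; tolerances are left at their defaults).

```python
import numpy as np
from scipy.integrate import quad

z = -2.0  # outside |z| < 1, but Re(z) < 1, so the Borel sum converges

# Borel transform of sum z^k is exp(t*z) along the ray, so the
# Borel sum is integral_0^inf exp(-t) * exp(t*z) dt.
borel_sum, _ = quad(lambda t: np.exp(-t) * np.exp(t * z), 0, np.inf)
print(f"Borel sum at z = {z}: {borel_sum:.6f}")
print(f"analytic continuation 1/(1-z) = {1 / (1 - z):.6f}")  # 0.333333
```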
An alternating factorial series
Consider the series

$$A(z) = \sum_{k=0}^{\infty} k!\, (-z)^k;$$

then $A(z)$ does not converge for any nonzero $z \in \mathbb{C}$. The Borel transform is

$$\mathcal{B}A(t) \equiv \sum_{k=0}^{\infty} (-t)^k = \frac{1}{1+t}$$

for $|t| < 1$, which can be analytically continued to all $t \ge 0$. So the Borel sum is

$$\int_0^{\infty} \frac{e^{-t}}{1 + zt}\, dt = \frac{1}{z}\, e^{1/z}\, \Gamma\!\left(0, \frac{1}{z}\right)$$

(where $\Gamma(s, x)$ is the incomplete gamma function).

This integral converges for all $z \ge 0$, so the original divergent series is Borel summable for all such $z$. This function has an asymptotic expansion as $z$ tends to 0 that is given by the original divergent series. This is a typical example of the fact that Borel summation will sometimes "correctly" sum divergent asymptotic expansions.

Again, since

$$\lim_{t\to\infty} e^{-t}\, \mathcal{B}A(zt) = \lim_{t\to\infty} \frac{e^{-t}}{1 + zt} = 0$$

for all $z \ge 0$, the equivalence theorem ensures that weak Borel summation has the same domain of convergence, $z \ge 0$.
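Numerically, the Borel integral at $z = 1$ can be compared against the reference value $e\,E_1(1) \approx 0.596347$ obtained from the incomplete-gamma closed form above, while the raw partial sums oscillate without bound (an illustrative check, not part of the original exposition).

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

z = 1.0
# Borel transform of sum k!(-z)^k is 1/(1 + t), continued to all t >= 0;
# the Borel sum is the integral of exp(-t)/(1 + z*t) over [0, inf).
borel_sum, _ = quad(lambda t: np.exp(-t) / (1 + z * t), 0, np.inf)
print(f"Borel sum at z = 1: {borel_sum:.6f}")         # ~0.596347
print(f"reference e*E1(1) : {np.e * exp1(1.0):.6f}")  # ~0.596347

# By contrast, the partial sums of the series itself oscillate wildly:
s, partials = 0.0, []
for k in range(8):
    s += (-1) ** k * math.factorial(k)
    partials.append(s)
print("partial sums:", partials)  # 1, 0, 2, -4, 20, -100, 620, -4420
```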
An example in which equivalence fails
The following example extends on an example given in the literature. Consider
After changing the order of summation, the Borel transform is given by
At the Borel sum is given by
where is the Fresnel integral. Via the convergence theorem along chords, the Borel integral converges for all (the integral diverges for ).
For the weak Borel sum we note that
holds only for , and so the weak Borel sum converges on this smaller domain.
Existence results and the domain of convergence
Summability on chords
If a formal series $A(z)$ is Borel summable at $z_0 \in \mathbb{C}$, then it is also Borel summable at all points on the chord $Oz_0$ connecting $z_0$ to the origin. Moreover, there exists a function $a(z)$, analytic throughout the disk $|z| < |z_0|$, such that

$$\sum_{k=0}^{\infty} a_k z^k = a(z) \; (\mathbf{B})$$

for all $z = \theta z_0$, $0 \le \theta \le 1$.

An immediate consequence is that the domain of convergence of the Borel sum is a star domain in $\mathbb{C}$. More can be said about the domain of convergence of the Borel sum than that it is a star domain; it is referred to as the Borel polygon, and is determined by the singularities of the series $A(z)$.
The Borel polygon
Suppose that $A(z)$ has strictly positive radius of convergence, so that it is analytic in a non-trivial region containing the origin, and let $S_A$ denote the set of singularities of $A$. This means that $P \in S_A$ if and only if $A$ can be continued analytically along the open chord from 0 to $P$, but not to $P$ itself. For $P \in S_A$, let $L_P$ denote the line passing through $P$ which is perpendicular to the chord $OP$. Define the sets

$$\Pi_P = \{z \in \mathbb{C} : \operatorname{Re}(z\bar{P}) < |P|^2\},$$

the set of points which lie on the same side of $L_P$ as the origin. The Borel polygon of $A$ is the set

$$\Pi_A = \bigcap_{P \in S_A} \Pi_P.$$

An alternative definition was used by Borel and Phragmén. Let $M$ denote the largest star domain on which there is an analytic extension of $A$; then $\Pi_A$ is the largest subset of $M$ such that for all $P \in \Pi_A$ the interior of the circle with diameter $OP$ is contained in $M$. Referring to the set $\Pi_A$ as a polygon is something of a misnomer, since the set need not be polygonal at all; if, however, $A(z)$ has only finitely many singularities then $\Pi_A$ will in fact be a polygon.
The following theorem, due to Borel and Phragmén, provides convergence criteria for Borel summation.
Theorem.
The series $A(z)$ is $(\mathbf{B})$ summable at all $z \in \operatorname{int}(\Pi_A)$, and is divergent at all $z \notin \overline{\Pi_A}$.
Note that summability for $z \in \partial\Pi_A$ depends on the nature of the particular point.
Example 1
Let $\omega_1, \ldots, \omega_m$ denote the $m$-th roots of unity, and consider

$$A(z) = \sum_{k=0}^{\infty} \left(\omega_1^k + \cdots + \omega_m^k\right) z^k = \sum_{i=1}^{m} \frac{1}{1 - \omega_i z},$$

which converges on $|z| < 1$. Seen as a function on $\mathbb{C}$, $A(z)$ has singularities at $S_A = \{\omega_i^{-1} : i = 1, \ldots, m\}$, which are again the $m$-th roots of unity, and consequently the Borel polygon $\Pi_A$ is given by the regular $m$-gon centred at the origin, such that $z = 1$ is a midpoint of an edge.
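The geometry of this example is easy to probe computationally: with all singularities on the unit circle, a point $z$ lies in the (closed) Borel polygon exactly when $\operatorname{Re}(z\bar{P}) \le 1$ for every singularity $P$. The membership test below is an illustrative sketch of that criterion.

```python
import numpy as np

m = 5
singularities = np.exp(2j * np.pi * np.arange(m) / m)  # m-th roots of unity

def in_borel_polygon(z):
    # z is on the origin side of every line L_P when Re(z * conj(P)) <= |P|^2;
    # here |P| = 1 for every singularity P.
    return all(np.real(z * np.conj(P)) <= 1.0 for P in singularities)

print(in_borel_polygon(0.0))       # True: the origin is always inside
print(in_borel_polygon(1.0))       # True: z = 1 lies on the boundary (edge midpoint)
print(in_borel_polygon(1.2 + 0j))  # False: beyond the edge through z = 1
```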
Example 2
The formal series

$$A(z) = \sum_{k=0}^{\infty} z^{2^k}$$

converges for all $|z| < 1$ (for instance, by the comparison test with the geometric series). It can however be shown that $A$ does not converge for any point $z$ such that $z^{2^n} = 1$ for some $n$. Since the set of such $z$ is dense in the unit circle, there can be no analytic extension of $A$ outside of $|z| < 1$. Subsequently the largest star domain to which $A$ can be analytically extended is $M = \{|z| < 1\}$, from which (via the second definition) one obtains $\Pi_A = \{|z| < 1\}$. In particular one sees that the Borel polygon is not polygonal.
A Tauberian theorem
A Tauberian theorem provides conditions under which convergence of one summation method implies convergence under another method. The principal Tauberian theorem for Borel summation provides conditions under which the weak Borel method implies convergence of the series.
Theorem. If $A(z)$ is $(\mathbf{wB})$ summable at $z_0 \in \mathbb{C}$, with $\sum a_k z_0^k = a(z_0) \; (\mathbf{wB})$, and

$$\lim_{k \to \infty} \sqrt{k}\, a_k z_0^k = 0,$$

then $\sum_{k=0}^{\infty} a_k z_0^k = a(z_0)$, and the series converges for all $|z| < |z_0|$.
Applications
Borel summation finds application in perturbation expansions in quantum field theory. In particular, in 2-dimensional Euclidean field theory the Schwinger functions can often be recovered from their perturbation series using Borel summation. Some of the singularities of the Borel transform are related to instantons and renormalons in quantum field theory.
Generalizations
Borel summation requires that the coefficients do not grow too fast: more precisely, $a_k$ has to be bounded by $k!\, C^{k+1}$ for some $C$. There is a variation of Borel summation that replaces factorials $k!$ with $(kn)!$ for some positive integer $n$, which allows the summation of some series with $a_k$ bounded by $(kn)!\, C^{k+1}$ for some $C$. This generalization is given by Mittag-Leffler summation.
In the most general case, Borel summation is generalized by Nachbin resummation, which can be used when the bounding function is of some general type (psi-type), instead of being exponential type.
See also
Abel summation
Abel's theorem
Abel–Plana formula
Euler summation
Cesàro summation
Lambert summation
Laplace transform
Nachbin resummation
Abelian and tauberian theorems
Van Wijngaarden transformation
Notes
References
Mathematical series
Summability methods
Quantum chromodynamics | Borel summation | [
"Mathematics"
] | 2,323 | [
"Sequences and series",
"Mathematical structures",
"Series (mathematics)",
"Calculus",
"Summability methods"
] |
2,360,392 | https://en.wikipedia.org/wiki/Free-radical%20reaction | A free-radical reaction is any chemical reaction involving free radicals. This reaction type is abundant in organic reactions. Two pioneering studies into free radical reactions have been the discovery of the triphenylmethyl radical by Moses Gomberg (1900) and the lead-mirror experiment described by Friedrich Paneth in 1927. In this last experiment tetramethyllead is decomposed at elevated temperatures to methyl radicals and elemental lead in a quartz tube. The gaseous methyl radicals are moved to another part of the chamber in a carrier gas where they react with lead in a mirror film which slowly disappears.
When radical reactions are part of organic synthesis the radicals are often generated from radical initiators such as peroxides or azobis compounds. Many radical reactions are chain reactions with a chain initiation step, a chain propagation step and a chain termination step. Reaction inhibitors slow down a radical reaction and radical disproportionation is a competing reaction. Radical reactions occur frequently in the gas phase, are often initiated by light, are rarely acid or base catalyzed and are not dependent on polarity of the reaction medium. Reactions are also similar whether in the gas phase or solution phase.
Kinetics
The chemical kinetics of a radical reaction depend on all of these individual reactions. At steady state the concentrations of the initiating (I·) and terminating (T·) species are negligible, and the rate of initiation and the rate of termination are equal. For a chain initiated by the reactant A itself, the overall reaction rate can be written as:

$$\text{rate} = k_p\left(\frac{k_i}{k_t}\right)^{1/2}[\mathrm{A}]^{3/2},$$

with a broken-order dependence of 1.5 with respect to the initiating species.
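A minimal steady-state derivation reproduces this broken order. Assume a generic self-initiated chain with initiation A → 2 R·, propagation R· + A → P + R·, and termination 2 R· → R₂; the rate constants $k_i$, $k_p$, $k_t$ are generic labels introduced here for illustration, not values from the text.

```latex
% Illustrative scheme (k_i, k_p, k_t are generic labels):
%   initiation:   A -> 2 R*          rate = k_i [A]
%   propagation:  R* + A -> P + R*   rate = k_p [R*][A]
%   termination:  2 R* -> R_2        rate = k_t [R*]^2
% Steady state for the radicals, d[R*]/dt = 0, gives 2 k_i [A] = 2 k_t [R*]^2:
\[
[\mathrm{R}^{\bullet}] = \left(\frac{k_i}{k_t}\right)^{1/2}[\mathrm{A}]^{1/2},
\qquad
\text{rate} = k_p[\mathrm{R}^{\bullet}][\mathrm{A}]
            = k_p\left(\frac{k_i}{k_t}\right)^{1/2}[\mathrm{A}]^{3/2},
\]
% recovering the overall reaction order of 1.5 in the initiating reactant.
```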
The reactivity of different compounds toward a certain radical is measured in so-called competition experiments. Compounds bearing carbon–hydrogen bonds react with radicals in the order primary < secondary < tertiary < benzyl < allyl, reflecting the order in C–H bond dissociation energies.
Many stabilizing effects can be explained as resonance effects, an effect specific to radicals is the captodative effect.
Reactions
The most important reaction types involving free radicals are:
Free-radical substitution, for instance free-radical halogenation and autoxidation.
Free-radical addition reactions
Intramolecular free radical reactions (substitution or addition) such as the Hofmann–Löffler reaction or the Barton reaction
Free radical rearrangement reactions are rare compared to rearrangements involving carbocations and restricted to aryl migrations.
Fragmentation reactions or homolysis, for instance the Norrish reaction, the Hunsdiecker reaction and certain decarboxylations. For fragmentations taking place in mass spectrometry see mass spectrum analysis.
Electron transfer. An example is the decomposition of certain peresters by Cu(I) which is a one-electron reduction reaction forming Cu(II), an alkoxy oxygen radical and a carboxylate. Another example is Kolbe electrolysis.
Radical-nucleophilic aromatic substitution is a special case of nucleophilic aromatic substitution.
Carbon–carbon coupling reactions, for example manganese-mediated coupling reactions.
Elimination reactions
Free radicals can be formed by photochemical reaction and thermal fission reaction or by oxidation reduction reaction. Specific reactions involving free radicals are combustion, pyrolysis and cracking. Free radical reactions also occur within and outside of cells, are injurious, and have been implicated in a wide range of human diseases (see 13-Hydroxyoctadecadienoic acid, 9-hydroxyoctadecadienoic acid, reactive oxygen species, and Oxidative stress) as well as many of the maladies associated with ageing (see ageing).
See also
Radical clock
References
Organic reactions | Free-radical reaction | [
"Chemistry"
] | 739 | [
"Free radical reactions",
"Organic reactions"
] |
2,360,715 | https://en.wikipedia.org/wiki/Enterprise%20risk%20management | Enterprise risk management (ERM) in business includes the methods and processes used by organizations to manage risks and seize opportunities related to the achievement of their objectives. ERM provides a framework for risk management, which typically involves identifying particular events or circumstances relevant to the organization's objectives (threats and opportunities), assessing them in terms of likelihood and magnitude of impact, determining a response strategy, and monitoring process. By identifying and proactively addressing risks and opportunities, business enterprises protect and create value for their stakeholders, including owners, employees, customers, regulators, and society overall.
ERM can also be described as a risk-based approach to managing an enterprise, integrating concepts of internal control, the Sarbanes–Oxley Act, data protection and strategic planning. ERM is evolving to address the needs of various stakeholders, who want to understand the broad spectrum of risks facing complex organizations to ensure they are appropriately managed. Regulators and debt rating agencies have increased their scrutiny on the risk management processes of companies.
According to Thomas Stanton of Johns Hopkins University, the point of enterprise risk management is not to create more bureaucracy, but to facilitate discussion on what the really big risks are.
ERM frameworks defined
There are various important ERM frameworks, each of which describes an approach for identifying, analyzing, responding to, and monitoring risks and opportunities, within the internal and external environment facing the enterprise. Management selects a risk response strategy for specific risks identified and analyzed, which may include:
Avoidance: exiting the activities giving rise to risk
Reduction: taking action to reduce the likelihood or impact related to the risk
Alternative Actions: deciding and considering other feasible steps to minimize risks
Share or Insure: transferring or sharing a portion of the risk, to finance it
Accept: no action is taken, due to a cost/benefit decision
Monitoring is typically performed by management as part of its internal control activities, such as review of analytical reports or management committee meetings with relevant experts, to understand how the risk response strategy is working and whether the objectives are being achieved.
Casualty Actuarial Society framework
In 2003, the Casualty Actuarial Society (CAS) defined ERM as "the discipline by which an organization in any industry assesses, controls, exploits, finances, and monitors risks from all sources for the purpose of increasing the organization's short- and long-term value to its stakeholders." The CAS conceptualized ERM as proceeding across the two dimensions of risk type and risk management processes. The risk types and examples include:
Hazard risk: Liability torts, Property damage, Natural catastrophe
Financial risk: Pricing risk, Asset risk, Currency risk, Liquidity risk
Operational risk: Customer satisfaction, Product failure, Integrity, Reputational risk; Internal poaching; Knowledge drain
Strategic risks: Competition, Social trend, Capital availability
The risk management process involves:
Establishing Context: This includes an understanding of the current conditions in which the organization operates on an internal, external and risk management context.
Identifying Risks: This includes the documentation of the material threats to the organization's achievement of its objectives and the representation of areas that the organization may exploit for competitive advantage.
Analyzing/Quantifying Risks: This includes the calibration and, if possible, creation of probability distributions of outcomes for each material risk.
Integrating Risks: This includes the aggregation of all risk distributions, reflecting correlations and portfolio effects, and the formulation of the results in terms of impact on the organization's key performance metrics.
Assessing/Prioritizing Risks: This includes the determination of the contribution of each risk to the aggregate risk profile, and appropriate prioritization.
Treating/Exploiting Risks: This includes the development of strategies for controlling and exploiting the various risks.
Monitoring and Reviewing: This includes the continual measurement and monitoring of the risk environment and the performance of the risk management strategies.
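As a toy illustration of the quantification, integration, and prioritization steps above, the sketch below aggregates a handful of invented risk distributions by Monte Carlo simulation. The risk names, likelihoods, and severities are fabricated for illustration, and the risks are treated as independent for simplicity, whereas the framework above also calls for modeling correlations and portfolio effects.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000  # Monte Carlo trials

# Invented risks: annual loss ($M) = probability of occurrence x severity.
risks = {
    "hazard":      (0.05, 20.0),   # (likelihood, mean severity)
    "financial":   (0.20, 8.0),
    "operational": (0.30, 3.0),
    "strategic":   (0.10, 15.0),
}

losses = np.zeros(n)
contributions = {}
for name, (p, sev) in risks.items():
    occurred = rng.random(n) < p
    amount = occurred * rng.exponential(sev, n)   # severity when it occurs
    contributions[name] = amount.mean()
    losses += amount

# Integrate, then prioritize by each risk's contribution to the aggregate.
print(f"expected annual loss: {losses.mean():.2f}"
      f"  (99th percentile: {np.percentile(losses, 99):.2f})")
for name, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name:<12} expected contribution {c:.2f}")
```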
COSO ERM framework
The COSO "Enterprise Risk Management-Integrated Framework" published in 2004 (New edition COSO ERM 2017 is not Mentioned and the 2004 version is outdated) defines ERM as a "…process, effected by an entity's board of directors, management, and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives."
The COSO ERM Framework has eight components and four objectives categories. It is an expansion of the COSO Internal Control-Integrated Framework published in 1992 and amended in 1994. The eight components are:
Internal Environment
Objective Setting
Event Identification
Risk Assessment
Risk Response
Control Activities
Information and Communication
Monitoring
The four objectives categories - additional components highlighted - are:
Strategy - high-level goals, aligned with and supporting the organization's mission
Operations - effective and efficient use of resources
Financial Reporting - reliability of operational and financial reporting
Compliance - compliance with applicable laws and regulations
ISO 31000: the new International Risk Management Standard
ISO 31000 is an international standard for risk management, published on 13 November 2009 and updated in 2018. An accompanying standard, ISO 31010 - Risk Assessment Techniques, followed shortly afterwards (published 1 December 2009), together with the updated risk management vocabulary in ISO Guide 73. The 2018 edition sets out eight principles based around the central purpose of risk management, the creation and protection of value.
Implementing an ERM program
Goals of an ERM program
Organizations by nature manage risks and have a variety of existing departments or functions ("risk functions") that identify and manage particular risks. However, each risk function varies in capability and how it coordinates with other risk functions. A central goal and challenge of ERM is improving this capability and coordination, while integrating the output to provide a unified picture of risk for stakeholders and improving the organization's ability to manage the risks effectively.
Typical risk functions
The primary risk functions in large corporations that may participate in an ERM program typically include:
Strategic planning - identifies external threats and competitive opportunities, along with strategic initiatives to address them
Marketing - understands the target customer to ensure product/service alignment with customer requirements
Compliance & Ethics - monitors compliance with code of conduct and directs fraud investigations
Accounting / Financial compliance - directs the Sarbanes–Oxley Section 302 and 404 assessment, which identifies financial reporting risks
Law Department - manages litigation and analyzes emerging legal trends that may impact the organization
Insurance - ensures the proper insurance coverage for the organization
Treasury - ensures cash is sufficient to meet business needs, while managing risk related to commodity pricing or foreign exchange
Operational Quality Assurance - verifies operational output is within tolerances
Operations management - ensures the business runs day-to-day and that related barriers are surfaced for resolution
Credit - ensures any credit provided to customers is appropriate to their ability to pay
Customer service - ensures customer complaints are handled promptly and root causes are reported to operations for resolution
Internal audit - evaluates the effectiveness of each of the above risk functions and recommends improvements
Corporate Security - identifies, evaluates, and mitigates risks posed by physical and information security threats
Common challenges in ERM implementation
Various consulting firms offer suggestions for how to implement an ERM program. Common topics and challenges include:
Identifying executive sponsors for ERM.
Establishing a common risk language or glossary.
Describing the entity's risk appetite (i.e., risks it will and will not take)
Identifying and describing the risks in a "risk inventory".
Implementing a risk-ranking methodology to prioritize risks within and across functions.
Establishing a risk committee and/or chief risk officer (CRO) to coordinate certain activities of the risk functions.
Establishing ownership for particular risks and responses.
Demonstrating the cost-benefit of the risk management effort.
Developing action plans to ensure the risks are appropriately managed.
Developing consolidated reporting for various stakeholders.
Monitoring the results of actions taken to mitigate risk.
Ensuring efficient risk coverage by internal auditors, consulting teams, and other evaluating entities.
Developing a technical ERM framework that enables secure participation by 3rd parties and remote employees.
Internal audit role
In addition to information technology audit, internal auditors play an important role in evaluating the risk-management processes of an organization and advocating their continued improvement. However, to preserve its organizational independence and objective judgment, Internal Audit professional standards indicate the function should not take any direct responsibility for making risk management decisions for the enterprise or managing the risk-management function.
Internal auditors typically perform an annual risk assessment of the enterprise, to develop a plan of audit engagements for the upcoming year. This plan is updated at various frequencies in practice. This typically involves review of the various risk assessments performed by the enterprise (e.g., strategic plans, competitive benchmarking, and SOX 404 top-down risk assessment), consideration of prior audits, and interviews with a variety of senior management. It is designed for identifying audit projects, not to identify, prioritize, and manage risks directly for the enterprise.
Current issues in ERM
The risk management processes of corporations worldwide are under increasing regulatory and private scrutiny. Risk is an essential part of any business. Properly managed, it drives growth and opportunity. Executives struggle with business pressures that may be partly or completely beyond their immediate control, such as distressed financial markets; mergers, acquisitions and restructurings; disruptive technology change; geopolitical instabilities; and the rising price of energy.
Sarbanes–Oxley Act requirements
Section 404 of the Sarbanes–Oxley Act of 2002 required U.S. publicly traded corporations to utilize a control framework in their internal control assessments. Many opted for the COSO Internal Control Framework, which includes a risk assessment element. In addition, new guidance issued by the Securities and Exchange Commission (SEC) and Public Company Accounting Oversight Board in 2007 placed increasing scrutiny on top-down risk assessment and included a specific requirement to perform a fraud risk assessment. Fraud risk assessments typically involve identifying scenarios of potential (or experienced) fraud, related exposure to the organization, related controls, and any action taken as a result.
NYSE corporate governance rules
The New York Stock Exchange requires the Audit Committees of its listed companies to "discuss policies with respect to risk assessment and risk management." The related commentary continues: "While it is the job of the CEO and senior management to assess and manage the company’s exposure to risk, the audit committee must discuss guidelines and policies to govern the process by which this is handled. The audit committee should discuss the company’s major financial risk exposures and the steps management has taken to monitor and control such exposures. The audit committee is not required to be the sole body responsible for risk assessment and management, but, as stated above, the committee must discuss guidelines and policies to govern the process by which risk assessment and management is undertaken. Many companies, particularly financial companies, manage and assess their risk through mechanisms other than the audit committee. The processes these companies have in place should be reviewed in a general manner by the audit committee, but they need not be replaced by the audit committee."
ERM and corporate debt ratings
Standard & Poor's (S&P), the debt rating agency, plans to include a series of questions about risk management in its company evaluation process. This will rollout to financial companies in 2007. The results of this inquiry is one of the many factors considered in debt rating, which has a corresponding impact on the interest rates lenders charge companies for loans or bonds. On May 7, 2008, S&P also announced that it would begin including an ERM assessment in its ratings for non-financial companies starting in 2009, with initial comments in its reports during Q4 2008.
IFC Performance Standards
International Finance Corporation Performance Standards focus on the management of Health, Safety, Environmental and Social risks and impacts. The third edition was published on January 1, 2012 after a two-year negotiation process with the private sector, governments and civil society organizations. They have been adopted by the Equator Principles Banks, a consortium of over 118 commercial banks in 37 countries.
Data Privacy
Data privacy rules, such as the European Union's General Data Protection Regulation, increasingly foresee significant penalties for failure to maintain adequate protection of individuals' personal data such as names, e-mail addresses and personal financial information, or for failure to alert affected individuals when data privacy is breached. The EU regulation requires any organization, including organizations located outside the EU, to appoint a Data Protection Officer reporting to the highest management level if it handles the personal data of anyone living in the EU.
Actuarial response
Casualty Actuarial Society
In 2003, the Enterprise Risk Management Committee of the Casualty Actuarial Society (CAS) issued its overview of ERM. This paper laid out the evolution, rationale, definitions, and frameworks for ERM from the casualty actuarial perspective, and also included a vocabulary, conceptual and technical foundations, actual practice and applications, and case studies.
The CAS has specific stated ERM goals, including being "a leading supplier internationally of educational materials relating to Enterprise Risk Management (ERM) in the property casualty insurance arena," and has sponsored research, development, and training of casualty actuaries in that regard. The CAS has refrained from issuing its own credential; instead, in 2007, the CAS Board decided that the CAS should participate in the initiative to develop a global ERM designation, and make a final decision at some later date.
Society of Actuaries
In 2007, the Society of Actuaries developed the Chartered Enterprise Risk Analyst (CERA) credential in response to the growing field of enterprise risk management, the first new professional credential to be introduced by the SOA since 1949. CERA studies focus on how various risks, including operational, investment, strategic, and reputational risks, combine to affect organizations. CERAs work in environments beyond insurance, reinsurance and the consulting markets, including broader financial services, energy, transportation, media, technology, manufacturing and healthcare.
It takes approximately three to four years to complete the CERA curriculum which combines basic actuarial science, ERM principles and a course on professionalism. To earn the CERA credential, candidates must take five exams, fulfill an educational experience requirement, complete one online course, and attend one in-person course on professionalism.
CERA Global
Initially all CERAs were members of the Society of Actuaries but in 2009 the CERA designation became a global specialized professional credential, awarded and regulated by multiple actuarial bodies;
for example Chartered Enterprise Risk Actuary from the Institute and Faculty of Actuaries.
See also
Actuarial science
Airmic
Basel III
Benefit risk
Committee of Sponsoring Organizations of the Treadway Commission
Cost risk
Credit risk
Information Quality Management
ISO 31000
Market risk and strategic planning
Operational risk management
Optimism bias
Risk accounting
Risk adjusted return on capital
Risk appetite
Risk management tools
RiskLab
ISA 400 Risk Assessments and Internal Control
SOX 404 top-down risk assessment
Three lines of defence
Total Security Management
Web Presence Management
Gordon–Loeb model for cyber security investments
Certifications:
Certified Risk Professional (Institute of Risk Management)
Chartered Enterprise Risk Actuary (Institute and Faculty of Actuaries)
Chartered Enterprise Risk Analyst (Society of Actuaries)
References
External links
Airmic / Alarm / IRM (2010) "A structured approach to Enterprise Risk Management (ERM) and the requirements of ISO 31000"
Hopkin, Paul "Fundamentals of Risk Management 2nd Edition" Kogan-Page (2012)
Actuarial science
Auditing
Information technology audit
Internal audit | Enterprise risk management | [
"Mathematics"
] | 3,106 | [
"Applied mathematics",
"Actuarial science"
] |
5,842,980 | https://en.wikipedia.org/wiki/P2X%20purinoreceptor | The P2X receptors, also ATP-gated P2X receptor cation channel family, is a protein family that consists of cation-permeable ligand-gated ion channels that open in response to the binding of extracellular adenosine 5'-triphosphate (ATP). They belong to a larger family of receptors known as the ENaC/P2X superfamily. ENaC and P2X receptors have similar 3-D structures and are homologous. P2X receptors are present in a diverse array of organisms including humans, mouse, rat, rabbit, chicken, zebrafish, bullfrog, fluke, and amoeba.
Physiological roles
P2X receptors are involved in a variety of physiological processes, including:
Modulation of cardiac rhythm and contractility
Modulation of vascular tone
Mediation of nociception, especially chronic pain
Contraction of the vas deferens during ejaculation
Contraction of the urinary bladder during micturition
Platelet aggregation
Macrophage activation
Apoptosis
Neuronal-glial integration
Tissue distribution
P2X receptors are expressed in cells from a wide variety of animal tissues. On presynaptic and postsynaptic nerve terminals and glial cells throughout the central, peripheral and autonomic nervous systems, P2X receptors have been shown to modulate synaptic transmission. Furthermore, P2X receptors are able to initiate contraction in cells of the heart muscle, skeletal muscle, and various smooth muscle tissues, including that of the vasculature, vas deferens and urinary bladder. P2X receptors are also expressed on leukocytes, including lymphocytes and macrophages, and are present on blood platelets. There is some degree of subtype specificity as to which P2X receptor subtypes are expressed on specific cell types, with P2X1 receptors being particularly prominent in smooth muscle cells, and P2X2 being widespread throughout the autonomic nervous system. However, such trends are very general and there is considerable overlap in subunit distribution, with most cell types expressing more than one subunit. For example, P2X2 and P2X3 subunits are commonly found co-expressed in sensory neurons, where they often co-assemble into functional P2X2/3 receptors.
Basic structure and nomenclature
To date, seven separate genes coding for P2X subunits have been identified, and named as P2X1 through P2X7, based on their pharmacological properties.
The proteins of the P2X receptors are quite similar in sequence (>35% identity), but they possess 380-1000 amino acyl residues per subunit with variability in length. The subunits all share a common topology, possessing two transmembrane domains (one about 30-50 residues from their N-termini, the other near residues 320-340), a large extracellular loop and intracellular carboxyl and amino termini (Figure 1). The extracellular receptor domains between these two segments (of about 270 residues) are well conserved, with several conserved glycyl residues and 10 conserved cysteyl residues. The amino termini contain a consensus site for protein kinase C phosphorylation, indicating that the phosphorylation state of P2X subunits may be involved in receptor functioning. Additionally, there is a great deal of variability (25 to 240 residues) in the C termini, indicating that they might serve subunit-specific properties.
Generally speaking, most subunits can form functional homomeric or heteromeric receptors. Receptor nomenclature dictates that naming is determined by the constituent subunits; e.g. a homomeric P2X receptor made up of only P2X1 subunits is called a P2X1 receptor, and a heteromeric receptor containing P2X2 and P2X3 subunits is called a P2X2/3 receptor. The general consensus is that P2X6 cannot form a functional homomeric receptor and that P2X7 cannot form a functional heteromeric receptor.
Topologically, they resemble the epithelial Na+ channel proteins in possessing (a) N- and C-termini localized intracellularly, (b) two putative transmembrane segments, (c) a large extracellular loop domain, and (d) many conserved extracellular cysteyl residues. P2X receptor channels transport small monovalent cations, although some also transport Ca2+.
Evidence from early molecular biological and functional studies has strongly indicated that the functional P2X receptor protein is a trimer, with the three peptide subunits arranged around an ion-permeable channel pore. This view was recently confirmed by the use of X-ray crystallography to resolve the three-dimensional structure of the zebrafish P2X4 receptor (Figure 2). These findings indicate that the second transmembrane domain of each subunit lines the ion-conducting pore and is therefore responsible for channel gating.
The relationship between the structure and function of P2X receptors has been the subject of considerable research using site-directed mutagenesis and chimeric channels, and key protein domains responsible for regulating ATP binding, ion permeation, pore dilation and desensitization have been identified.
Activation and channel opening
Three ATP molecules are thought to be required to activate a P2X receptor, suggesting that ATP needs to bind to each of the three subunits in order to open the channel pore, though recent evidence suggests that ATP binds at the three subunit interfaces. Once ATP binds to the extracellular loop of the P2X receptor, it evokes a conformational change in the structure of the ion channel that results in the opening of the ion-permeable pore. The most commonly accepted theory of channel opening involves the rotation and separation of the second transmembrane domain (TM) helices, allowing cations such as Na+ and Ca2+ to access the ion-conducting pore through three lateral fenestrations above the TM domains. The entry of cations leads to the depolarization of the cell membrane and the activation of various Ca2+-sensitive intracellular processes. The channel opening time is dependent upon the subunit makeup of the receptor. For example, P2X1 and P2X3 receptors desensitize rapidly (a few hundred milliseconds) in the continued presence of ATP, whereas the P2X2 receptor channel remains open for as long as ATP is bound to it.
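A toy illustration of this cooperative activation is a Hill equation with a coefficient of about 3, one for each binding site; the EC50 below is an arbitrary placeholder rather than a measured value for any P2X subtype:
def open_probability(atp_uM, ec50_uM=10.0, hill_n=3.0):
    # Hill-equation sketch of open probability; hill_n ~ 3 reflects the
    # three ATP-binding sites of the trimeric receptor (illustrative only)
    return atp_uM ** hill_n / (ec50_uM ** hill_n + atp_uM ** hill_n)
for conc in (1.0, 10.0, 100.0):
    print(conc, round(open_probability(conc), 3))  # 0.001, 0.5, 0.999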
Transport reaction
The generalized transport reaction is:
Monovalent cations or Ca2+ (out) ⇌ monovalent cations or Ca2+ (in)
Pharmacology
The pharmacology of a given P2X receptor is largely determined by its subunit makeup. Different subunits exhibit different sensitivities to purinergic agonists such as ATP, α,β-meATP and BzATP; and antagonists such as pyridoxalphosphate-6-azophenyl-2',4'-disulfonic acid (PPADS) and suramin. Of continuing interest is the fact that some P2X receptors (P2X2, P2X4, human P2X5, and P2X7) exhibit multiple open states in response to ATP, characterized by a time-dependent increase in the permeability to large organic ions such as N-methyl-D-glucamine (NMDG+) and nucleic acid-binding dyes such as YO-PRO-1 and propidium iodide. Whether this change in permeability is due to a widening of the P2X receptor channel pore itself or the opening of a separate ion-permeable pore is the subject of continued investigation.
Synthesis and trafficking
P2X receptors are synthesized in the rough endoplasmic reticulum. After complex glycosylation in the Golgi apparatus, they are transported to the plasma membrane, where docking is achieved through specific members of the SNARE protein family. A YXXXK motif in the C terminus is common to all P2X subunits and seems to be important for trafficking and stabilization of P2X receptors in the membrane. Removal of P2X receptors occurs via clathrin-mediated endocytosis of receptors to endosomes, where they are sorted into vesicles for degradation or recycling.
Allosteric modulation
The sensitivity of P2X receptors to ATP is strongly modulated by changes in extracellular pH and by the presence of heavy metals (e.g. zinc and cadmium). For example, the ATP sensitivity of P2X1, P2X3 and P2X4 receptors is attenuated when the extracellular pH<7, whereas the ATP sensitivity of P2X2 is significantly increased. On the other hand, zinc potentiates ATP-gated currents through P2X2, P2X3 and P2X4, and inhibits currents through P2X1. The allosteric modulation of P2X receptors by pH and metals appears to be conferred by the presence of histidine side chains in the extracellular domain. In contrast to the other members of the P2X receptor family, P2X4 receptors are also very sensitive to modulation by the macrocyclic lactone, ivermectin. Ivermectin potentiates ATP-gated currents through P2X4 receptors by increasing the open probability of the channel in the presence of ATP, which it appears to do by interacting with the transmembrane domains from within the lipid bilayer.
Subfamilies
P2RX1
P2RX2
P2RX3
P2RX4
P2RX5
P2RX6
P2RX7
Human proteins containing this domain
P2RX1; P2RX2; P2RX3; P2RX4; P2RX5; P2RX7; P2RXL1; TAX1BP3
See also
Ligand-gated ion channels
References
External links
Ivar von Kügelgen: Pharmacology of mammalian P2X- and P2Y-receptors, BIOTREND Reviews No. 03, September 2008,© 2008 BIOTREND Chemicals AG
Ligand-gated ion channel Database (European Bioinformatics Institute)
"The P2X Project"
Ion channels
Ionotropic receptors
Cell signaling
Molecular neuroscience
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | P2X purinoreceptor | [
"Chemistry",
"Biology"
] | 2,209 | [
"Ionotropic receptors",
"Signal transduction",
"Protein classification",
"Membrane proteins",
"Molecular neuroscience",
"Molecular biology",
"Protein families",
"Neurochemistry",
"Ion channels"
] |
5,844,089 | https://en.wikipedia.org/wiki/Safety%20instrumented%20system | In functional safety a safety instrumented system (SIS) is an engineered set of hardware and software controls which provides a protection layer that shuts down a chemical, nuclear, electrical, or mechanical system, or part of it, if a hazardous condition is detected.
Requirement specification
An SIS performs a safety instrumented function (SIF). The SIS is credited with a certain measure of reliability depending on its safety integrity level (SIL). The required SIL is determined from a quantitative process hazard analysis (PHA), such as a Layers of Protection Analysis (LOPA). The SIL requirements are verified during the design, construction, installation, and operation of the SIS. The required functionality may be verified by design reviews, factory acceptance testing, site acceptance testing, and regular functional testing. The PHA is in turn based on a hazard identification exercise. In the process industries (oil and gas production, refineries, chemical plants, etc.), this exercise is usually a hazard and operability study (HAZOP). The HAZOP usually identifies not only the process hazards of a plant (such as release of hazardous materials due to the process operating outside the safe limits of the plant) but also the SIFs protecting the plant from such excursions.
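A minimal sketch of the LOPA arithmetic behind SIL selection, assuming the standard IEC 61508 probability-of-failure-on-demand (PFD) bands for low-demand operation; the frequencies in the example are hypothetical:
def required_pfd(initiating_freq, other_ipl_pfds, tolerable_freq):
    # PFD the SIF must provide so the mitigated frequency meets the target
    mitigated = initiating_freq
    for pfd in other_ipl_pfds:
        mitigated *= pfd
    return tolerable_freq / mitigated
def sil_band(pfd):
    # map an average PFD to a SIL per IEC 61508 (low-demand mode)
    if pfd >= 1e-1:
        return 0  # no SIL credit
    for sil, low in ((1, 1e-2), (2, 1e-3), (3, 1e-4), (4, 1e-5)):
        if pfd >= low:
            return sil
    return 4
# hypothetical scenario: 0.1/yr initiating event, one other IPL with PFD 0.1,
# tolerable frequency 2e-5/yr -> the SIF must achieve PFD <= 2e-3, i.e. SIL 2
pfd = required_pfd(0.1, [0.1], 2e-5)
print(pfd, sil_band(pfd))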
Design
An SIS is intended to perform specific control functions to prevent unsafe process operations when unacceptable or dangerous conditions occur. Because of their criticality, safety instrumented systems must be independent of all other control systems that control the same equipment, in order to ensure SIS functionality is not compromised. An SIS is composed of the same types of control elements (including sensors, logic solvers, actuators and other control equipment) as a Basic Process Control System (BPCS). However, all of the control elements in an SIS are dedicated solely to the proper functioning of the SIS.
The essential characteristic of an SIS is that it must include instruments, which detect that process variables (flow, temperature, pressure etc. in the case of a processing facility) are exceeding preset limits (sensors), a logic solver which processes this information and makes appropriate decisions based on the nature of the signal(s), and final elements which receive the output of the logic solver and take necessary action on the process to achieve a safe state. All these components must function properly for the SIS to perform its SIF. The logic solver may use electrical, electronic or programmable electronic equipment, such as relays, trip amplifiers, or programmable logic controllers. Support systems, such as power, instrument air, and communications, are generally required for SIS operation. The support systems should be designed to provide the required integrity and reliability. One example of SIS is a temperature sensor that provides a signal to a controller, which compares the sensed process temperature to the desired temperature setpoint and sends a signal to an emergency on-off valve actuator which stops the flow of heating fluid to the process if the process temperature exceeds the setpoint by an unsafe margin.
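Reduced to a sketch, the logic solver's job in that temperature example is a comparison driving a fail-safe output; the setpoint and margin below are placeholders, and a real SIS adds voting, diagnostics, and proof testing:
def high_temp_trip(measured_temp_c, setpoint_c=150.0, unsafe_margin_c=10.0):
    # True means the SIF drives the valve to its safe (closed) state;
    # a de-energize-to-trip output would energize the valve only while safe
    return measured_temp_c > setpoint_c + unsafe_margin_c
for t in (148.0, 159.9, 160.1):
    print(t, "TRIP" if high_temp_trip(t) else "ok")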
SIFs are implemented as part of an overall risk reduction strategy which is intended to minimize the likelihood of a previously identified accident that could range from minor equipment damage up to the uncontrolled catastrophic release of energy or materials.
The safe state must be achieved in a sufficiently short amount of time (known as process safety time) to prevent the accident.
International standards
International standard IEC 61511 was published in 2003 to provide guidance to end-users on the application of Safety Instrumented Systems in the process industries. This standard is based on IEC 61508, a generic standard for functional safety including aspects on design, construction, and operation of electrical/electronic/programmable electronic systems. Other industry sectors may also have standards that are based on IEC 61508, such as IEC 62061 (machinery systems), IEC 62425 (for railway signalling systems), IEC 61513 (for nuclear systems), and ISO 26262 (for road vehicles).
Related concepts
Other terms often used in conjunction with and/or to describe safety instrumented systems include:
Critical control system
Protective instrumented system
Equipment protection system
Safety shutdown system
Process shutdown system
Emergency shutdown system
Safety-critical system
Interlock (of which there is a specific domain in railway signalling)
See also
Distributed control system (DCS)
FMEDA
Industrial control systems (ICS)
Plant process and emergency shutdown systems
SCADA
Spurious trip level
References
External links
Center for Chemical Process Safety book, Guidelines for Safe and Reliable Instrumented Protective Systems
Example Safety Requirement Specification (SRS) document
Safety Equipment Reliability Handbook, 4th Edition for use in Safety Instrumented System (SIS) conceptual design verification in the process industry
Process safety
Risk
Safety engineering | Safety instrumented system | [
"Chemistry",
"Engineering"
] | 950 | [
"Chemical process engineering",
"Systems engineering",
"Safety engineering",
"Process safety"
] |
12,282,720 | https://en.wikipedia.org/wiki/Shape-memory%20coupling | Shape-memory coupling is a system for connecting pipes using shape-memory alloys. In its typical form the technique uses an internally ribbed sleeve of alloy such as Tinel (see Nitinol) that is slightly smaller in diameter than the pipes it is to connect. The sleeve is cooled in liquid nitrogen then, in this low-temperature state, mechanically expanded with a mandrel to fit easily over the two pipe ends to be joined. After fitting, it is allowed to rewarm, when the memory effect causes the sleeve to shrink back to its original smaller size, creating a tight joint.
It was first produced in the late 1960s or early 1970s by the Raychem Corporation under the trade name CryoFit. Manufacture of these couplings for aerospace hydraulic connections was later transferred to AMCI (Advanced Metal Components Inc.) and then later to Aerofit Products Inc. Additional products using the same shape-memory alloy technology are produced under Cryolive and CryoFlare trade names.
References
What is the shape memory effect?, Aerofit, Inc.
Metallurgical processes
Smart materials | Shape-memory coupling | [
"Chemistry",
"Materials_science",
"Engineering"
] | 226 | [
"Metallurgical processes",
"Smart materials",
"Materials science",
"Metallurgy"
] |
12,282,750 | https://en.wikipedia.org/wiki/Golay%20cell | The Golay cell is a type of opto-acoustic detector mainly used for infrared spectroscopy. It consists of a gas-filled enclosure with an infrared absorbing material and a flexible diaphragm or membrane. When infrared radiation is absorbed, it heats the gas, causing it to expand. The resulting increase in pressure deforms the membrane. Light reflected off the membrane is detected by a photodiode, and motion of the membrane produces a change in the signal on the photodiode. The concept was originally described in 1947 by Marcel J. E. Golay, after whom it came to be named.
The Golay cell has high sensitivity and a flat response over a very broad range of frequencies. The response time is modest, of order 10 ms. The detector performance is degraded in the presence of mechanical vibrations.
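A back-of-the-envelope sketch of the transduction step, assuming an ideal monatomic fill gas at constant volume; the cell volume, absorbed power, and integration time are invented for illustration:
R = 8.314                   # gas constant, J/(mol K)
P0, T0 = 101325.0, 293.0    # ambient pressure (Pa) and temperature (K)
V = 5e-8                    # sealed cell volume, 50 microlitres (assumed)
power, dt = 1e-6, 0.01      # 1 uW absorbed over a 10 ms response time (assumed)
n = P0 * V / (R * T0)       # moles of gas in the sealed cell
cv = 1.5 * R                # molar heat capacity of an ideal monatomic gas
dT = power * dt / (n * cv)  # temperature rise from the absorbed energy
dP = P0 * dT / T0           # isochoric pressure rise that deflects the membrane
print(f"dT = {dT:.2e} K, dP = {dP:.2e} Pa")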
References
Photodetectors
Infrared spectroscopy | Golay cell | [
"Physics",
"Chemistry",
"Astronomy"
] | 174 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"Infrared spectroscopy",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
12,285,389 | https://en.wikipedia.org/wiki/Dimethoxanate | Dimethoxanate (trade names Cothera, Cotrane, Atuss, Perlatoss, Tossizid) is a cough suppressant of the phenothiazine class.
Side effects
Dimethoxanate may have analgesic, local anesthetic, and central nervous system depressant effects, but it may also produce nausea and vomiting.
Pharmacology
It binds to the sigma-1 receptor in the brain with an IC50 of 41 nM.
Society and culture
Dimethoxanate was introduced in Austria, Belgium, and France in 1911, and in Italy and Spain in 1963.
Approval for marketing in the US was withdrawn by the FDA in 1975 due to lack of evidence of efficacy.
Synthesis
Phenothiazine (1) is reacted with phosgene to give phenothiazine-10-carbonyl chloride [18956-87-1] (2). Further reaction with 2-(2-(dimethylamino)ethoxy)ethanol [1704-62-7] (3) completes the synthesis of dimethoxanate (4).
References
Phenothiazines
Antitussives
Sigma agonists
Carbamates
Ethers
Dimethylamino compounds
Ethanolamines
Abandoned drugs
Withdrawn drugs | Dimethoxanate | [
"Chemistry"
] | 264 | [
"Drug safety",
"Functional groups",
"Organic compounds",
"Ethers",
"Abandoned drugs",
"Withdrawn drugs"
] |
12,285,417 | https://en.wikipedia.org/wiki/Cloperastine | Cloperastine (INN) or cloperastin, in the forms of cloperastine hydrochloride (JAN) (brand names Hustazol, Nitossil, Seki) and cloperastine fendizoate, is an antitussive and antihistamine that is marketed as a cough suppressant in Japan, Hong Kong, and in some European countries. It was first introduced in 1972 in Japan, and then in Italy in 1981.
Side effects
Adverse effects may include sedation, drowsiness, heartburn, and thickening of bronchial secretions.
Pharmacology
The precise mechanism of action of cloperastine is not fully clear, but several different biological activities have been identified for the drug, which include: ligand of the σ1 receptor (Ki = 20 nM) (likely an agonist), GIRK channel blocker (described as "potent"), antihistamine (Ki = 3.8 nM for the H1 receptor), and anticholinergic. It is thought that the latter two properties contribute to side effects, such as sedation and somnolence, while the former two may be involved in or responsible for the antitussive efficacy of cloperastine.
Synthesis
The halogenation of 4-Chlorobenzhydrol [119-56-2] (1) with phosphorus tribromide in tetrachloromethane gives 1-(Bromophenylmethyl)-4-chlorobenzene [948-54-9] (2). Treatment with ethylenechlorohydrin (2-Chloroethanol) [107-07-3] (3) gives 1-(4-Chlorobenzhydryl)oxy-2-chloroethane [5321-46-0] (4). Reaction with piperidine (5) completes the synthesis of Cloperastine (6).
See also
Cough syrup
Noscapine
Codeine; Pholcodine
Dextromethorphan; Dimemorfan
Racemorphan; Dextrorphan; Levorphanol
Butamirate
Pentoxyverine
Tipepidine
Levocloperastine
References
1-Piperidinyl compounds
4-Chlorophenyl compounds
Antitussives
Ethanolamines
Ethers
H1 receptor antagonists
M1 receptor antagonists
M2 receptor antagonists
M3 receptor antagonists
M4 receptor antagonists
M5 receptor antagonists
Potassium channel blockers
Sigma receptor ligands | Cloperastine | [
"Chemistry"
] | 536 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
12,285,519 | https://en.wikipedia.org/wiki/Normethadone | Normethadone (INN, BAN; brand names Ticarda, Cophylac, Dacartil, Eucopon, Mepidon, Noramidone, Normedon, and others), also known as desmethylmethadone or phenyldimazone, is a synthetic opioid analgesic and antitussive agent.
Normethadone is listed under the Single Convention on Narcotic Drugs 1961 and is a Schedule I Narcotic controlled substance in the United States, with a DEA ACSCN of 9635 and an annual manufacturing quota of 2 grams. It has an effective span of action for about 14 days, and is 12 to 20 times stronger than morphine. The salts in use are the hydrobromide (free base conversion ratio 0.785), hydrochloride (0.890), methyliodide (0.675), oxalate (0.766), picrate (0.563), and the 2,6-di-tert-butylnaphthalenedisulphonate (0.480).
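A small sketch applying the listed conversion ratios; multiplying a salt mass by its ratio gives the free-base equivalent:
FREE_BASE_RATIO = {
    "hydrobromide": 0.785,
    "hydrochloride": 0.890,
    "methyliodide": 0.675,
    "oxalate": 0.766,
    "picrate": 0.563,
    "ditertbutylnaphthalenedisulphonate": 0.480,
}
def free_base_mg(salt, salt_mg):
    # convert a salt mass to its normethadone free-base equivalent
    return salt_mg * FREE_BASE_RATIO[salt]
print(free_base_mg("hydrochloride", 10.0))  # 8.9 mg of free base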
See also
Methadone
References
Dimethylamino compounds
Analgesics
Antitussives
Ketones
Mu-opioid receptor agonists
Synthetic opioids | Normethadone | [
"Chemistry"
] | 270 | [
"Ketones",
"Functional groups"
] |
12,285,825 | https://en.wikipedia.org/wiki/Laboratory%20centrifuge | A laboratory centrifuge is a piece of laboratory equipment, driven by a motor, which spins liquid samples at high speed.
There are various types of centrifuges, depending on the size and the sample capacity.
Like all other centrifuges, laboratory centrifuges work by the sedimentation principle, where the centripetal acceleration is used to separate substances of greater and lesser density.
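The strength of that acceleration is usually quoted as relative centrifugal force (RCF), i.e. ω²r expressed in multiples of standard gravity; a sketch with example rotor figures:
import math
def rcf(rpm, radius_m, g=9.80665):
    # relative centrifugal force: omega^2 * r in multiples of g
    omega = 2.0 * math.pi * rpm / 60.0  # angular velocity, rad/s
    return omega ** 2 * radius_m / g
# example: a microcentrifuge rotor of 8.4 cm radius at 13,300 rpm
print(round(rcf(13300, 0.084)))  # roughly 16,600 x g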
Types
There are various types of centrifugation:
Differential centrifugation, often used to separate certain organelles from whole cells for further analysis of specific parts of cells
Isopycnic centrifugation, often used to isolate nucleic acids such as DNA
Sucrose gradient centrifugation, often used to purify enveloped viruses and ribosomes, and also to separate cell organelles from crude cellular extracts
There are different types of laboratory centrifuges:
Microcentrifuges: devices for small tubes from 0.2 ml to 2.0 ml (micro tubes), up to 96 well-plates; compact design with a small footprint; up to 30,000 g
Clinical centrifuges: moderate-speed devices used for clinical applications like blood collection tubes
Multipurpose high-speed centrifuges: devices for a broad range of tube sizes; high variability; big footprint
Ultracentrifuges: analytical and preparative models
Because of the heat generated by air friction (even in ultracentrifuges, where the rotor operates in a good vacuum), and the frequent necessity of maintaining samples at a given temperature, many types of laboratory centrifuges are refrigerated and temperature regulated.
Centrifuge tubes
Centrifuge tubes are precision-made, high-strength tubes of glass or plastic made to fit exactly in rotor cavities. They may vary in capacity from 50 mL down to the much smaller capacities used in microcentrifuges, which are employed extensively in molecular biology laboratories. Microcentrifuges typically accommodate disposable plastic microcentrifuge tubes with capacities from 250 μL to 2.0 mL.
Glass centrifuge tubes can be used with most solvents, but tend to be more expensive. They can be cleaned like other laboratory glassware, and can be sterilized by autoclaving. Small scratches from careless handling can cause failure under the strong forces imposed during a run. Glass tubes are inserted into soft rubber sleeves to cushion them during runs. Plastic centrifuge tubes tend to be less expensive and, with care, can be just as durable as glass. Because many plastics are attacked by organic solvents, aqueous samples are preferred when plastic centrifuge tubes are used. They are more difficult to clean thoroughly, and are usually inexpensive enough to be considered disposable.
Disposable plastic "microlitre tubes" of 0.5 ml to 2 ml capacity are commonly used in microcentrifuges. They are molded from a flexible transparent plastic similar to polythene, are semi-conical in shape, with integral, hinged sealing caps.
Larger samples are spun using centrifuge bottles, which range in capacity from 250 to 1000 millilitres. Although some are made of heavy glass, centrifuge bottles are usually made of shatterproof plastics such as polypropylene or polycarbonate. Sealing closures may be used for added leak-proof assurance.
Safety
The load in a laboratory centrifuge must be carefully balanced. This is achieved by using a combination of samples and balance tubes which all have the same weight or by using various balancing patterns without balance tubes. It is an interesting mathematical problem to determine whether a balanced pattern exists for n slots and k tubes of equal weight. It is known that a solution exists if and only if both k and n−k can be expressed as a sum of prime factors of n. Small differences in mass of the load can result in a large force imbalance when the rotor is at high speed. This force imbalance strains the spindle and may result in damage to the centrifuge or personal injury. Some centrifuges have an automatic rotor imbalance detection feature that immediately discontinues the run when an imbalance is detected.
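A small sketch of that criterion, treating the prime factors of n as coin denominations in a subset-sum check (k = 0 counts as the empty sum):
def prime_factors(n):
    # distinct prime factors of n
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors
def sum_of_prime_factors(k, primes):
    # can k be written as a sum of the given primes, with repetition?
    reachable = [False] * (k + 1)
    reachable[0] = True
    for i in range(1, k + 1):
        reachable[i] = any(i >= p and reachable[i - p] for p in primes)
    return reachable[k]
def balanceable(n, k):
    primes = prime_factors(n)
    return sum_of_prime_factors(k, primes) and sum_of_prime_factors(n - k, primes)
print(balanceable(12, 7))  # True: 7 = 2+2+3 and 5 = 2+3
print(balanceable(12, 1))  # False: a single tube cannot be balanced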
Before starting a centrifuge, an accurate check of the rotor and lid locking mechanisms is mandatory. A spinning rotor can cause serious injury if touched. Modern centrifuges generally have features that prevent accidental contact with a moving rotor as the main lid is locked during the run.
Centrifuge rotors have tremendous kinetic energy during high speed rotation. Rotor failure, caused by mechanical stress from the high forces imparted by the motor, can occur due to manufacturing defects, routine wear and tear, or improper use and maintenance. Such a failure can be catastrophic, especially with larger centrifuges, and generally results in total destruction of the centrifuge. While centrifuges generally have safety shielding to contain these failures, such shielding may be inadequate, especially in older models, or the entire centrifuge unit may be propelled from its position, resulting in damage to nearby personnel and equipment. Uncontained rotor failures have shattered laboratory windows and destroyed refrigerators and cabinetry. To reduce the risk of rotor failures, centrifuge manufacturers specify operating and maintenance procedures to ensure that rotors are regularly inspected and removed from service or derated (only operated at lower speeds) when they are past their expected lifetime.
Another potential hazard is the aerosolization of hazardous samples during centrifugation. To prevent contamination of the laboratory, rotor lids with special aerosol-tight gaskets are available. The rotor can be loaded with the samples within a hood and the rotor lid fixed on the rotor. Afterwards, the aerosol-tight system of rotor and lid is transferred to the centrifuge. The rotor can then be fixed within the centrifuge without opening the lid. After the run, the entire rotor assembly, including the lid, is removed from the centrifuge to the hood for further steps, maintaining the samples within a closed system.
See also
Ultracentrifuge
Separation
Cytocentrifuge
References
Centrifuges
Laboratory equipment
Chemical equipment | Laboratory centrifuge | [
"Chemistry",
"Engineering"
] | 1,257 | [
"Chemical equipment",
"Centrifugation",
"Centrifuges",
"nan"
] |
12,286,766 | https://en.wikipedia.org/wiki/C5H12O5 | The molecular formula C5H12O5 (molar mass: 152.14 g/mol, exact mass: 152.0685 u) may refer to:
Arabitol or arabinitol
Ribitol, or adonitol
Xylitol | C5H12O5 | [
"Chemistry"
] | 72 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
12,287,117 | https://en.wikipedia.org/wiki/Rubble%20masonry | Rubble masonry or rubble stone is rough, uneven building stone not laid in regular courses. It may fill the core of a wall which is faced with unit masonry such as brick or ashlar. Some medieval cathedral walls have outer shells of ashlar with an inner backfill of mortarless rubble and dirt.
Square rubble masonry
Square rubble masonry consists of stones that are dressed (squared on all joints and beds) before laying, set in mortar, and make up the outer surface of a wall.
History
Irregular rubble masonry, or sack masonry, evolved from embankments covered with boards, stones or bricks. That outer surface was used to give the embankment greater strength and make it more difficult for enemies to climb. The Sadd el-Khafara dam, in Wadi Al-Garawi near Helwan in Egypt, which is 14 meters high and built in rubble masonry, dates back to 2900–2600 BC.
The Greeks called the construction technique emplekton and made particular use of it in the construction of the defensive walls of their poleis.
The Romans made extensive use of rubble masonry, calling it opus caementicium, because caementicium was the name given to the filling between the two revetments. The technique continued to be used over the centuries, as evidenced by the constructions of defensive walls and large works during medieval times.
Modern construction frequently uses cast concrete with an internal steel reinforcement. That allows for greater elasticity, as well as providing excellent static and seismic resistance, and preserves the unity between shape and structure typical of buildings with external load-bearing walls. All the structural elements can be linked to any rubble walls thus created, freeing the internal spaces from excessive constraints.
See also
Gabion—Metal cages filled with stones
Snecked masonry—Masonry made of mixed sizes of stone but in regular courses
Wattle and daub—Conceptually analogous to rubble within ashlar in the sense that a frame is filled in with a filler material
References
Building stone
Stonemasonry | Rubble masonry | [
"Engineering"
] | 406 | [
"Architecture stubs",
"Stonemasonry",
"Construction",
"Architecture"
] |
12,291,673 | https://en.wikipedia.org/wiki/Current%20density%20imaging | Current density imaging (CDI) is an extension of magnetic resonance imaging (MRI), developed at the University of Toronto. It employs two techniques for spatially mapping electric current pathways through tissue:
LF-CDI, low-frequency CDI, the original implementation developed at the University of Toronto. In this technique, low frequency (LF) electric currents are injected into the tissue. These currents generate magnetic fields, which are then measured using MRI techniques. The current pathways are then computed and spatially mapped, as sketched after this list.
RF-CDI, radio frequency CDI, a rotating frame of reference version of LF-CDI. This allows measurement of a single component of current density, without requiring subject rotation. The high frequency current that is injected into tissue also does not cause the muscle twitching often encountered using LF-CDI, allowing in-vivo measurements on human subjects.
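Both variants ultimately recover current density from measured magnetic flux density maps through Ampère's law, J = ∇ × B / μ0 (neglecting displacement current). A minimal finite-difference sketch of that reconstruction, assuming all three field components are available on a regular grid, which in LF-CDI typically requires imaging the subject in several orientations:
import numpy as np
MU0 = 4e-7 * np.pi  # vacuum permeability, T m/A
def current_density(Bx, By, Bz, dx, dy, dz):
    # J = curl(B) / mu0 on arrays indexed [x, y, z]
    Jx = (np.gradient(Bz, dy, axis=1) - np.gradient(By, dz, axis=2)) / MU0
    Jy = (np.gradient(Bx, dz, axis=2) - np.gradient(Bz, dx, axis=0)) / MU0
    Jz = (np.gradient(By, dx, axis=0) - np.gradient(Bx, dy, axis=1)) / MU0
    return Jx, Jy, Jz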
See also
Magnetic resonance imaging
References
External links
Current Density Imaging page at the University of Toronto
Magnetic resonance imaging
Medical imaging | Current density imaging | [
"Chemistry"
] | 201 | [
"Magnetic resonance imaging",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance stubs"
] |
7,635,266 | https://en.wikipedia.org/wiki/Krylov%E2%80%93Bogolyubov%20theorem | In mathematics, the Krylov–Bogolyubov theorem (also known as the existence of invariant measures theorem) may refer to either of the two related fundamental theorems within the theory of dynamical systems. The theorems guarantee the existence of invariant measures for certain "nice" maps defined on "nice" spaces and were named after Russian-Ukrainian mathematicians and theoretical physicists Nikolay Krylov and Nikolay Bogolyubov who proved the theorems.
Formulation of the theorems
Invariant measures for a single map
Theorem (Krylov–Bogolyubov). Let X be a compact, metrizable topological space and F : X → X a continuous map. Then F admits an invariant Borel probability measure.
That is, if Borel(X) denotes the Borel σ-algebra generated by the collection T of open subsets of X, then there exists a probability measure μ : Borel(X) → [0, 1] such that for any subset A ∈ Borel(X),
μ(F⁻¹(A)) = μ(A).
In terms of the push forward, this states that
F∗(μ) = μ.
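The proof obtains μ as a weak-* limit of time-averaged empirical measures along an orbit; a numerical illustration of that averaging for the logistic map, whose invariant density 1/(π√(x(1−x))) is known in closed form:
import numpy as np
def empirical_measure(f, x0, n_iter, bins=20):
    # histogram approximation to (1/N) * sum of Dirac masses along an orbit
    xs = np.empty(n_iter)
    x = x0
    for i in range(n_iter):
        xs[i] = x
        x = f(x)
    hist, edges = np.histogram(xs, bins=bins, range=(0.0, 1.0), density=True)
    return hist, edges
f = lambda x: 4.0 * x * (1.0 - x)  # logistic map on [0, 1]
hist, edges = empirical_measure(f, 0.1234, 200_000)
mids = 0.5 * (edges[:-1] + edges[1:])
exact = 1.0 / (np.pi * np.sqrt(mids * (1.0 - mids)))
# close for interior bins; the endpoint bins feel the integrable singularities
print(np.abs(hist - exact)[1:-1].max())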
Invariant measures for a Markov process
Let X be a Polish space and let (Pt), t ≥ 0, be the transition probabilities for a time-homogeneous Markov semigroup on X, i.e.
Pr[Xt ∈ A | X0 = x] = Pt(x, A).
Theorem (Krylov–Bogolyubov). If there exists a point x ∈ X for which the family of probability measures { Pt(x, ·) | t > 0 } is uniformly tight and the semigroup (Pt) satisfies the Feller property, then there exists at least one invariant measure for (Pt), i.e. a probability measure μ on X such that
(Pt)∗(μ) = μ for all t > 0.
See also
For the 1st theorem: Ya. G. Sinai (Ed.) (1997): Dynamical Systems II. Ergodic Theory with Applications to Dynamical Systems and Statistical Mechanics. Berlin, New York: Springer-Verlag. (Section 1).
For the 2nd theorem: G. Da Prato and J. Zabczyk (1996): Ergodicity for Infinite Dimensional Systems. Cambridge Univ. Press. (Section 3).
Notes
Ergodic theory
Theorems in dynamical systems
Probability theorems
Random dynamical systems
Theorems in measure theory | Krylov–Bogolyubov theorem | [
"Mathematics"
] | 457 | [
"Theorems in dynamical systems",
"Theorems in mathematical analysis",
"Theorems in measure theory",
"Random dynamical systems",
"Ergodic theory",
"Theorems in probability theory",
"Mathematical problems",
"Mathematical theorems",
"Dynamical systems"
] |
7,635,675 | https://en.wikipedia.org/wiki/Bubble%20raft | A bubble raft is an array of bubbles. It demonstrates materials' microstructural and atomic length-scale behavior by modelling the {111} plane of a close-packed crystal. A material's observable and measurable mechanical properties strongly depend on its atomic and microstructural configuration and characteristics. This fact is intentionally ignored in continuum mechanics, which assumes a material to have no underlying microstructure and be uniform and semi-infinite throughout.
Bubble rafts assemble bubbles on a water surface, often with the help of amphiphilic soaps. These assembled bubbles act like atoms, diffusing, slipping, ripening, straining, and otherwise deforming in a way that models the behavior of the {111} plane of a close-packed crystal. The ideal (lowest energy) state of the assembly would undoubtedly be a perfectly regular single crystal, but just as in metals, the bubbles often form defects, grain boundaries, and multiple crystals.
History of bubble rafts
The concept of bubble raft modelling was first presented in 1947 by Nobel Laureate Sir William Lawrence Bragg and John Nye of Cambridge University's Cavendish Laboratory in Proceedings of the Royal Society A. Legend claims that Bragg conceived of bubble raft models while pouring oil into his lawn mower. He noticed that bubbles on the surface of the oil assembled into rafts resembling the {111} plane of close-packed crystals. Nye and Bragg later presented a method of generating and controlling bubbles on the surface of a glycerine-water-oleic acid-triethanolamine solution, in assemblies of 100,000 or more sub-millimeter sized bubbles. In their paper, they go on at length about the microstructural phenomena observed in bubble rafts and hypothesized in metals.
Dynamics
Bubble rafts exhibit complex dynamics. Rupture of a first bubble, driven by thermal fluctuations, can trigger a cascade of subsequent bursting bubbles, which can give rise to self-organized criticality and a power-law distribution of avalanches.
Relation to crystal lattices
In deforming a crystal lattice, one changes the energy and the interatomic potential felt by the atoms of the lattice. This interatomic potential is popularly (and mostly qualitatively) modeled using the Lennard-Jones potential, which consists of a balance between attractive and repulsive forces between atoms.
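As context for the interbubble potential discussed next, a sketch of the Lennard-Jones form with the attractive and repulsive terms separated (ε and σ are arbitrary reduced units):
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    # V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]: repulsion plus attraction
    sr6 = (sigma / r) ** 6
    repulsive = 4.0 * epsilon * sr6 ** 2
    attractive = -4.0 * epsilon * sr6
    return repulsive + attractive
# the minimum sits at r = 2^(1/6) * sigma with depth -epsilon
r_min = 2.0 ** (1.0 / 6.0)
print(round(lennard_jones(r_min), 12))  # -1.0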
The "atoms" in Bubble Rafts also exhibit such attractive and repulsive forces:
The portion of the equation to the left of the plus sign is the attractive force, and the portion to the right represents the repulsive force.
is the interbubble potential
is the average bubble radius
is the density of the solution from which the bubbles are formed
is the gravitational constant
is the ratio of the distance between bubbles to the bubble radius
is the radius of ring contact
is the ratio R/a of the bubble radius to the Laplace constant a, where
is the surface tension
is a constant dependent upon the boundary conditions of the calculation
is a zeroth-order modified Bessel function of the second kind.
Bubble rafts can display numerous phenomena seen in the crystal lattice. This includes such things as point defects (vacancies, substitutional impurities, interstitial atoms), edge dislocations and grains. A screw dislocation cannot be modeled in a 2D bubble raft because it extends outside the plane. It is even possible to replicate some microstructural treatments such as annealing. The annealing process is simulated by stirring the bubble raft. This anneals out the dislocations (recovery) and promotes recrystallization.
References
Materials science | Bubble raft | [
"Physics",
"Materials_science",
"Engineering"
] | 752 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
7,640,211 | https://en.wikipedia.org/wiki/Map%20algebra | Map algebra is an algebra for manipulating geographic data, primarily fields. Developed by Dr. Dana Tomlin and others in the late 1970s, it is a set of primitive operations in a geographic information system (GIS) which allows one or more raster layers ("maps") of similar dimensions to produce a new raster layer (map) using mathematical or other operations such as addition, subtraction etc.
History
Prior to the advent of GIS, the overlay principle had developed as a method of literally superimposing different thematic maps (typically an isarithmic map or a chorochromatic map) drawn on transparent film (e.g., cellulose acetate) to see the interactions and find locations with specific combinations of characteristics. The technique was largely developed by landscape architects and city planners, starting with Warren Manning and further refined and popularized by Jaqueline Tyrwhitt, Ian McHarg and others during the 1950s and 1960s.
In the mid-1970s, landscape architecture student C. Dana Tomlin developed some of the first tools for overlay analysis in raster as part of the IMGRID project at the Harvard Laboratory for Computer Graphics and Spatial Analysis, which he eventually transformed into the Map Analysis Package (MAP), a popular raster GIS during the 1980s. While a graduate student at Yale University, Tomlin and Joseph K. Berry re-conceptualized these tools as a mathematical model, which by 1983 they were calling "map algebra." This effort was part of Tomlin's development of cartographic modeling, a technique for using these raster operations to implement the manual overlay procedures of McHarg. Although the basic operations were defined in his 1983 PhD dissertation, Tomlin had refined the principles of map algebra and cartographic modeling into their current form by 1990. Although the term cartographic modeling has not gained as wide an acceptance as synonyms such as suitability analysis, suitability modeling and multi-criteria decision making, "map algebra" became a core part of GIS. Because Tomlin released the source code to MAP, its algorithms were implemented (with varying degrees of modification) as the analysis toolkit of almost every raster GIS software package starting in the 1980s, including GRASS, IDRISI (now TerrSet), and the GRID module of ARC/INFO (later incorporated into the Spatial Analyst module of ArcGIS).
This widespread implementation further led to the development of many extensions to map algebra, following efforts to extend the raster data model, such as adding new functionality for analyzing spatiotemporal and three-dimensional grids.
Map algebra operations
Like other algebraic structures, map algebra consists of a set of objects (the domain) and a set of operations that manipulate those objects with closure (i.e., the result of an operation is itself in the domain, not something completely different). In this case, the domain is the set of all possible "maps," which are generally implemented as raster grids. A raster grid is a two-dimensional array of cells (Tomlin called them locations or points), each cell occupying a square area of geographic space and being coded with a value representing the measured property of a given geographic phenomenon (usually a field) at that location. Each operation 1) takes one or more raster grids as inputs, 2) creates an output grid with matching cell geometry, 3) scans through each cell of the input grid (or spatially matching cells of multiple inputs), 4) performs the operation on the cell value(s), and writes the result to the corresponding cell in the output grid. Originally, the inputs and the output grids were required to have the identical cell geometry (i.e., covering the same spatial extent with the same cell arrangement, so that each cell corresponds between inputs and outputs), but many modern GIS implementations do not require this, performing interpolation as needed to derive values at corresponding locations.
Tomlin classified the many possible map algebra operations into three types, to which some systems add a fourth:
Local Operators
Operations that operate on one cell location at a time during the scan phase. A simple example would be an arithmetic operator such as addition: to compute MAP3 = MAP1 + MAP2, the software scans through each matching cell of the input grids, adds the numeric values in each using normal arithmetic, and puts the result in the matching cell of the output grid. Due to this decomposition of operations on maps into operations on individual cell values, any operation that can be performed on numbers (e.g., arithmetic, statistics, trigonometry, logic) can be performed in map algebra. For example, a LocalMean operator would take in two or more grids and compute the arithmetic mean of each set of spatially corresponding cells. In addition, a range of GIS-specific operations has been defined, such as reclassifying a large range of values to a smaller range of values (e.g., 45 land cover categories to 3 levels of habitat suitability), which dates to the original IMGRID implementation of 1975. A common use of local functions is for implementing mathematical models, such as an index, that are designed to compute a resultant value at a location from a set of input variables.
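A sketch of local operations using NumPy arrays as stand-ins for raster layers; the grids and class breaks are invented:
import numpy as np
map1 = np.array([[1.0, 2.0], [3.0, 4.0]])
map2 = np.array([[10.0, 20.0], [30.0, 40.0]])
map3 = map1 + map2                # local addition, cell by cell
local_mean = (map1 + map2) / 2.0  # LocalMean of two layers
# local reclassification: v < 2 -> class 1, 2 <= v < 3 -> class 2, v >= 3 -> class 3
breaks = [2.0, 3.0]
reclass = np.digitize(map1, breaks) + 1
print(map3, reclass, sep="\n")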
Focal Operators
Functions that operate on a geometric neighborhood around each cell. A common example is calculating slope from a grid of elevation values. Looking at a single cell, with a single elevation, it is impossible to judge a trend such as slope. Thus, the slope of each cell is computed from the value of the corresponding cell in the input elevation grid and the values of its immediate neighbors. Other functions allow for the size and shape of the neighborhood (e.g. a circle or square of arbitrary size) to be specified. For example, a FocalMean operator could be used to compute the mean value of all the cells within 1000 meters (a circle) of each cell.
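A sketch of two focal operations, a 3×3 focal mean and a finite-difference slope estimate; the elevation grid and cell size are invented:
import numpy as np
from scipy import ndimage
elev = np.array([[10., 10., 10., 10.],
                 [10., 11., 12., 12.],
                 [10., 12., 14., 14.],
                 [10., 12., 14., 16.]])
cell = 30.0  # cell size in metres
# FocalMean: average of each cell and its 3x3 neighbourhood
focal_mean = ndimage.uniform_filter(elev, size=3, mode="nearest")
# slope from the elevation gradient around each cell
dz_dy, dz_dx = np.gradient(elev, cell)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print(focal_mean.round(2), slope_deg.round(1), sep="\n")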
Zonal Operators
Functions that operate on regions of identical value. These are commonly used with discrete fields (also known as categorical coverages), where space is partitioned into regions of homogeneous nominal or categorical value of a property such as land cover, land use, soil type, or surface geologic formation. Unlike local and focal operators, zonal operators do not operate on each cell individually; instead, all of the cells of a given value are taken as input to a single computation, with identical output being written to all of the corresponding cells. For example, a ZonalMean operator would take in two layers, one with values representing the regions (e.g., dominant vegetation species) and another of a related quantitative property (e.g., percent canopy cover). For each unique value found in the former grid, the software collects all of the corresponding cells in the latter grid, computes the arithmetic mean, and writes this value to all of the corresponding cells in the output grid.
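A sketch of a zonal mean over an invented zone grid (e.g. vegetation classes) and value grid (e.g. percent canopy cover):
import numpy as np
zones = np.array([[1, 1, 2],
                  [1, 2, 2],
                  [3, 3, 3]])
values = np.array([[40., 60., 10.],
                   [50., 20., 30.],
                   [80., 90., 70.]])
zonal_mean = np.zeros_like(values)
for z in np.unique(zones):
    # every cell of a zone receives that zone's mean value
    zonal_mean[zones == z] = values[zones == z].mean()
print(zonal_mean)  # zone 1 -> 50, zone 2 -> 20, zone 3 -> 80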
Global Operators
Functions that summarize the entire grid. These were not included in Tomlin's work, and are not technically part of map algebra, because the result of the operation is not a raster grid (i.e., it is not closed), but a single value or summary table. However, they are useful to include in the general toolkit of operations. For example, a GlobalMean operator would compute the arithmetic mean of all of the cells in the input grid and return a single mean value. Some also consider operators that generate a new grid by evaluating patterns across the entire input grid as global, which could be considered part of the algebra. An example of these are the operators for evaluating cost distance.
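A sketch of a global summary, which returns a scalar rather than a new grid:
import numpy as np
grid = np.array([[1.0, 2.0], [3.0, 4.0]])
global_mean = grid.mean()  # GlobalMean: a single value for the whole layer
print(global_mean)  # 2.5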
Implementation
Several GIS software packages implement map algebra concepts, including ERDAS Imagine, QGIS, GRASS GIS, TerrSet, PCRaster, and ArcGIS.
In Tomlin's original formulation of cartographic modeling in the Map Analysis Package, he designed a simple procedural language around the algebra operators to allow them to be combined into a complete procedure with additional structures such as conditional branching and looping. However, in most modern implementations, map algebra operations are typically one component of a general procedural processing system, such as a visual modeling tool or a scripting language. For example, ArcGIS implements Map Algebra in both its visual ModelBuilder tool and in Python. Here, Python's overloading capability allows simple operators and functions to be used for raster grids. For example, rasters can be multiplied using the same "*" arithmetic operator used for multiplying numbers.
Here are some examples in MapBasic, the scripting language for MapInfo Professional:
# demo for Brown's Pond data set
# Given layers:
# altitude
# development – 0: vacant, 1: major, 2: minor, 3: houses, 4: buildings, 5: cement
# water – 0: dry, 2: wet, 3: pond
# calculate the slope at each location based on altitude
slope = IncrementalGradient of altitude
# identify the areas that are too steep
toosteep = LocalRating of slope
where 1 replaces 4 5 6
where VOID replaces ...
# create layer unifying water and development
occupied = LocalRating of development
where water replaces VOID
# keep only locations that are neither occupied nor too steep
notbad = LocalRating of occupied and toosteep
where 1 replaces VOID and VOID
where VOID replaces ... and ...
# extract the major and minor roads from the development layer
roads = LocalRating of development
where 1 replaces 1 2
where VOID replaces ...
# flag locations within 10 cells of a road
nearroad = FocalNeighbor of roads at 0 ... 10
# find south-facing slopes (aspect between 135 and 225 degrees)
aspect = IncrementalAspect of altitude
southface = LocalRating of aspect
where 1 replaces 135 ... 225
where VOID replaces ...
# candidate sites must be near a road, south-facing, and otherwise suitable
sites = LocalMinimum of nearroad and southface and notbad
# assign a unique number to each contiguous candidate site
sitenums = FocalInsularity of sites at 0 ... 1
# measure the size of each numbered site in cells
sitesize = ZonalSum of 1 within sitenums
# keep only sites between 100 and 300 cells in size
bestsites = LocalRating of sitesize
where sitesize replaces 100 ... 300
where VOID replaces ...
External links
osGeo-RFC-39 about Layer Algebra
References
B. E. Davis GIS: A Visual Approach (2001 Cengage Learning) pp. 249ff.
Geographic information systems
Applied mathematics
Algebra
Geographic information science
Spatial analysis | Map algebra | [
"Physics",
"Mathematics",
"Technology"
] | 2,094 | [
"Applied mathematics",
"Spatial analysis",
"Information systems",
"Space",
"Spacetime",
"Geographic information systems",
"Algebra"
] |
7,642,705 | https://en.wikipedia.org/wiki/Ionic%20atmosphere | An ionic atmosphere is a concept employed in Debye–Hückel theory which explains the electrolytic conductivity behaviour of solutions. It can be generally defined as the region around a charged entity within which it attracts entities of the opposite charge.
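The extent of the ionic atmosphere is characterized by the Debye screening length; a sketch of the standard expression for a fully dissociated 1:1 electrolyte, with defaults assuming water near room temperature:
import math
def debye_length_m(ionic_strength_M, T=298.15, eps_r=78.5):
    # Debye length in metres for ionic strength given in mol/L
    e = 1.602176634e-19      # elementary charge, C
    k_B = 1.380649e-23       # Boltzmann constant, J/K
    N_A = 6.02214076e23      # Avogadro constant, 1/mol
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    I = ionic_strength_M * 1000.0  # convert to mol/m^3
    return math.sqrt(eps_r * eps0 * k_B * T / (2.0 * N_A * e ** 2 * I))
print(debye_length_m(0.1) * 1e9)  # ~0.96 nm for a 0.1 M 1:1 electrolyte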
Asymmetry, or relaxation effect
If an electrical potential is applied to an electrolytic solution, a positive ion will move towards the negative electrode and drag along an entourage of negative ions with it. The more concentrated the solution, the closer these negative ions are to the positive ion and thus the greater the resistance experienced by the positive ion. This influence on the speed of an ion is known as the "Asymmetry effect" because the ionic atmosphere moving around the ion is not symmetrical; the charge density behind is greater than in the front, slowing the motion of the ion. The time required to form a new ionic atmosphere on the right, or the time required for the ionic atmosphere on the left to fade away, is known as the time of relaxation. This asymmetry of the ionic atmosphere does not develop in the Debye–Falkenhagen effect, because at sufficiently high frequencies the applied field reverses before the atmosphere can distort, and the conductivity becomes frequency dependent.
Electrophoretic effect
This is another factor which slows the motion of ions within a solution. It is the tendency of the applied potential to move the ionic atmosphere itself. This drags the solvent molecules along because of the attractive forces between ions and solvent molecules. As a result, the central ion at the centre of the ionic atmosphere is influenced to move towards the pole opposite its ionic atmosphere. This inclination retards its motion.
Limits to the model
The model of ionic atmosphere is less adequate for concentrated ionic solutions near saturation. These solutions as well as molten salts or ionic liquids have a structure similar to the crystalline lattice where water molecules are located between ions.
References
Analytical chemistry
Physical chemistry | Ionic atmosphere | [
"Physics",
"Chemistry"
] | 371 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
7,645,050 | https://en.wikipedia.org/wiki/Plasma%20speaker | Plasma speakers or ionophones are a form of loudspeaker which varies air pressure via an electrical plasma instead of a solid diaphragm. The plasma arc heats the surrounding air causing it to expand. Varying the electrical signal that drives the plasma and connected to the output of an audio amplifier, the plasma size varies which in turn varies the expansion of the surrounding air creating sound waves.
The plasma is typically in the form of a glow discharge and acts as a massless radiating element. The technique is a much later development of physics principles demonstrated by William Duddell's "singing arc" of 1900; Hermann Theodor Simon had published the same phenomenon in 1898.
The term ionophone was used by Dr. Siegfried Klein who developed a plasma tweeter that was licensed for commercial production by DuKane with the Ionovac and Fane Acoustics with the Ionofane in the late 1940s and 1950s.
The effect takes advantage of several physical principles: First, ionization of a gas creates a highly conductive plasma, which responds to alternating electric and magnetic fields. Second, this low-density plasma has a negligibly small mass. Thus, the air remains mechanically coupled with the essentially massless plasma, allowing it to radiate a nearly ideal reproduction of the sound source when the electric or magnetic field is modulated with the audio signal.
Comparison to conventional loudspeakers
Conventional loudspeaker transducer designs use the input electrical audio frequency signal to vibrate a significant mass: In a dynamic loudspeaker this driver is coupled to a stiff speaker cone—a diaphragm which pushes air at audio frequencies. But the inertia inherent in its mass resists acceleration—and all changes in cone position. Additionally, speaker cones will eventually suffer tensile fatigue from the repeated shaking of sonic vibration.
Thus conventional speaker output, or the fidelity of the device, is distorted by physical limitations inherent in its design. These distortions have long been the limiting factor in commercial reproduction of strong high frequencies. To a lesser extent square wave characteristics are also problematic; the reproduction of square waves most stress a speaker cone.
In a plasma speaker, as a member of the family of massless speakers, these limitations do not exist. The low-inertia driver has exceptional transient response compared to other designs. The result is an even output, accurate even at higher frequencies beyond the human audible range. Such speakers are notable for accuracy and clarity, but not for lower frequencies: the ionized region has so little mass that it cannot move large volumes of air unless the plasma is generated at a much larger scale. These designs are therefore most effective as tweeters.
Practical considerations
Plasma speaker designs ionize ambient air which contains the gases nitrogen and oxygen. In an intense electrical field these gases can produce reactive by-products, and in closed rooms these can reach a hazardous level. The two predominant gases produced are ozone and nitrogen dioxide.
Plasmatronics produced a commercial plasma speaker that used a helium tank to provide the ionization gas. In 1978 Alan E. Hill of the Air Force Weapons Laboratory in Albuquerque, NM, designed the Plasmatronics Hill Type I, a commercial helium-plasma tweeter. This avoided the ozone and nitrogen oxides produced by radio frequency decomposition of air in earlier generations of plasma tweeters. But the operation of such speakers requires a continuous supply of helium.
In the 1950s, the pioneering DuKane Corporation produced the air-ionizing Ionovac, marketed in the UK as the Ionophone. Currently there remain manufacturers in Germany who use this design, as well as many do-it-yourself designs available on the Internet.
To make the plasma speaker a more widely available product, ExcelPhysics, a Seattle-based company, and Images Scientific Instruments, a New York-based company, both offered their own variant of the plasma speaker as a DIY kit. The ExcelPhysics variant used a flyback transformer to step up voltage, a 555 timing chip to provide modulation and a 44 kHz carrier signal, and an audio amplifier. The kit is no longer marketed.
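In such kits the audio amplitude-modulates the ultrasonic carrier; a sketch of the drive waveform in software (the tone, modulation depth, and sample rate are illustrative, and the real circuit does this with the 555 timer and flyback transformer rather than in code):
import numpy as np
fs = 192_000                      # sample rate, Hz (must exceed 2x the carrier)
t = np.arange(0, 0.01, 1.0 / fs)  # 10 ms of signal
audio = np.sin(2 * np.pi * 440 * t)        # 440 Hz test tone
carrier = np.sin(2 * np.pi * 44_000 * t)   # 44 kHz carrier, as in the kit
m = 0.8                                    # modulation depth
drive = (1.0 + m * audio) * carrier        # amplitude-modulated drive signal
print(drive[:5])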
A flame speaker uses a modulated flame for the driver and could be considered related to the plasma loudspeaker. This was explored using the combustion of natural gas or candles to produce a plasma through which current is then passed. These combustion designs do not require high voltages to generate a plasma field, but there has been no commercial products using them.
A similar effect is occasionally observed in the vicinity of high-power amplitude-modulated radio transmitters when a corona discharge (inadvertently) occurs from the transmitting antenna, where voltages in the tens of thousands volts are involved. The ionized air is heated in direct relationship to the modulating signal with surprisingly high fidelity over a wide area. Due to the destructive effects of the (self-sustaining) discharge this cannot be permitted to persist, and automatic systems momentarily shut down transmission within a few seconds to quench the "flame".
See also
Singing Tesla coil
References
External links
William Duddell
Ionovac
Magnetic propulsion devices
Loudspeakers
Transducers | Plasma speaker | [
"Physics"
] | 1,035 | [
"Plasma technology and applications",
"Plasma physics"
] |
16,720,569 | https://en.wikipedia.org/wiki/Alkanolamine | In organic chemistry, alkanolamines (amino alcohols) are organic compounds that contain both hydroxyl () and amino (, , and ) functional groups on an alkane backbone. Most alkanolamines are colorless.
1-Aminoalcohols
1-Aminoalcohols are better known as hemiaminals. Methanolamine is the simplest member.
2-Aminoalcohols
2-Aminoalcohols are an important class of organic compounds that are often generated by the reaction of amines with epoxides, for example: R2NH + C2H4O → R2NCH2CH2OH.
Simple alkanolamines are used as solvents, synthetic intermediates, and high-boiling bases.
Hydrogenation or hydride reduction of amino acids gives the corresponding 2-aminoalcohols. Examples include prolinol (from proline), valinol (from valine), tyrosinol (from tyrosine).
Key members: ethanolamine, dimethylethanolamine, N-methylethanolamine, aminomethyl propanol. Two popular drugs, often called alkanolamine beta blockers, are members of this structural class: propranolol and pindolol. Isoetarine is yet another medicinally useful derivative of ethanolamine.
1,3-, 1,4-, and 1,5-amino alcohols
Heptaminol, a cardiac stimulant
Propanolamines
Natural products
Most proteins and peptides contain both alcohols and amino groups. Two amino acids are alkanolamines, formally speaking: serine and hydroxyproline.
Veratridine and veratrine
Tropane alkaloids such as atropine
The hormones and neurotransmitters epinephrine (adrenaline) and norepinephrine (noradrenaline)
References
External links
Alcohols
Amines | Alkanolamine | [
"Chemistry"
] | 389 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
16,725,900 | https://en.wikipedia.org/wiki/MEGAN | MEGAN ("MEtaGenome ANalyzer") is a computer program that allows optimized analysis of large metagenomic datasets.
Metagenomics is the analysis of the genomic sequences from a usually uncultured environmental sample. A long-term goal of most metagenomics is to inventory and measure the extent and the role of microbial biodiversity in the ecosystem, motivated by discoveries that the diversity of microbial organisms and viral agents in the environment is far greater than previously estimated. Tools that allow the investigation of very large data sets from environmental samples using shotgun sequencing techniques, such as MEGAN, are designed to sample and investigate the unknown biodiversity of environmental samples where more precise techniques with smaller, better-known samples cannot be used.
Fragments of DNA from a metagenomic sample, such as ocean water or soil, are compared against databases of known DNA sequences using BLAST or another sequence comparison tool to assemble the segments into discrete comparable sequences. MEGAN is then used to compare the resulting sequences with gene sequences from GenBank at NCBI. The program was used to investigate the DNA of a woolly mammoth recovered from the Siberian permafrost and the Sargasso Sea data set.
Introduction
Metagenomics is the study of the genomic content of samples from the same habitat, designed to determine the role and the extent of species diversity. Targeted or random sequencing is widely used, with comparisons against sequence databases. Recent developments in sequencing technology have increased the number of metagenomics samples. MEGAN is an easy-to-use tool for analysing such metagenomics data. The first version of MEGAN was released in 2007 and the most recent version is MEGAN6. The first version could analyse the taxonomic content of a single dataset, while the latest version can analyse multiple datasets and adds new features (querying different databases, a new algorithm, etc.).
MEGAN Pipeline
MEGAN analysis starts with collecting reads from any shotgun platform. Then, the reads are compared with sequence databases using BLAST or a similar tool. Third, MEGAN assigns a taxon ID to each processed read based on the NCBI taxonomy, creating a MEGAN file that contains the information required for statistical and graphical analysis. Lastly, the lowest common ancestor (LCA) algorithm can be run to inspect assignments, to analyze the data, and to create summaries of the data at different NCBI taxonomy levels. The LCA algorithm simply finds the lowest common ancestor of different species.
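A sketch of the LCA assignment step on a toy taxonomy stored as child-to-parent links; the tree and the read's hits are invented:
def lineage(parents, node):
    # node followed by its ancestors up to the root
    chain = [node]
    while parents.get(node) is not None:
        node = parents[node]
        chain.append(node)
    return chain
def lca(parents, taxa):
    # lowest taxon that is an ancestor of (or equal to) every hit
    first = lineage(parents, taxa[0])
    others = [set(lineage(parents, t)) for t in taxa[1:]]
    for anc in first:
        if all(anc in s for s in others):
            return anc
    return None
parents = {"E. coli": "Escherichia", "Escherichia": "Enterobacteriaceae",
           "Salmonella": "Enterobacteriaceae", "Enterobacteriaceae": "Bacteria",
           "Bacteria": None}
print(lca(parents, ["E. coli", "Salmonella"]))  # Enterobacteriaceae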
References
External links
Metagenomics software
Phylogenetics software
Molecular biology | MEGAN | [
"Chemistry",
"Biology"
] | 491 | [
"Biochemistry",
"Molecular biology"
] |
16,728,325 | https://en.wikipedia.org/wiki/Rhizofiltration | Rhizofiltration is a form of phytoremediation that involves filtering contaminated groundwater, surface water and wastewater through a mass of roots to remove toxic substances or excess nutrients.
Overview
Rhizofiltration is a type of phytoremediation, which refers to the approach of using hydroponically cultivated plant roots to remediate contaminated water through absorption, concentration, and precipitation of pollutants. The root mass also acts as a physical filter for water and suspended sediment.
The contaminated water is either collected from a waste site and brought to the plants, or the plants are planted in the contaminated area, where the roots then take up the water and the contaminants dissolved in it. Many plant species naturally uptake heavy metals and excess nutrients for a variety of reasons: sequestration, drought resistance, disposal by leaf abscission, interference with other plants, and defense against pathogens and herbivores. Some of these species are better than others and can accumulate extraordinary amounts of these contaminants. Identification of such plant species has led environmental researchers to realize the potential for using these plants for remediation of contaminated soil and wastewater.
Process
This process is very similar to phytoextraction in that it removes contaminants by trapping them into harvestable plant biomass. Both phytoextraction and rhizofiltration follow the same basic path to remediation. First, plants that have stable root systems are put in contact with the contamination to get acclimated to the toxins. They absorb contaminants through their root systems and store them in root biomass and/or transport them up into the stems and/or leaves. The plants continue to absorb contaminants until they are harvested. The plants are then replaced to continue the growth/harvest cycle until satisfactory levels of contaminant are achieved. Both processes are also aimed more toward concentrating and precipitating heavy metals than organic contaminants. The major difference between rhizofiltration and phytoextraction is that rhizofiltration is used for treatment in aquatic environments, while phytoextraction deals with soil remediation.
Applications
Rhizofiltration may be applicable to the treatment of surface water and groundwater, industrial and residential effluents, downwashes from power lines, storm waters, acid mine drainage, agricultural runoffs, diluted sludges, and radionuclide-contaminated solutions.
Plants suitable for rhizofiltration applications can efficiently remove toxic metals from a solution using rapid-growth root systems. Various terrestrial plant species have been found to effectively remove toxic metals such as Cu2+, Cd2+, Cr6+, Ni2+, Pb2+, and Zn2+ from aqueous solutions. It was also found that low level radioactive contaminants can successfully be removed from liquid streams. A system to achieve this can consist of a “feeder layer” of soil suspended above a contaminated stream through which plants grow, extending the bulk of their roots into the water. The feeder layer allows the plants to receive fertilizer without contaminating the stream, while simultaneously removing heavy metals from the water.
Trees have also been applied to remediation. Trees are the lowest-cost plant type. They can grow on land of marginal quality and have long life-spans. This results in little or no maintenance costs. The most commonly used are willows and poplars, which can grow 6–8 feet per year and have a high flood tolerance. For deep contamination, hybrid poplars with roots extending 30 feet deep have been used. Their roots penetrate microscopic-scale pores in the soil matrix and can cycle 100 L of water per day per tree. These trees act almost like a pump-and-treat remediation system. Willows have been successfully used as "vegetation filters" for nutrient (e.g. nitrogen and phosphorus) removal from municipal wastewater and polluted groundwater.
Common Plants
There are a series of aquatic and land plants that are used for rhizofiltration with varying degrees of success among them. While many of these plants are hyperaccumulators, other plant species can be used as the contaminants do not always reach the shoots (stems and their appendages: leaves, lateral buds, flowering stems and flower buds).
Some of the most common plant species that have shown the ability to remove toxins from water via rhizofiltration are:
Sunflower
Indian Mustard
Tobacco
Rye
Spinach
Corn
Parrot's Feather
Iris-leaved Rush
Cattail
Saltmarsh bulrush
Scirpus robustus
Cost
Rhizofiltration is cost-effective for large volumes of water having low concentrations of contaminants that are subjected to stringent standards. It is relatively inexpensive, yet potentially more effective than comparable technologies. The removal of radionuclides from water using sunflowers was estimated to cost between $2 and $6 per thousand gallons of water treated, including waste disposal and capital costs.
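As a rough back-of-the-envelope check on that figure, the quoted range scales linearly with treated volume; the site volume below is invented for illustration, and the $2–$6 range is taken from the estimate above:

```python
def treatment_cost(gallons, cost_per_kgal=(2.0, 6.0)):
    """Estimated rhizofiltration cost range in USD, at $2-$6 per 1,000 gal."""
    lo, hi = cost_per_kgal
    return gallons / 1000 * lo, gallons / 1000 * hi

low, high = treatment_cost(5_000_000)   # a hypothetical 5-million-gallon site
print(f"${low:,.0f} - ${high:,.0f}")    # $10,000 - $30,000
```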
Advantages
Rhizofiltration is a contamination treatment method that may be conducted in situ, with plants being grown directly in the contaminated water body or ex situ, where plants are grown off-site and later introduced to the contaminated water body. This allows for a relatively inexpensive procedure with low capital and operational costs, depending on the type of contaminant.
In some cases, contaminants have been shown to be significantly decreased in a very short amount of time. One study found that sunflower roots reduced levels of uranium by nearly 95% in just 24 hours.
This treatment method is also aesthetically pleasing and results in a decrease of water infiltration and leaching of contaminants.
After harvesting, the crop may be converted to biofuel briquette, a substitute for fossil fuel.
Disadvantages
This contamination treatment method has its limits. Any contaminant that is below the rooting depth will not be extracted. The plants used may not be able to grow in highly contaminated areas. Most importantly, it can take years to reach regulatory levels. This results in long-term maintenance.
Also, most contaminated sites are polluted with many different kinds of contaminants. There can be a combination of metals and organics, in which treatment through rhizofiltration will not suffice.
Plants grown on polluted water and soils become a potential threat to human and animal health, and therefore, careful attention must be paid to the harvesting process and only non-fodder crop should be chosen for the rhizofiltration remediation method.
See also
Phytoremediation
Hyperaccumulating Plants
Biodegradation
Bioremediation
References
External links
Phytoremediation and Hyperaccumulator Plants Comprehensive overview.
Using Plants To Clean Up Soils - from Agricultural Research magazine
Modeling rhizofiltration: heavy-metal uptake by plant roots Much data.
Bioremediation
Water treatment | Rhizofiltration | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 1,400 | [
"Water treatment",
"Phytoremediation plants",
"Water pollution",
"Biodegradation",
"Environmental engineering",
"Ecological techniques",
"Bioremediation",
"Water technology",
"Environmental soil science"
] |
16,728,971 | https://en.wikipedia.org/wiki/HD%2015115 | HD 15115 is a single star in the equatorial constellation of Cetus. It is readily visible in binoculars or a small telescope, but is considered too dim to be seen with the naked eye at an apparent visual magnitude of 6.76. The distance to this object is 160 light years based on parallax, and it is slowly drifting further away at the rate of about 1 km/s. It has been proposed as a member of the Beta Pictoris moving group or the Tucana-Horologium association of co-moving stars; there is some ambiguity as to its true membership.
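A quick Python sketch of the parallax-to-distance conversion behind such figures; the parallax value used here is back-computed from the quoted 160 ly for illustration, not the published measurement:

```python
def parallax_to_light_years(parallax_mas):
    """Distance from annual parallax: d[pc] = 1000 / p[mas]; 1 pc ~ 3.2616 ly."""
    return (1000.0 / parallax_mas) * 3.2616

# ~20.4 mas is the parallax implied by a distance of 160 ly (illustrative value)
print(round(parallax_to_light_years(20.4), 1))  # ~159.9 ly
```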
This object has a stellar classification of F4IV, suggesting it is an aging subgiant star that has exhausted the supply of hydrogen at its core. MacGregor and associates (2015) instead classify it as a young F-type main-sequence star with a class of F2V. Age estimates give a value of 500 million years, while membership in the β Pictoris moving group would indicate an age of around . It has 1.19 times the mass of the Sun, 1.39 times the Sun's radius, and has a relatively high rate of spin with a projected rotational velocity of 90 km/s. The star is radiating 3.74 times the luminosity of the Sun from its photosphere at an effective temperature of 6,811 K. Its metallicity – the abundance of elements other than hydrogen and helium – is nearly the same as in the Sun.
HD 15115 was shown to have an asymmetric debris disk surrounding it, which is being viewed nearly edge-on. The reason for the asymmetry is thought to be either the gravitational pull of a passing star (HIP 12545), an exoplanet, or interaction with the local interstellar medium. A magnitude 11.35 visual companion lies at an angular separation of along a position angle of 195°, as of 2015.
References
F-type subgiants
F-type main-sequence stars
Beta Pictoris moving group
Circumstellar disks
Cetus
Durchmusterung objects
015115
011360 | HD 15115 | [
"Astronomy"
] | 435 | [
"Cetus",
"Constellations"
] |
16,729,798 | https://en.wikipedia.org/wiki/Project%20112 | Project 112 was a biological and chemical weapon experimentation project conducted by the United States Department of Defense from 1962 to 1973.
The project started under John F. Kennedy's administration, and was authorized by his Secretary of Defense Robert McNamara, as part of a total review of the US military. The name "Project 112" refers to this project's number in the 150 project review process authorized by McNamara. Funding and staff were contributed by every branch of the U.S. armed services and intelligence agencies—a euphemism for the Office of Technical Services of the Central Intelligence Agency's Directorate of Science & Technology. Canada and the United Kingdom also participated in some Project 112 activities.
Project 112 primarily concerned the use of aerosols to disseminate biological and chemical agents that could produce "controlled temporary incapacitation" (CTI). The test program would be conducted on a large scale at "extracontinental test sites" in the Central and South Pacific and Alaska in conjunction with Britain, Canada and Australia.
At least 50 trials were conducted; of these at least 18 tests involved simulants of biological agents (such as BG), and at least 14 involved chemical agents including sarin and VX, but also tear gas and other simulants. Test sites included Porton Down (UK), Ralston (Canada) and at least 13 US warships; the shipborne trials were collectively known as Shipboard Hazard and Defense—SHAD. The project was coordinated from Deseret Test Center, Utah.
Publicly available information on Project 112 remains incomplete.
Top-level directives
In January 1961, Defense Secretary Robert McNamara sent a directive about chemical and biological weapons to the Joint Chiefs of Staff, urging them to: "consider all possible applications, including use as an alternative to nuclear weapons. Prepare a plan for the development of an adequate biological and chemical deterrent capability, to include cost estimates, and appraisal of domestic and international political consequences." The Joint Chiefs established a Joint Task Force that recommended a five-year plan to be conducted in three phases.
On April 17, 1963, President Kennedy signed National Security Action Memorandum 235 (NSAM 235) which approved:
Project 112 was a highly classified military testing program aimed at both offensive and defensive testing of human, animal, and plant reactions to biological and chemical warfare in various combinations of climate and terrain. The U.S. Army Chemical Corps sponsored the United States portion of an agreement between the US, Britain, Canada, and Australia to negotiate, host, conduct, or participate in mutual-interest research and development activity and field testing.
Command
The command structure for the Deseret Test Center, which was organized to oversee Project 112, somewhat bypassed standard Defense Department channels and reported directly to the Joint Chiefs of Staff and US Cabinet consisting of Secretary of Defense, Secretary of State, and to a much smaller extent, the Secretary of Agriculture. Experiments were planned and conducted by the Deseret Test Center and Deseret Chemical Depot at Fort Douglas, Utah. The tests were designed to test the effects of biological weapons and chemical weapons on personnel, plants, animals, insects, toxins, vehicles, ships and equipment. Project 112 and Project SHAD experiments involved unknowing test subjects who did not give informed consent, and took place on land and at sea in various climates and terrains. Experiments involved humans, plants, animals, insects, aircraft, ships, submarines and amphibious vehicles.
Biological weapons tests
There was a large variety of goals for the proposed tests, for example: "selected protective devices in preventing penetration of a naval ship by a biological aerosol," the impact of "meteorological conditions on weapon system performance over the open sea," the penetrability of jungle vegetation by biological agents, "the penetration of an arctic inversion by a biological aerosol cloud," "the feasibility of an offshore release of Aedes aegypti mosquito as a vector for infectious diseases," "the feasibility of a biological attack against an island complex," and the study of the decay rates of biowarfare agents under various conditions.
Project 112 tests used the following agents and simulants: Francisella tularensis, Serratia marcescens, Escherichia coli, Bacillus globii, staphylococcal enterotoxin Type B, Puccinia graminis var. tritici (stem rust of wheat). Agents and simulants were usually dispensed as aerosols using spraying devices or bomblets.
In May 1965, vulnerability tests in the U.S. using the anthrax simulant Bacillus globigii were performed in the Washington, D.C. area by SOD covert agents. One test was conducted at the Greyhound bus terminal and the other at the north terminal of the National Airport. In these tests the bacteria were released from spray generators hidden in specially built briefcases. SOD also conducted a series of tests in the New York City Subway system between 7 and 10 June 1966 by dropping light bulbs filled with Bacillus subtilis var. niger. In the latter tests, results indicated that a city-level epidemic would have occurred. Local police and transit authorities were not informed of these tests.
SHAD – Shipboard Hazard and Defense
Project SHAD, an acronym for Shipboard Hazard and Defense (or sometimes Decontamination), was part of the larger program called Project 112, which was conducted during the 1960s. Project SHAD encompassed tests designed to identify U.S. warships' vulnerabilities to attacks with chemical or biological warfare agents and to develop procedures to respond to such attacks while maintaining a war-fighting capability. The Department of Defense (DoD) states that Project 112 was initiated out of concern for the ability of the United States to protect and defend against potential CB threats. Project 112 consisted of both land-based and sea-based tests. The sea-based tests, called Project SHAD, were primarily launched from other ships such as the USS Granville S. Hall (YAG-40) and USS George Eastman (YAG-39), Army tugboats, submarines, or fighter aircraft, and were designed to identify U.S. warships' vulnerabilities to attacks with chemical or biological warfare agents and to develop decontamination and other methods to counter such attacks while maintaining a war-fighting capability. The classified information related to SHAD was not completely cataloged or located in one facility. Furthermore, the Deseret Test Center was closed in the 1970s, and the search for 40-year-old documents and records kept by different military services in different locations was a challenge for researchers. A fact sheet was developed for each test that was conducted, and when a test cancellation was not documented, a cancellation analysis was developed outlining the logic used to presume that the test had been cancelled.
Declassification
The existence of Project 112 (along with the related Project SHAD) was categorically denied by the military until May 2000, when a CBS Evening News investigative report produced dramatic revelations about the tests. This report caused the Department of Defense and the Department of Veterans Affairs to launch an extensive investigation of the experiments, and reveal to the affected personnel their exposure to toxins.
Revelations concerning Project SHAD were first made public by independent producer and investigative journalist Eric Longabardi. Longabardi's six-year investigation into the still-secret program began in early 1994. It ultimately resulted in a series of investigative reports produced by him, which were broadcast on the CBS Evening News in May 2000. After the broadcast of these exclusive reports, the Pentagon and Veteran's Administration opened their own ongoing investigations into the long-classified program. In 2002, Congressional hearings on Project SHAD, in both the Senate and House, further shed media attention on the program. In 2002, a class action federal lawsuit was filed on behalf of the US sailors exposed in the testing. Additional actions, including a multi-year medical study, were conducted by the National Academy of Sciences/Institute of Medicine to assess the potential medical harm caused to the thousands of unwitting US Navy sailors, civilians, and others who were exposed in the secret testing. The results of that study were finally released in May 2007.
Most of the participants involved with Project 112 and SHAD were unaware that tests were being conducted, and no effort was made to obtain the informed consent of the military personnel. The US Department of Defense (DoD) conducted testing of agents in other countries that was considered too unethical to perform within the continental United States. Until 1998, the Department of Defense stated officially that Project SHAD did not exist. Because the DoD refused to acknowledge the program, surviving test subjects have been unable to obtain disability payments for health issues related to the project. US Representative Mike Thompson said of the program and the DoD's effort to conceal it, "They told me – they said, but don't worry about it, we only used simulants. And my first thought was, well, you've lied to these guys for 40 years, you've lied to me for a couple of years. It would be a real leap of faith for me to believe that now you're telling me the truth."
The Department of Veterans Affairs commenced a three-year study comparing known SHAD-affected veterans to veterans of similar ages who were not involved in any way with SHAD or Project 112. The study cost approximately US$3 million, and results are being compiled for future release. DoD has committed to providing the VA with the relevant information it needs to settle benefits claims as quickly and efficiently as possible and to evaluate and treat veterans who were involved in those tests. This required analyzing historical documents recording the planning and execution of Project 112/SHAD tests.
The released historical information about Project 112 from DoD consists of summary fact sheets rather than original documents or maintained federal information. As of 2003, 28 fact sheets had been released, focusing on the Deseret Test Center in Dugway, Utah, which was built entirely for Project 112/SHAD and was closed after the project was finished in 1973.
Original records are missing or incomplete. For example, released original Project SHAD documentation shows that an F-4E sprayed "aerosols" on a 91-meter aerosol test tower on Ursula Island in the Philippines, but there is no fact sheet or further explanation or disclosure as to the nature of the test that was conducted, or even what the test was called.
Criticisms after disclosure of CBW testing
Transfer of Japanese technical information (1945–1946)
Author Sheldon H. Harris researched the history of Japanese Biological warfare and the American cover-up extensively. Harris and other scholars found that U.S. intelligence authorities had seized the Japanese researchers' archive after the technical information was provided by Japan. The information was transferred in an arrangement that exchanged keeping the information a secret and not pursuing war crimes charges.
The arrangement with the United States concerning Japanese WMD research provided extensive Japanese technical information in exchange for not pursuing certain charges and also allowed Japan's government to deny knowledge of the use of these weapons by Japan's military in China during World War II. German scientists in Europe also skipped war crimes charges and went to work as U.S. employed intelligence agents and technical experts in an arrangement known as Operation Paperclip.
The U.S. would not cooperate when the Soviet Union attempted to pursue war crimes charges against the Japanese. General Douglas MacArthur denied the U.S. Military had any captured records on Japan's military biological program. "The U.S. denial was absolutely misleading but technically correct as the Japanese records on biological warfare were then in the custody of U.S intelligence agencies rather than in possession of the military". A formerly top secret report by the U.S. War Department at the close of World War II, clearly stipulates that the United States exchanged Japan's military technical information on Biological Warfare experimentation against humans, plants, and animals in exchange for war crimes immunity. The War department notes that, "The voluntary imparting of this BW information may serve as a forerunner for obtaining much additional information in other fields of research." Armed with Nazi and Imperial Japanese biowarfare know-how, the United States government and its intelligence agencies began conducting widespread field testing of potential CBW capabilities on American cities, crops, and livestock.
It is known that Japanese scientists were working at the direction of the Japan's military and intelligence agencies on advanced research projects of the United States including America's covert biomedical and biowarfare programs from the end of World War II through at least the 1960s.
Congressional action and GAO investigation
The U.S. General Accounting Office (GAO) in September 1994 found that between 1940 and 1974, DOD and other national security agencies studied "hundreds, perhaps thousands" of weapons tests and experiments involving large area coverage of hazardous substances. The report states:
Innocent civilians in cities, on subways and at airports were sprayed with disease-carrying mosquitoes and "aerosols" containing bacteria or viruses, or were exposed to a variety of dangerous chemical, biological and radiological agents, as well as simulant agents that were later found to be more dangerous than first thought.
Precise information on the number of tests, experiments, and participants is not available and the exact number of veterans exposed will probably never be known.
On December 2, 2002, President George W. Bush signed Public Law 107–314, the Bob Stump National Defense Authorization Act (NDAA) for Fiscal Year 2003 which included Section 709 entitled Disclosure of Information on Project 112 to Department of Veterans Affairs. Section 709 required disclosure of information concerning Project 112 to United States Department of Veterans Affairs (DVA) and the General Accounting Office (GAO).
Public Law 107–314 required the identification and release of not only Project 112 information to VA but also that of any other projects or tests where a service member might have been exposed to a CBW agent and directed The Secretary of Defense to work with veterans and veterans service organizations to identify the other projects or tests conducted by the Department of Defense that may have exposed members of the Armed Forces to chemical or biological agents.
However, the issues surrounding the test program were not resolved by the passage of the law and "the Pentagon was accused of continuing to withhold documents on Cold War chemical and biological weapons tests that used unsuspecting veterans as "human samplers" after reporting to Congress it had released all medically relevant information."
A 2004 GAO report revealed that of the participants who were identified from Project 112, 94 percent were from ship-based tests of Project SHAD that comprised only about one-third of the total number of tests conducted.
The Department of Defense informed the Veterans Administration that Project 112/SHAD and Mustard Gas programs have been officially closed as of June 2008 while Edgewood Arsenal testing remains open as DoD continues to identify Veterans who were "test participants" in the program. DoD's current effort to identify Cold War exposures began in 2004 and is endeavoring to identify all non-Project 112/SHAD veterans exposed to chemical and biological substances due to testing and accidents from World War II through 1975.
"America has a sad legacy of weapons testing in the Pacific...people were removed from their homes and their islands used as targets." While this statement during congressional testimony during the Department of Defense's inquiry into Project 112 was referring a completely different and separate testing program, there are common concerns about potential adverse health impacts and the timely release of information. Congress was unsatisfied with the DOD's unwillingness to disclose information relating to the scope of America's chemical and biological warfare past and provide the information necessary to assess and deal with the risks to public safety and U.S. service members' health that CBW testing may have posed or continue to pose.
A Government Accounting Office May 2004 report, Chemical and Biological Defense: DOD Needs to Continue to Collect and Provide Information on Tests and Potentially Exposed Personnel states:
Legal developments
On appeal in Vietnam Veterans of America v. Central Intelligence Agency, a panel majority held in July 2015 that Army Regulation 70-25 (AR 70–25) created an independent duty to provide ongoing medical care to veterans who many years ago participated in U.S. chemical and biological testing programs. Prior to the finding that the Army is required to provide medical care long after a veteran last participated in a testing program was a 2012 finding that the Army has an ongoing duty to seek out and provide "notice" to former test participants of any new information that could potentially affect their health. The case was initially brought forward by concerned veterans who participated in the Edgewood Arsenal human experiments.
Controversy over inclusion of Okinawa as Extracontinental Site 2
Corroborating suspicions of Project 112 activities on Okinawa include "An Organizational History of the 267th Chemical Company", which was made available by the U.S. Army Heritage and Education Center to Yellow Medicine County, Minnesota, Veteran's Service Officer Michelle Gatz in 2012. According to the document, the 267th Chemical Company was activated on Okinawa on December 1, 1962, as the 267th Chemical Platoon (SVC) and was billeted at Chibana Depot. During this deployment, "Unit personnel were actively engaged in preparing RED HAT area, site 2 for the receipt and storage of first increment items, [shipment] "YBA", DOD Project 112." The company received further shipments, code named YBB and YBF, which according to declassified documents also included sarin, VX, and mustard gas.
The late author Sheldon H. Harris in his book "Factories of Death: Japanese Biological Warfare, 1932–1945, and the American cover up" wrote about Project 112:
The U.S. government has previously disclosed information on chemical and biological warfare tests it held at sea and on land yet new-found documents show that the U.S. Army tested biological weapons in Okinawa in the early 1960s, when the prefecture was still under U.S. rule. During these tests, conducted at least a dozen times between 1961 and 1962, rice blast fungus was released by the Army using "a midget duster to release inoculum alongside fields in Okinawa and Taiwan," in order to measure effective dosages requirements at different distances and the negative effects on crop production. Rice blast or Pyricularia oryzae produces a mycotoxin called tenuazonic acid which has been implicated in human and animal disease.
Official briefings and reports
A number of studies, reports and briefings have been done on chemical and biological warfare exposures. A list of the major documents is provided below.
Government Accountability Office reports
Government Accountability Office (GAO) Report: GAO-04-410, DOD Needs to Continue to Collect and Provide Information on Tests and on Potentially Exposed Personnel, May 2004
Government Accountability Office (GAO) Report: GAO-08-366, DOD and VA Need to Improve Efforts to Identify and Notify Individuals Potentially Exposed during Chemical and Biological Tests, February 2008
Secrecy Policies
Release from Secrecy Oaths Under Chem-Bio Research Programs, January 11, 2011
Institute of Medicine reports
Institute of Medicine Study: SHAD II, study in progress, 2012
Institute of Medicine Study: Long-Term Health Effects of Participation in Project SHAD, 2007
Supplement to Institute of Medicine Study: Long-Term Health Effects of Participation in Project SHAD, "Health Effects of Perceived Exposure to Biochemical Warfare Agents, 2004"
Three-Part National Research Council Series Reports on Possible Long-Term Health Effects of Short-Term Exposure to Chemical Agents (1982–1985)
Congressional testimony
Senate Committee on Veterans' Affairs Hearing: Military Exposures: The Continuing Challenges of Care and Compensation, July 10, 2002
House Committee on Veterans' Affairs, Subcommittee on Health, Hearing: Military Operations Aspects of SHAD and Project 112, October 9, 2002
Senate Armed Services Committee, Subcommittee on Personnel, Prepared Statement of Dr. William Winkenwerder Jr., Assistant Secretary of Defense for Health Affairs, on Shipboard Hazard and Defense, October 10, 2002
DoD briefings
Military Service Organizations/Veterans Service Organizations Briefing: Chemical/Biological Exposure Databases, September 17, 2009
Military Service Organizations/Veterans Service Organizations Briefing: Chemical/Biological Exposure Databases, February 21, 2008
Extracts from 2003 Report to Congress Disclosure of Information on Project 112 to the Department of Veterans Affairs as Directed by PL 107-314, August 1, 2003 – Executive Summary and Disclosure of Information
News releases
The following is a list of Department of Defense-issued press releases for Project 112 and Project SHAD:
June 30, 2003 – SHAD – Project 112 – Deseret Test Center Investigation Draws To A Close The Department of Defense completed today its nearly three-year investigation of operational tests conducted in the 1960s.
December 31, 2002 – DoD corrects data on SHAD test "High Low" Since the Department of Defense began investigating the operational shipboard hazard and defense tests in September 2000, it has released fact sheets on 42 of the 46 shipboard and land-based tests.
October 31, 2002 – DoD Releases Five Project 112 SHAD Fact Sheets The Department of Defense today released five new detailed fact sheets on Cold War-era chemical and biological warfare tests conducted in support of Project 112.
October 9, 2002 – DoD Releases Deseret Test Center/Project 112/Project SHAD Fact Sheets The Department of Defense today released another 28 detailed fact sheets on 27 Cold War-era chemical and biological warfare tests identified as Project 112.
July 9, 2002 – DoD expands SHAD investigationThe Department of Defense announced today an expansion of the Shipboard Hazard and Defense investigation. A team of investigators will travel to Dugway Proving Ground in mid-August to review Deseret Test Center records.
May 23, 2002 – DoD releases Project SHAD fact sheets The Department of Defense today released detailed fact sheets on six Cold War-era chemical and biological warfare tests.
May 23, 2002 – DoD releases six new Project SHAD fact sheets The Department of Defense released detailed fact sheets on six Cold War-era chemical and biological warfare tests.
January 4, 2002 – DoD Releases Information on 1960 tests In the 1960s, the Department of Defense conducted a series of chemical and biological warfare vulnerability tests on naval ships known collectively as Project Shipboard Hazard and Defense.
January 4, 2002 – No Small Feat The ongoing investigation into the Project Shipboard Hazard and Defense, or SHAD, tests is a detective story worthy of Sherlock Holmes.
See also
CFB Suffield and Suffield Experimental Station
Dorset Biological Warfare Experiments
Edgewood Arsenal human experiments
Human experimentation in the United States
Operation LAC (Large Area Coverage)
Operation Whitecoat
Porton Down
United States biological weapons program
References
External links
Project SHAD at the United States Department of Veterans Affairs, includes pocket guides and Q&A
Force Protection and Readiness information page for SHAD (Project 112)
GAO
Environmental controversies
Environmental impact of war
United States biological weapons program
Chemical warfare
Defoliants
Herbicides
Japan–United States relations
Johnston Atoll
Non-combat military operations involving the United States
Bioethics
112
Human subject research in the United States | Project 112 | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 4,681 | [
"Bioethics",
"Herbicides",
"Defoliants",
"Military projects",
"Chemical weapons",
"Military projects of the United States",
"nan",
"Ethics of science and technology",
"Biocides"
] |
666,107 | https://en.wikipedia.org/wiki/Quantum%20operation | In quantum mechanics, a quantum operation (also known as quantum dynamical map or quantum process) is a mathematical formalism used to describe a broad class of transformations that a quantum mechanical system can undergo. This was first discussed as a general stochastic transformation for a density matrix by George Sudarshan. The quantum operation formalism describes not only unitary time evolution or symmetry transformations of isolated systems, but also the effects of measurement and transient interactions with an environment. In the context of quantum computation, a quantum operation is called a quantum channel.
Note that some authors use the term "quantum operation" to refer specifically to completely positive (CP) and non-trace-increasing maps on the space of density matrices, and the term "quantum channel" to refer to the subset of those that are strictly trace-preserving.
Quantum operations are formulated in terms of the density operator description of a quantum mechanical system. Rigorously, a quantum operation is a linear, completely positive map from the set of density operators into itself. In the context of quantum information, one often imposes the further restriction that a quantum operation Φ must be physical, that is, satisfy 0 ≤ Tr(Φ(ρ)) ≤ 1 for any state ρ.
Some quantum processes cannot be captured within the quantum operation formalism; in principle, the density matrix of a quantum system can undergo completely arbitrary time evolution. Quantum operations are generalized by quantum instruments, which capture the classical information obtained during measurements, in addition to the quantum information.
Background
The Schrödinger picture provides a satisfactory account of time evolution of state for a quantum mechanical system under certain assumptions. These assumptions include
The system is non-relativistic
The system is isolated.
The Schrödinger picture for time evolution has several mathematically equivalent formulations. One such formulation expresses the time rate of change of the state via the Schrödinger equation. A more suitable formulation for this exposition is the following: the effect of the passage of t units of time on the state of an isolated system is given by a unitary operator Ut on the Hilbert space H.
This means that if the system is in a state corresponding to v ∈ H at an instant of time s, then the state after t units of time will be Ut v. For relativistic systems, there is no universal time parameter, but we can still formulate the effect of certain reversible transformations on the quantum mechanical system. For instance, state transformations relating observers in different frames of reference are given by unitary transformations. In any case, these state transformations carry pure states into pure states; this is often formulated by saying that in this idealized framework, there is no decoherence.
For interacting (or open) systems, such as those undergoing measurement, the situation is entirely different. To begin with, the state changes experienced by such systems cannot be accounted for exclusively by a transformation on the set of pure states (that is, those associated to vectors of norm 1 in H). After such an interaction, a system in a pure state φ may no longer be in the pure state φ. In general it will be in a statistical mix of a sequence of pure states φ1, ..., φk with respective probabilities λ1, ..., λk. The transition from a pure state to a mixed state is known as decoherence.
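In the density-operator language used below, such a statistical mix is ρ = Σi λi |φi⟩⟨φi|. A minimal NumPy sketch with made-up states and weights shows the signature of decoherence: the mixture keeps unit trace but its purity Tr(ρ²) drops below 1:

```python
import numpy as np

# Two hypothetical pure states of a qubit and their classical weights
phi1 = np.array([1, 0], dtype=complex)                # |0>
phi2 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+>
weights = [0.5, 0.5]

# Statistical mixture: rho = sum_i lambda_i |phi_i><phi_i|
rho = sum(w * np.outer(p, p.conj()) for w, p in zip(weights, [phi1, phi2]))

print(np.trace(rho).real)        # 1.0  (unit trace)
print(np.trace(rho @ rho).real)  # 0.75 (< 1: a mixed state)
```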
Numerous mathematical formalisms have been established to handle the case of an interacting system. The quantum operation formalism emerged around 1983 from the work of Karl Kraus, who relied on the earlier mathematical work of Man-Duen Choi. It has the advantage that it expresses operations such as measurement as a mapping from density states to density states. In particular, the effect of quantum operations stays within the set of density states.
Definition
Recall that a density operator is a non-negative operator on a Hilbert space with unit trace.
Mathematically, a quantum operation is a linear map Φ between spaces of trace class operators on Hilbert spaces H and G such that
If S is a density operator, Tr(Φ(S)) ≤ 1.
Φ is completely positive, that is, for any natural number n and any non-negative square matrix of size n whose entries are trace-class operators, [Sij], the matrix [Φ(Sij)] is also non-negative. In other words, Φ is completely positive if Φ ⊗ In is positive for all n, where In denotes the identity map on the C*-algebra of n × n matrices.
Note that, by the first condition, quantum operations may not preserve the normalization property of statistical ensembles. In probabilistic terms, quantum operations may be sub-Markovian. In order that a quantum operation preserve the set of density matrices, we need the additional assumption that it is trace-preserving.
In the context of quantum information, the quantum operations defined here, i.e. completely positive maps that do not increase the trace, are also called quantum channels or stochastic maps. The formulation here is confined to channels between quantum states; however, it can be extended to include classical states as well, therefore allowing quantum and classical information to be handled simultaneously.
Kraus operators
Kraus' theorem (named after Karl Kraus) characterizes completely positive maps, which model quantum operations between quantum states. Informally, the theorem ensures that the action of any such quantum operation Φ on a state ρ can always be written as Φ(ρ) = Σk Bk ρ Bk*, for some set of operators {Bk} satisfying Σk Bk* Bk ≤ 1, where 1 is the identity operator.
Statement of the theorem
Theorem. Let H and G be Hilbert spaces of dimension n and m respectively, and let Φ be a quantum operation between H and G. Then, there are matrices {Bi} mapping H to G such that, for any state ρ,
Φ(ρ) = Σi Bi ρ Bi*.
Conversely, any map of this form is a quantum operation provided Σi Bi* Bi ≤ 1.
The matrices {Bi} are called Kraus operators. (Sometimes they are known as noise operators or error operators, especially in the context of quantum information processing, where the quantum operation represents the noisy, error-producing effects of the environment.) The Stinespring factorization theorem extends the above result to arbitrary separable Hilbert spaces H and G. There, S is replaced by a trace-class operator and the Bi by a sequence of bounded operators.
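A minimal NumPy sketch of the theorem in action, using the standard amplitude-damping channel as the example (the damping probability γ is an arbitrary choice):

```python
import numpy as np

gamma = 0.3  # arbitrary damping probability
# Kraus operators of the amplitude-damping channel
B = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
     np.array([[0, np.sqrt(gamma)], [0, 0]])]

def apply_channel(kraus, rho):
    """Phi(rho) = sum_i B_i rho B_i^dagger."""
    return sum(b @ rho @ b.conj().T for b in kraus)

# Completeness: sum_i B_i^dagger B_i = identity (trace-preserving case)
assert np.allclose(sum(b.conj().T @ b for b in B), np.eye(2))

rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # the pure state |+><+|
out = apply_channel(B, rho)
print(np.trace(out).real)                  # 1.0 -- trace is preserved
```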
Unitary equivalence
Kraus matrices are not uniquely determined by the quantum operation in general. For example, different Cholesky factorizations of the Choi matrix might give different sets of Kraus operators. The following theorem states that all systems of Kraus matrices representing the same quantum operation are related by a unitary transformation:
Theorem. Let Φ be a (not necessarily trace-preserving) quantum operation on a finite-dimensional Hilbert space H with two representing sequences of Kraus matrices {Bi} and {Ci}. Then there is a unitary operator matrix (uij) such that
Ci = Σj uij Bj.
In the infinite-dimensional case, this generalizes to a relationship between two minimal Stinespring representations.
It is a consequence of Stinespring's theorem that all quantum operations can be implemented by unitary evolution after coupling a suitable ancilla to the original system.
Remarks
These results can also be derived from Choi's theorem on completely positive maps, characterizing a completely positive finite-dimensional map by a unique Hermitian-positive density operator (Choi matrix) with respect to the trace. Among all possible Kraus representations of a given channel, there exists a canonical form distinguished by the orthogonality relation of the Kraus operators, Tr(Ai* Aj) ∝ δij. Such a canonical set of orthogonal Kraus operators can be obtained by diagonalising the corresponding Choi matrix and reshaping its eigenvectors into square matrices.
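That recipe can be sketched directly in NumPy: build the Choi matrix of a channel, diagonalise it, and reshape the rescaled eigenvectors into Kraus operators. The bit-flip channel below is just a convenient test input, and the reshape/transpose convention matches the Choi matrix J = Σij |i⟩⟨j| ⊗ Φ(|i⟩⟨j|) used here:

```python
import numpy as np

d = 2
E = lambda i, j: np.eye(d)[:, [i]] @ np.eye(d)[[j], :]  # matrix unit |i><j|

# A test channel given by known Kraus operators: bit flip with p = 0.25
p = 0.25
K_true = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * np.array([[0, 1], [1, 0]])]
phi = lambda rho: sum(k @ rho @ k.conj().T for k in K_true)

# Choi matrix J = sum_ij |i><j| (x) Phi(|i><j|)
J = sum(np.kron(E(i, j), phi(E(i, j))) for i in range(d) for j in range(d))

# Canonical Kraus operators: eigenvectors of J, reshaped and rescaled
vals, vecs = np.linalg.eigh(J)
K_canon = [np.sqrt(lam) * vecs[:, k].reshape(d, d).T
           for k, lam in enumerate(vals) if lam > 1e-12]

# The reconstructed operators reproduce the channel on a test state
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
assert np.allclose(phi(rho), sum(k @ rho @ k.conj().T for k in K_canon))
```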
There also exists an infinite-dimensional algebraic generalization of Choi's theorem, known as "Belavkin's Radon-Nikodym theorem for completely positive maps", which defines a density operator as a "Radon–Nikodym derivative" of a quantum channel with respect to a dominating completely positive map (reference channel). It is used for defining the relative fidelities and mutual informations for quantum channels.
Dynamics
For a non-relativistic quantum mechanical system, its time evolution is described by a one-parameter group of automorphisms {αt}t of Q. This can be narrowed to unitary transformations: under certain weak technical conditions (see the article on quantum logic and the Varadarajan reference), there is a strongly continuous one-parameter group {Ut}t of unitary transformations of the underlying Hilbert space such that the elements E of Q evolve according to the formula
αt(E) = U*t E Ut.
The system time evolution can also be regarded dually as time evolution of the statistical state space. The evolution of the statistical state is given by a family of operators {βt}t such that
βt(S) = U*t S Ut.
Clearly, for each value of t, S → U*t S Ut is a quantum operation. Moreover, this operation is reversible.
This can be easily generalized: If G is a connected Lie group of symmetries of Q satisfying the same weak continuity conditions, then the action of any element g of G is given by a unitary operator Ug:
E ↦ U*g E Ug.
This mapping g → Ug is known as a projective representation of G. The mappings S → U*g S Ug are reversible quantum operations.
Quantum measurement
Quantum operations can be used to describe the process of quantum measurement. The presentation below describes measurement in terms of self-adjoint projections on a separable complex Hilbert space H, that is, in terms of a PVM (Projection-valued measure). In the general case, measurements can be made using non-orthogonal operators, via the notions of POVM. The non-orthogonal case is interesting, as it can improve the overall efficiency of the quantum instrument.
Binary measurements
Quantum systems may be measured by applying a series of yes–no questions. This set of questions can be understood to be chosen from an orthocomplemented lattice Q of propositions in quantum logic. The lattice is equivalent to the space of self-adjoint projections on a separable complex Hilbert space H.
Consider a system in some state S, with the goal of determining whether it has some property E, where E is an element of the lattice of quantum yes-no questions. Measurement, in this context, means submitting the system to some procedure to determine whether the state satisfies the property. The reference to system state, in this discussion, can be given an operational meaning by considering a statistical ensemble of systems. Each measurement yields some definite value 0 or 1; moreover application of the measurement process to the ensemble results in a predictable change of the statistical state. This transformation of the statistical state is given by the quantum operation
S ↦ E S E + (I − E) S (I − E).
Here E can be understood to be a projection operator.
General case
In the general case, measurements are made on observables taking on more than two values.
When an observable A has a pure point spectrum, it can be written in terms of an orthonormal basis of eigenvectors. That is, A has a spectral decomposition
A = Σλ λ EA(λ),
where EA(λ) is a family of pairwise orthogonal projections, each onto the respective eigenspace of A associated with the measurement value λ.
Measurement of the observable A yields an eigenvalue of A. Repeated measurements, made on a statistical ensemble S of systems, result in a probability distribution over the eigenvalue spectrum of A. It is a discrete probability distribution, given by
Pr(λ) = Tr(S EA(λ)).
Measurement of the statistical state S is given by the map
S ↦ Σλ EA(λ) S EA(λ).
That is, immediately after measurement, the statistical state is a classical distribution over the eigenspaces associated with the possible values λ of the observable: S is a mixed state.
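A minimal sketch of this measurement map for a single qubit (the input state is an arbitrary example):

```python
import numpy as np

# Observable A = (+1)|0><0| + (-1)|1><1|: two orthogonal projections
E_plus  = np.array([[1, 0], [0, 0]])
E_minus = np.array([[0, 0], [0, 1]])

rho = np.array([[0.6, 0.3], [0.3, 0.4]])   # an arbitrary mixed state

# Born probabilities: Pr(lambda) = Tr(S E_A(lambda))
for name, E in [("+1", E_plus), ("-1", E_minus)]:
    print(name, np.trace(rho @ E).real)     # 0.6 and 0.4

# Post-measurement (non-selective) state: S -> sum_lambda E S E
rho_after = E_plus @ rho @ E_plus + E_minus @ rho @ E_minus
print(rho_after)  # off-diagonal coherences are erased: diag(0.6, 0.4)
```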
Non-completely positive maps
Shaji and Sudarshan argued in a Physical Review Letters paper that, upon close examination, complete positivity is not a requirement for a good representation of open quantum evolution. Their calculations show that, when starting with some fixed initial correlations between the observed system and the environment, the map restricted to the system itself is not necessarily even positive. However, it fails to be positive only on those states that do not satisfy the assumption about the form of initial correlations. Thus, they show that to get a full understanding of quantum evolution, non-completely-positive maps should be considered as well.
See also
Quantum dynamical semigroup
Superoperator
References
K. Kraus, States, Effects and Operations: Fundamental Notions of Quantum Theory, Springer Verlag 1983
W. F. Stinespring, Positive Functions on C*-algebras, Proceedings of the American Mathematical Society, 211–216, 1955
V. Varadarajan, The Geometry of Quantum Mechanics vols 1 and 2, Springer-Verlag 1985
Quantum mechanics | Quantum operation | [
"Physics"
] | 2,481 | [
"Theoretical physics",
"Quantum mechanics"
] |
667,451 | https://en.wikipedia.org/wiki/Tonks%E2%80%93Girardeau%20gas | In physics, a Tonks–Girardeau gas is a Bose gas in which the repulsive interactions between bosonic particles confined to one dimension dominate the system's physics. It is named after physicists Lewi Tonks, who developed a classical model in 1936, and Marvin D. Girardeau who generalized it to the quantum regime. It is not a Bose–Einstein condensate as it does not demonstrate any of the necessary characteristics, such as off-diagonal long-range order or a unitary two-body correlation function, even in a thermodynamic limit and as such cannot be described by a macroscopically occupied orbital (order parameter) in the Gross–Pitaevskii formulation.
The Tonks–Girardeau gas is a particular case of the Lieb–Liniger model.
Definition
A row of bosons all confined to a one-dimensional line cannot pass each other and therefore cannot exchange places. The resulting motion has been compared to a traffic jam: the motion of each boson is strongly correlated with that of its two neighbors. This can be thought of as the large-c limit of the delta Bose gas.
Because the particles cannot exchange places, their behavior might be expected to be fermionic, but their behavior differs from that of fermions in several important ways: the particles can all occupy the same momentum state, which corresponds to neither Bose-Einstein nor Fermi–Dirac statistics. This is the phenomenon of bosonization which happens in 1+1 dimensions.
In the case of a Tonks–Girardeau (TG) gas, the properties of this one-dimensional string of bosons are sufficiently fermion-like that the situation is often referred to as the 'fermionization' of bosons. The Tonks–Girardeau gas corresponds to the quantum nonlinear Schrödinger equation in the limit of infinite repulsion, which can be analyzed efficiently by the quantum inverse scattering method. This relation helps in the study of correlation functions, which can be described by an integrable system; in a simple case, this is a Painlevé transcendent. The quantum correlation functions of a Tonks–Girardeau gas can thus be described by means of classical, completely integrable differential equations. The thermodynamics of the Tonks–Girardeau gas was described by Chen-Ning Yang.
Physical realization
The first example of TGs came in 2004 when Paredes and coworkers created an array of such gases using an optical lattice. In a different experiment, Kinoshita and coworkers observed a strongly correlated 1D Tonks–Girardeau gas.
The optical lattice is formed by six intersecting laser beams, which generate an interference pattern. The beams are arranged as standing waves along three orthogonal directions. This results in an array of optical dipole traps where atoms are stored in the intensity maxima of the interference pattern.
The researchers loaded ultracold rubidium atoms into one-dimensional tubes formed by a two-dimensional lattice (the third standing wave is initially off). This lattice is strong so that the atoms have insufficient energy to tunnel between neighboring tubes. The interaction is too low for the transition to the TG regime. For that, the third axis of the lattice is used. It is set to a lower intensity and shorter time than the other two, so that tunneling in this direction is possible. For increasing intensity of the third lattice, atoms in the same lattice well are more and more tightly trapped, which increases the collisional energy. When the collisional energy becomes much bigger than the tunneling energy, the atoms can still tunnel into empty lattice wells, but not into or across occupied ones.
This technique has been used by other researchers to obtain an array of one-dimensional Bose gases in the Tonks-Girardeau regime. However, the fact that an array of gases is observed only allows the measurement of averaged quantities. Moreover, the temperatures and chemical potential between the different tubes are dispersed, which wash out many effects. For instance, this configuration does not allow probing of system fluctuations. Thus it proved interesting to produce a single Tonks–Girardeau gas. In 2011 one team created a single one-dimensional TG gas by trapping rubidium atoms magnetically in the vicinity of a microstructure. Thibaut Jacqmin et al. measured density fluctuations in that single strongly interacting gas. Those fluctuations proved to be sub-Poissonian, as expected for a Fermi gas.
See also
BCS theory
Quantum mechanics
Super Tonks–Girardeau gas
References
External links
Condensed matter physics | Tonks–Girardeau gas | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 937 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
667,564 | https://en.wikipedia.org/wiki/Rapid%20single%20flux%20quantum | In electronics, rapid single flux quantum (RSFQ) is a digital electronic device that uses superconducting devices, namely Josephson junctions, to process digital signals. In RSFQ logic, information is stored in the form of magnetic flux quanta and transferred in the form of Single Flux Quantum (SFQ) voltage pulses. RSFQ is one family of superconducting or SFQ logic. Others include Reciprocal Quantum Logic (RQL), ERSFQ – energy-efficient RSFQ version that does not use bias resistors, etc. Josephson junctions are the active elements for RSFQ electronics, just as transistors are the active elements for semiconductor electronics. RSFQ is a classical digital, not quantum computing, technology.
RSFQ is very different from the CMOS transistor technology used in conventional computers:
Superconducting devices require cryogenic temperatures.
picosecond-duration SFQ voltage pulses produced by Josephson junctions are used to encode, process, and transport digital information instead of the voltage levels produced by transistors in semiconductor electronics.
SFQ voltage pulses travel on superconducting transmission lines which have very small, and usually negligible, dispersion if no spectral component of the pulse is above the frequency of the energy gap of the superconductor.
In the case of SFQ pulses of 1 ps, it is possible to clock the circuits at frequencies of the order of 100 GHz (one pulse every 10 picoseconds).
An SFQ pulse is produced when magnetic flux through a superconducting loop containing a Josephson junction changes by one flux quantum, Φ0, as a result of the junction switching. SFQ pulses have a quantized area ∫V(t)dt = Φ0 = h/2e ≈ 2.07 mV⋅ps = 2.07 mA⋅pH due to magnetic flux quantization, a fundamental property of superconductors. Depending on the parameters of the Josephson junctions, the pulses can be as narrow as 1 ps with an amplitude of about 2 mV, or broader (e.g., 5–10 ps) with correspondingly lower amplitude. The typical value of the pulse amplitude is approximately 2IcRn, where IcRn is the product of the junction critical current, Ic, and the junction damping resistor, Rn. For Nb-based junction technology, IcRn is on the order of 1 mV.
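Because the pulse area is pinned to Φ0, width and amplitude trade off against each other; a quick Python check (the pulse widths are illustrative):

```python
PHI_0 = 2.067833848e-15  # magnetic flux quantum h/2e, in webers (V*s)

def mean_pulse_amplitude(width_ps):
    """Average SFQ pulse voltage in mV, from area = Phi_0 = width x amplitude."""
    return PHI_0 / (width_ps * 1e-12) * 1e3   # V*s / s -> V -> mV

for w in (1, 5, 10):                          # illustrative widths in ps
    print(f"{w} ps -> ~{mean_pulse_amplitude(w):.2f} mV")
# 1 ps -> ~2.07 mV ; 5 ps -> ~0.41 mV ; 10 ps -> ~0.21 mV
```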
Advantages
Interoperable with CMOS circuitry, microwave and infrared technology
Extremely fast operating frequency: from a few tens of gigahertz up to hundreds of gigahertz
Low power consumption: about 100,000 times lower than that of CMOS semiconductor circuits, not accounting for refrigeration
Existing chip manufacturing technology can be adapted to manufacture RSFQ circuitry
Good tolerance to manufacturing variations
RSFQ circuitry is essentially self-clocking, making asynchronous designs much more practical.
Disadvantages
Requires cryogenic cooling. Traditionally this has been achieved using cryogenic liquids such as liquid nitrogen and liquid helium. More recently, closed-cycle cryocoolers, e.g., pulse tube refrigerators have gained considerable popularity as they eliminate cryogenic liquids which are both costly and require periodic refilling. Cryogenic cooling is also an advantage since it reduces the working environment's thermal noise.
The cooling requirements can be relaxed through the use of high-temperature superconductors. However, only very-low-complexity RSFQ circuits have been achieved to date using high-Tc superconductors. It is believed that SFQ-based digital technologies become impractical at temperatures above ~20 K – 25 K because of exponentially increasing bit error rates (thermally induced junction switching) caused by the decrease of the parameter EJ/kBT with increasing temperature T, where EJ = IcΦ0/2π is the Josephson energy.
Static power dissipation, typically 10–100 times larger than the dynamic power required to perform logic operations, was one of the drawbacks. However, static power dissipation was eliminated in the ERSFQ version of RSFQ by using superconducting inductors and Josephson junctions instead of bias resistors, the source of the static power dissipation.
Applications
Optical and other high-speed network switching devices
Digital signal processing, up to X-band signals and beyond
Ultrafast routers
Software-defined radio (SDR)
High speed analog-to-digital converters
High performance cryogenic computers
Control circuitry for superconducting qubits and quantum circuits
See also
Superconducting logic includes newer logic families with better energy efficiency than RSFQ.
Quantum flux parametron, a related digital logic technology.
References
Further reading
Superconducting Technology Assessment, study of RSFQ for computing applications, by the NSA (2005).
External links
An introduction to the basics and links to further information at the State University of New York at Stony Brook.
K.K. Likharev and V.K. Semenov, RSFQ logic/memory family: a new Josephson-junction technology for sub-terahertz-clock-frequency digital systems. IEEE Trans. Appl. Supercond. 1 (1991), 3. doi:10.1109/77.80745
A. H. Worsham, J. X. Przybysz, J. Kang, and D. L. Miller, "A single flux quantum cross-bar switch and demultiplexer," IEEE Trans. on Appl. Supercond., vol. 5, pp. 2996–2999, June 1995.
Feasibility Study of RSFQ-based Self-Routing Nonblocking Digital Switches (1996)
Design Issues in Ultra-Fast Ultra-Low-Power Superconductor Batcher-Banyan Switching Fabric Based on RSFQ Logic/Memory Family (1997)
A Clock Distribution Scheme for Large RSFQ Circuits (1995)
Josephson Junction Digital Circuits – Challenges and Opportunities (Feldman 1998)
Superconductor ICs: the 100-GHz second generation // IEEE Spectrum, 2000
Digital electronics
Quantum electronics
Superconductivity
Josephson effect | Rapid single flux quantum | [
"Physics",
"Materials_science",
"Engineering"
] | 1,278 | [
"Josephson effect",
"Physical quantities",
"Quantum electronics",
"Digital electronics",
"Superconductivity",
"Quantum mechanics",
"Materials science",
"Electronic engineering",
"Condensed matter physics",
"Nanotechnology",
"Electrical resistance and conductance"
] |
668,130 | https://en.wikipedia.org/wiki/Thermochemical%20equation | In thermochemistry, a thermochemical equation is a balanced chemical equation that represents the energy changes from a system to its surroundings. One such equation involves the enthalpy change, which is denoted with ΔH. In variable form, a thermochemical equation would appear similar to the following:
A + B \to C, \quad \Delta H = x
A, B, and C are the usual agents of a chemical equation with coefficients, and x is a positive or negative numerical value, which generally has units of kJ/mol. Another formulation may instead include a heat term (q) directly in the equation; q's position determines whether the reaction is considered endothermic (energy-absorbing, with heat written on the reactant side) or exothermic (energy-releasing, with heat written on the product side).
Understanding aspects of thermochemical equations
Enthalpy (H) is the transfer of energy in a reaction (for chemical reactions, it is in the form of heat) and ΔH is the change in enthalpy. Enthalpy is a state function, meaning that ΔH is independent of processes occurring between initial and final states. In other words, it does not matter which steps are taken to get from initial reactants to final products, as ΔH will always be the same. ΔHrxn, or the change in enthalpy of a reaction, has the same value of ΔH as in a thermochemical equation; it is measured in units of kJ/mol, meaning that it is the enthalpy change per mole of any particular substance in an equation. Values of ΔH are determined experimentally under standard conditions of 1 atm and 25 °C (298.15 K).
As discussed earlier, ΔH can have a positive or negative sign. If ΔH has a positive sign, the system absorbs heat and is endothermic; if ΔH is negative, then heat is produced and the system is exothermic.
Since enthalpy is a state function, the ΔH given for a particular reaction is only true for that exact reaction. Physical states of reactants and products matter, as do molar concentrations.
Since ΔH is dependent on the physical state and molar concentrations in reactions, thermochemical equations must be stoichiometrically correct. If one agent of an equation is changed through multiplication, then all agents must be proportionally changed, including ΔH.
The multiplicative property of thermochemical equations is mainly due to the first law of thermodynamics, which says that energy can neither be created nor destroyed; this concept is commonly known as the conservation of energy. It holds true on a physical or molecular scale.
Manipulating thermochemical equations
Coefficient multiplication
Thermochemical equations can be changed, as mentioned above, by multiplying by any numerical coefficient. All agents must be multiplied, including ΔH. Using the thermochemical equation of variables as above, one gets the following example.
One must assume that the product C needs to be multiplied by two in order for the thermochemical equation to be used. All the agents in the reaction must then be multiplied by the same coefficient, like so:

2A + 2B \to 2C, \quad \Delta H = \pm 2n
This again follows from the first law of thermodynamics: twice as much product is produced, so twice as much heat is removed or given off. Division of the coefficients functions in the same way.
Hess's law: Addition of thermochemical equations
Hess's law states that the sum of the energy changes of all thermochemical equations included in an overall reaction is equal to the overall energy change. Since ΔH is a state function and is not dependent on how reactants become products, steps (in the form of several thermochemical equations) can be used to find the ΔH of the overall reaction. For instance:
Reaction\ 1: \quad C_{graphite}(s)\ + O_2 (g) \to CO_2 (g)
This reaction is the result of two steps (a reaction sequence):
C_{graphite} (s) \ + \frac{1}{2}O_2 (g) \to CO(g)
CO(g)\ + \frac{1}{2}O_2(g) \to CO_2 (g)
Adding these two reactions together results in Reaction 1, which allows ΔH to be found once it is verified that the agents of the summed reaction sequence match those of the overall reaction. The reaction sequences are then added together; in the following example, the intermediate CO, which does not appear in Reaction 1, cancels out because it occurs on both sides.
C_{graphite} (s) \ + \frac{1}{2} O_2 (g) \ + \frac{1}{2} O_2 (g) \to CO_2(g)
and
C_{graphite} (s) \ + O_2 (g) \to CO_2(g), \ Reaction \ 1
To solve for ΔH, the ΔHs of the two equations in the reaction sequence are added together:

\Delta H = \Delta H_1 + \Delta H_2
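For concreteness, a worked version using commonly tabulated standard values (these numbers are not given in this article and are quoted only as an illustration):

\Delta H_1 = -110.5\ \mathrm{kJ/mol}, \quad \Delta H_2 = -283.0\ \mathrm{kJ/mol}, \quad \Delta H = -110.5 + (-283.0) = -393.5\ \mathrm{kJ/mol}

which is the standard enthalpy of formation of CO2(g).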
Another example involving thermochemical equations is that when methane gas is combusted, heat is released, making the reaction exothermic. In the process, 890.4 kJ of heat is released per mole of methane burned, so the heat is written as a product of the reaction.
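Written out as a thermochemical equation (the liquid state of the product water is assumed here, which is the convention under which the 890.4 kJ figure holds):

CH_4(g)\ + 2O_2(g) \to CO_2(g)\ + 2H_2O(l), \quad \Delta H = -890.4\ \mathrm{kJ/mol}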
Other notes
If reactions have to be reversed for their products to be equal, the sign of ΔH must also be reversed.
If an agent has to be multiplied for it to equal another agent, all other agents and ΔH must also be multiplied by the same coefficient.
Generally, ΔH values given in tables are measured under 1 atm and 25 °C (298.15 K), otherwise known as Standard Lab Conditions.
Locations of values of ΔH
Values of ΔH have been experimentally determined and are available in table form. Most general chemistry textbooks have appendixes including common values. There are several online tables available, and software such as the Active Thermochemical Tables (ATcT) provides more information online.
See also
Chemistry
Thermochemistry
Chemical reaction
Enthalpy
References
Atkins, Peter and Loretta Jones. 2005. Chemical Principles, the Quest for Insight (3rd edition). W. H. Freeman and Co., New York, NY.
External links
General chemistry information index: http://chemistry.about.com/library/blazlist4.htm
Further step by step help on Hess's law: http://members.aol.com/profchm/hess.html
Thermochemistry | Thermochemical equation | [
"Chemistry"
] | 1,305 | [
"Thermochemistry"
] |
668,449 | https://en.wikipedia.org/wiki/Material%20derivative | In continuum mechanics, the material derivative describes the time rate of change of some physical quantity (like heat or momentum) of a material element that is subjected to a space-and-time-dependent macroscopic velocity field. The material derivative can serve as a link between Eulerian and Lagrangian descriptions of continuum deformation.
For example, in fluid dynamics, the velocity field is the flow velocity, and the quantity of interest might be the temperature of the fluid. In this case, the material derivative then describes the temperature change of a certain fluid parcel with time, as it flows along its pathline (trajectory).
Other names
There are many other names for the material derivative, including:
advective derivative
convective derivative
derivative following the motion
hydrodynamic derivative
Lagrangian derivative
particle derivative
substantial derivative
substantive derivative
Stokes derivative
total derivative, although the material derivative is actually a special case of the total derivative
Definition
The material derivative is defined for any tensor field y that is macroscopic, in the sense that it depends only on position and time coordinates, y = y(x, t):

\frac{Dy}{Dt} \equiv \frac{\partial y}{\partial t} + \mathbf{u}\cdot\nabla y

where ∇y is the covariant derivative of the tensor, and u(x, t) is the flow velocity. Generally the convective derivative of the field u·∇y, the one that contains the covariant derivative of the field, can be interpreted both as involving the streamline tensor derivative of the field u·(∇y), or as involving the streamline directional derivative of the field (u·∇)y, leading to the same result.
Only this spatial term containing the flow velocity describes the transport of the field in the flow, while the other describes the intrinsic variation of the field, independent of the presence of any flow. Confusingly, sometimes the name "convective derivative" is used for the whole material derivative D/Dt, instead of for only the spatial term u·∇. The effect of the time-independent terms in the definitions is, for the scalar and tensor case respectively, known as advection and convection.
Scalar and vector fields
For example, for a macroscopic scalar field φ(x, t) and a macroscopic vector field A(x, t) the definition becomes:

\frac{D\varphi}{Dt} \equiv \frac{\partial \varphi}{\partial t} + \mathbf{u}\cdot\nabla \varphi

\frac{D\mathbf{A}}{Dt} \equiv \frac{\partial \mathbf{A}}{\partial t} + \mathbf{u}\cdot\nabla \mathbf{A}

In the scalar case ∇φ is simply the gradient of a scalar, while ∇A is the covariant derivative of the macroscopic vector (which can also be thought of as the Jacobian matrix of A as a function of x).
In particular for a scalar field in a three-dimensional Cartesian coordinate system (x1, x2, x3), the components of the velocity u are u1, u2, u3, and the convective term is then:

\mathbf{u}\cdot\nabla\varphi = u_1 \frac{\partial \varphi}{\partial x_1} + u_2 \frac{\partial \varphi}{\partial x_2} + u_3 \frac{\partial \varphi}{\partial x_3}
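As a numerical illustration (not part of the original article; the function names and array shapes are arbitrary choices), the Cartesian convective term can be approximated on a uniform grid with central differences:

import numpy as np

def convective_term(phi, u, dx):
    """Approximate u . grad(phi) for a scalar field phi on a uniform 3-D grid.

    phi: array of shape (nx, ny, nz); u: array of shape (3, nx, ny, nz);
    dx: grid spacing. Uses second-order central differences via np.gradient.
    """
    grad_phi = np.gradient(phi, dx)   # [d(phi)/dx1, d(phi)/dx2, d(phi)/dx3]
    return sum(u[i] * grad_phi[i] for i in range(3))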
Development
Consider a scalar quantity φ = φ(x, t), where t is time and x is position. Here φ may be some physical variable such as temperature or chemical concentration. The physical quantity φ exists in a continuum, whose macroscopic velocity is represented by the vector field u(x, t).
The (total) derivative with respect to time of φ is expanded using the multivariate chain rule:

\frac{\mathrm{d}}{\mathrm{d}t} \varphi(\mathbf{x}(t), t) = \frac{\partial \varphi}{\partial t} + \dot{\mathbf{x}}\cdot\nabla \varphi

It is apparent that this derivative is dependent on the vector

\dot{\mathbf{x}} \equiv \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}

which describes a chosen path x(t) in space. For example, if ẋ = 0 is chosen, the time derivative becomes equal to the partial time derivative, which agrees with the definition of a partial derivative: a derivative taken with respect to some variable (time in this case) holding other variables constant (space in this case). This makes sense because if ẋ = 0, then the derivative is taken at some constant position. This static position derivative is called the Eulerian derivative.
An example of this case is a swimmer standing still and sensing temperature change in a lake early in the morning: the water gradually becomes warmer due to heating from the sun. In which case the term ∂φ/∂t is sufficient to describe the rate of change of temperature.
If the sun is not warming the water (i.e. ∂φ/∂t = 0), but the path is not a standstill, the time derivative of φ may change due to the path. For example, imagine the swimmer is in a motionless pool of water, indoors and unaffected by the sun. One end happens to be at a constant high temperature and the other end at a constant low temperature. By swimming from one end to the other the swimmer senses a change of temperature with respect to time, even though the temperature at any given (static) point is a constant. This is because the derivative is taken at the swimmer's changing location and the second term on the right, ẋ·∇φ, is sufficient to describe the rate of change of temperature. A temperature sensor attached to the swimmer would show temperature varying with time, simply due to the temperature variation from one end of the pool to the other.
The material derivative finally is obtained when the path is chosen to have a velocity equal to the fluid velocity:

\dot{\mathbf{x}} = \mathbf{u}

That is, the path follows the fluid current described by the fluid's velocity field u. So, the material derivative of the scalar φ is:

\frac{D\varphi}{Dt} = \frac{\partial \varphi}{\partial t} + \mathbf{u}\cdot\nabla \varphi
An example of this case is a lightweight, neutrally buoyant particle swept along a flowing river and experiencing temperature changes as it does so. The temperature of the water locally may be increasing due to one portion of the river being sunny and the other in a shadow, or the water as a whole may be heating as the day progresses. The changes due to the particle's motion (itself caused by fluid motion) is called advection (or convection if a vector is being transported).
The definition above relied on the physical nature of a fluid current; however, no laws of physics were invoked (for example, it was assumed that a lightweight particle in a river will follow the velocity of the water), but it turns out that many physical concepts can be described concisely using the material derivative. The general case of advection, however, relies on conservation of mass of the fluid stream; the situation becomes slightly different if advection happens in a non-conservative medium.
Only a path was considered for the scalar above. For a vector, the gradient becomes a tensor derivative; for tensor fields we may want to take into account not only translation of the coordinate system due to the fluid movement but also its rotation and stretching. This is achieved by the upper convected time derivative.
Orthogonal coordinates
It may be shown that, in orthogonal coordinates, the j-th component of the convection term of the material derivative of a vector field A is given by

[\mathbf{u}\cdot\nabla \mathbf{A}]_j = \sum_i \frac{u_i}{h_i} \frac{\partial A_j}{\partial q^i} + \frac{A_i}{h_i h_j}\left(u_j \frac{\partial h_j}{\partial q^i} - u_i \frac{\partial h_i}{\partial q^j}\right)

where the h_i are related to the metric tensors by

h_i = \sqrt{g_{ii}}

In the special case of a three-dimensional Cartesian coordinate system (x, y, z), and A being a 1-tensor (a vector with three components), this is just:

\mathbf{u}\cdot\nabla\mathbf{A} = \mathbf{J}_{\mathbf{A}}\,\mathbf{u} = \begin{pmatrix} u_x \dfrac{\partial A_x}{\partial x} + u_y \dfrac{\partial A_x}{\partial y} + u_z \dfrac{\partial A_x}{\partial z} \\ u_x \dfrac{\partial A_y}{\partial x} + u_y \dfrac{\partial A_y}{\partial y} + u_z \dfrac{\partial A_y}{\partial z} \\ u_x \dfrac{\partial A_z}{\partial x} + u_y \dfrac{\partial A_z}{\partial y} + u_z \dfrac{\partial A_z}{\partial z} \end{pmatrix}

where \mathbf{J}_{\mathbf{A}} = \partial(A_x, A_y, A_z)/\partial(x, y, z) is a Jacobian matrix.
There is also a vector-dot-del identity, \nabla(\mathbf{u}\cdot\mathbf{A}) = (\mathbf{u}\cdot\nabla)\mathbf{A} + (\mathbf{A}\cdot\nabla)\mathbf{u} + \mathbf{u}\times(\nabla\times\mathbf{A}) + \mathbf{A}\times(\nabla\times\mathbf{u}), with which the material derivative for a vector field can be expressed as:

\frac{D\mathbf{A}}{Dt} = \frac{\partial \mathbf{A}}{\partial t} + \nabla(\mathbf{u}\cdot\mathbf{A}) - (\mathbf{A}\cdot\nabla)\mathbf{u} - \mathbf{u}\times(\nabla\times\mathbf{A}) - \mathbf{A}\times(\nabla\times\mathbf{u})
See also
Navier–Stokes equations
Euler equations (fluid dynamics)
Derivative (generalizations)
Lie derivative
Levi-Civita connection
Spatial acceleration
Spatial gradient
References
Further reading
Fluid dynamics
Multivariable calculus
Rates
Generalizations of the derivative | Material derivative | [
"Chemistry",
"Mathematics",
"Engineering"
] | 1,351 | [
"Calculus",
"Chemical engineering",
"Piping",
"Multivariable calculus",
"Fluid dynamics"
] |
669,713 | https://en.wikipedia.org/wiki/Capillary%20wave | A capillary wave is a wave traveling along the phase boundary of a fluid, whose dynamics and phase velocity are dominated by the effects of surface tension.
Capillary waves are common in nature, and are often referred to as ripples. The wavelength of capillary waves on water is typically less than a few centimeters, with a phase speed in excess of 0.2–0.3 meter/second.
A longer wavelength on a fluid interface will result in gravity–capillary waves which are influenced by both the effects of surface tension and gravity, as well as by fluid inertia. Ordinary gravity waves have a still longer wavelength.
When generated by light wind in open water, a nautical name for them is cat's paw waves. Light breezes which stir up such small ripples are also sometimes referred to as cat's paws. On the open ocean, much larger ocean surface waves (seas and swells) may result from coalescence of smaller wind-caused ripple-waves.
Dispersion relation
The dispersion relation describes the relationship between wavelength and frequency in waves. Distinction can be made between pure capillary waves – fully dominated by the effects of surface tension – and gravity–capillary waves which are also affected by gravity.
Capillary waves, proper
The dispersion relation for capillary waves is

\omega^2 = \frac{\sigma}{\rho + \rho'}\, |k|^3

where ω is the angular frequency, σ the surface tension, ρ the density of the heavier fluid, ρ′ the density of the lighter fluid and k the wavenumber. The wavelength, for a given frequency f, is

\lambda = \left( \frac{2\pi\sigma}{(\rho + \rho') f^2} \right)^{1/3}

For the boundary between fluid and vacuum (free surface), the dispersion relation reduces to

\omega^2 = \frac{\sigma}{\rho}\, |k|^3
Gravity–capillary waves
When capillary waves are also affected substantially by gravity, they are called gravity–capillary waves. Their dispersion relation reads, for waves on the interface between two fluids of infinite depth:

\omega^2 = |k| \left( \frac{\rho - \rho'}{\rho + \rho'}\, g + \frac{\sigma}{\rho + \rho'}\, k^2 \right)

where g is the acceleration due to gravity, and ρ and ρ′ are the densities of the two fluids (ρ > ρ′). The factor (ρ − ρ′)/(ρ + ρ′) in the first term is the Atwood number.
Gravity wave regime
For large wavelengths (small k = 2π/λ), only the first term is relevant and one has gravity waves.
In this limit, the waves have a group velocity half the phase velocity: following a single wave's crest in a group one can see the wave appearing at the back of the group, growing and finally disappearing at the front of the group.
Capillary wave regime
Shorter (large k) waves (e.g. 2 mm for the water–air interface), which are proper capillary waves, do the opposite: an individual wave appears at the front of the group, grows when moving towards the group center and finally disappears at the back of the group. Phase velocity is two thirds of group velocity in this limit.
Phase velocity minimum
Between these two limits is a point at which the dispersion caused by gravity cancels out the dispersion due to the capillary effect. At a certain wavelength, the group velocity equals the phase velocity, and there is no dispersion. At precisely this same wavelength, the phase velocity of gravity–capillary waves as a function of wavelength (or wave number) has a minimum. Waves with wavelengths much smaller than this critical wavelength are dominated by surface tension, and much above by gravity. The value of this wavelength and the associated minimum phase speed are:

\lambda_m = 2\pi \sqrt{\frac{\sigma}{(\rho - \rho') g}}, \qquad c_m = \left( \frac{4 g \sigma (\rho - \rho')}{(\rho + \rho')^2} \right)^{1/4}

For the air–water interface, λm is found to be about 1.7 cm, and cm is about 0.23 m/s.
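A quick numerical check of these formulas (the property values below are assumed, typical figures for air and water at about 20 °C):

import math

sigma = 0.074             # surface tension of the air-water interface, N/m
rho, rho_p = 1000.0, 1.2  # densities of water and air, kg/m^3
g = 9.81                  # gravitational acceleration, m/s^2

lambda_m = 2 * math.pi * math.sqrt(sigma / ((rho - rho_p) * g))
c_m = (4 * g * sigma * (rho - rho_p) / (rho + rho_p) ** 2) ** 0.25

print(lambda_m, c_m)      # about 0.017 m and 0.23 m/s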
If one drops a small stone or droplet into liquid, the waves then propagate outside an expanding circle of fluid at rest; this circle is a caustic which corresponds to the minimal group velocity.
Derivation
As Richard Feynman put it, "[water waves] that are easily seen by everyone and which are usually used as an example of waves in elementary courses [...] are the worst possible example [...]; they have all the complications that waves can have." The derivation of the general dispersion relation is therefore quite involved.
There are three contributions to the energy, due to gravity, to surface tension, and to hydrodynamics. The first two are potential energies, and responsible for the two terms inside the parenthesis, as is clear from the appearance of g and σ. For gravity, an assumption is made of the density of the fluids being constant (i.e., incompressibility), and likewise g (waves are not high enough for gravitation to change appreciably). For surface tension, the deviations from planarity (as measured by derivatives of the surface) are supposed to be small. For common waves both approximations are good enough.
The third contribution involves the kinetic energies of the fluids. It is the most complicated and calls for a hydrodynamic framework. Incompressibility is again involved (which is satisfied if the speed of the waves is much less than the speed of sound in the media), together with the flow being irrotational – the flow is then potential. These are typically also good approximations for common situations.
The resulting equation for the potential (which is Laplace's equation) can be solved with the proper boundary conditions. On one hand, the velocity must vanish well below the surface (in the "deep water" case, which is the one we consider; otherwise a more involved result is obtained, see Ocean surface waves). On the other, its vertical component must match the motion of the surface. This contribution ends up being responsible for the extra |k| outside the parenthesis, which causes all regimes to be dispersive, both at low values of k and at high ones (except around the one value at which the two dispersions cancel out).
See also
Capillary action
Dispersion (water waves)
Ocean surface wave
Thermal capillary wave
Two-phase flow
Wave-formed ripple
Gallery
Notes
References
External links
Capillary waves entry at sklogwiki
Fluid dynamics
Water waves
Oceanographical terminology
ar:مويجة | Capillary wave | [
"Physics",
"Chemistry",
"Engineering"
] | 1,205 | [
"Physical phenomena",
"Water waves",
"Chemical engineering",
"Waves",
"Piping",
"Fluid dynamics"
] |
670,279 | https://en.wikipedia.org/wiki/Cycle%20detection | In computer science, cycle detection or cycle finding is the algorithmic problem of finding a cycle in a sequence of iterated function values.
For any function f that maps a finite set S to itself, and any initial value x0 in S, the sequence of iterated function values

x0, x1 = f(x0), x2 = f(x1), ..., xi = f(xi−1), ...

must eventually use the same value twice: there must be some pair of distinct indices i and j such that xi = xj. Once this happens, the sequence must continue periodically, by repeating the same sequence of values from xi to xj−1. Cycle detection is the problem of finding i and j, given f and x0.
Several algorithms are known for finding cycles quickly and with little memory. Robert W. Floyd's tortoise and hare algorithm moves two pointers at different speeds through the sequence of values until they both point to equal values. Alternatively, Brent's algorithm is based on the idea of exponential search. Both Floyd's and Brent's algorithms use only a constant number of memory cells, and take a number of function evaluations that is proportional to the distance from the start of the sequence to the first repetition. Several other algorithms trade off larger amounts of memory for fewer function evaluations.
The applications of cycle detection include testing the quality of pseudorandom number generators and cryptographic hash functions, computational number theory algorithms, detection of infinite loops in computer programs and periodic configurations in cellular automata, automated shape analysis of linked list data structures, and detection of deadlocks for transactions management in DBMS.
Example
The figure shows a function f that maps the set S = {0, 1, 2, 3, 4, 5, 6, 7, 8} to itself. If one starts from x0 = 2 and repeatedly applies f, one sees the sequence of values

2, 0, 6, 3, 1, 6, 3, 1, 6, 3, 1, ...

The cycle in this value sequence is 6, 3, 1.
Definitions
Let S be any finite set, f be any endofunction from S to itself, and x0 be any element of S. For any i > 0, let xi = f(xi−1). Let μ be the smallest index such that the value xμ reappears infinitely often within the sequence of values xi, and let λ (the loop length) be the smallest positive integer such that xμ = xλ+μ. The cycle detection problem is the task of finding λ and μ.
One can view the same problem graph-theoretically, by constructing a functional graph (that is, a directed graph in which each vertex has a single outgoing edge) the vertices of which are the elements of S and the edges of which map an element to the corresponding function value, as shown in the figure. The set of vertices reachable from starting vertex x0 form a subgraph with a shape resembling the Greek letter rho (ρ): a path of length μ from x0 to a cycle of λ vertices.
Practical cycle-detection algorithms do not find λ and μ exactly. They usually find lower and upper bounds μl ≤ μ ≤ μh for the start of the cycle, and a more detailed search of the range must be performed if the exact value of μ is needed. Also, most algorithms do not guarantee to find λ directly, but may find some multiple kλ. (Continuing the search for an additional kλ/q steps, where q is the smallest prime divisor of kλ, will either find the true λ or prove that k = 1.)
Computer representation
Except in toy examples like the above, f will not be specified as a table of values. Such a table implies O(|S|) space complexity, and if that is permissible, an associative array mapping xi to i will detect the first repeated value. Rather, a cycle detection algorithm is given a black box for generating the sequence xi, and the task is to find λ and μ using very little memory.
The black box might consist of an implementation of the recurrence function f, but it might also store additional internal state to make the computation more efficient. Although xi+1 = f(xi) must be true in principle, this might be expensive to compute directly; the function could be defined in terms of the discrete logarithm of xi or some other difficult-to-compute property which can only be practically computed in terms of additional information. In such cases, the number of black boxes required becomes a figure of merit distinguishing the algorithms.
A second reason to use one of these algorithms is that they are pointer algorithms which do no operations on elements of S other than testing for equality. An associative array implementation requires computing a hash function on the elements of S, or ordering them. But cycle detection can be applied in cases where neither of these are possible.
The classic example is Pollard's rho algorithm for integer factorization, which searches for a factor p of a given number n by looking for values xi and xj which are equal modulo p without knowing p in advance. This is done by computing the greatest common divisor of the difference xi − xj with a known multiple of p, namely n. If the gcd is non-trivial (neither 1 nor n), then the value is a proper factor of n, as desired. If n is not prime, it must have at least one factor p ≤ √n, and by the birthday paradox, a random function f has an expected cycle length (modulo p) of O(√p).
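A standard textbook formulation of this idea, combining Floyd's tortoise and hare (described below) with gcd computations, is sketched here; the iteration function x² + c mod n is the conventional choice, and the code is illustrative rather than the article's own:

import math

def pollard_rho(n, c=1):
    """Try to find a non-trivial factor of n; returns None on failure
    (in which case one retries with a different constant c)."""
    f = lambda v: (v * v + c) % n  # iteration function, reduced mod n
    x = y = 2
    d = 1
    while d == 1:
        x = f(x)           # tortoise: one step
        y = f(f(y))        # hare: two steps
        d = math.gcd(abs(x - y), n)
    return d if d != n else None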
Algorithms
If the input is given as a subroutine for calculating f, the cycle detection problem may be trivially solved using only λ + μ function applications, simply by computing the sequence of values xi and using a data structure such as a hash table to store these values and test whether each subsequent value has already been stored. However, the space complexity of this algorithm is proportional to λ + μ, unnecessarily large. Additionally, to implement this method as a pointer algorithm would require applying the equality test to each pair of values, resulting in quadratic time overall. Thus, research in this area has concentrated on two goals: using less space than this naive algorithm, and finding pointer algorithms that use fewer equality tests.
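A minimal sketch of this naive method (not given in the article; the names are illustrative) keeps a dictionary from each value to the index of its first occurrence:

def naive_cycle_detection(f, x0):
    """Return (lam, mu) using O(lam + mu) time and space."""
    seen = {}            # value -> index of first occurrence
    x, i = x0, 0
    while x not in seen:
        seen[x] = i
        x = f(x)
        i += 1
    mu = seen[x]         # the first repeated value is x_mu, the cycle start
    lam = i - mu         # the repeat happens at index mu + lam
    return lam, mu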
Floyd's tortoise and hare
Floyd's cycle-finding algorithm is a pointer algorithm that uses only two pointers, which move through the sequence at different speeds. It is also called the "tortoise and the hare algorithm", alluding to Aesop's fable of The Tortoise and the Hare.
The algorithm is named after Robert W. Floyd, who was credited with its invention by Donald Knuth. However, the algorithm does not appear in Floyd's published work, and this may be a misattribution: Floyd describes algorithms for listing all simple cycles in a directed graph in a 1967 paper, but this paper does not describe the cycle-finding problem in functional graphs that is the subject of this article. In fact, Knuth's statement (in 1969), attributing it to Floyd, without citation, is the first known appearance in print, and it thus may be a folk theorem, not attributable to a single individual.
The key insight in the algorithm is as follows. If there is a cycle, then, for any integers i ≥ μ and k ≥ 0, xi = xi+kλ, where λ is the length of the loop to be found, μ is the index of the first element of the cycle, and k is a whole integer representing the number of loops. Based on this, it can then be shown that xi = x2i for some i ≥ μ if and only if λ divides i (if i ≥ μ and λ divides i, then there exists some k such that 2i = i + kλ, which implies that x2i = xi; and if xi = x2i with i ≥ μ, then λ divides 2i − i = i). Thus, the algorithm only needs to check for repeated values of this special form, one twice as far from the start of the sequence as the other, to find a period ν of a repetition that is a multiple of λ. Once ν is found, the algorithm retraces the sequence from its start to find the first repeated value xμ in the sequence, using the fact that λ divides ν and therefore that xμ = xμ+ν. Finally, once the value of μ is known it is trivial to find the length λ of the shortest repeating cycle, by searching for the first position μ + λ for which xμ+λ = xμ.
The algorithm thus maintains two pointers into the given sequence, one (the tortoise) at xi, and the other (the hare) at x2i. At each step of the algorithm, it increases i by one, moving the tortoise one step forward and the hare two steps forward in the sequence, and then compares the sequence values at these two pointers. The smallest value of i > 0 for which the tortoise and hare point to equal values is the desired value ν.
The following Python code shows how this idea may be implemented as an algorithm.
def floyd(f, x0) -> (int, int):
"""Floyd's cycle detection algorithm."""
# Main phase of algorithm: finding a repetition x_i = x_2i.
# The hare moves twice as quickly as the tortoise and
# the distance between them increases by 1 at each step.
# Eventually they will both be inside the cycle and then,
# at some point, the distance between them will be
# divisible by the period λ.
tortoise = f(x0) # f(x0) is the element/node next to x0.
hare = f(f(x0))
while tortoise != hare:
tortoise = f(tortoise)
hare = f(f(hare))
# At this point the tortoise position, ν, which is also equal
# to the distance between hare and tortoise, is divisible by
# the period λ. So hare moving in cycle one step at a time,
# and tortoise (reset to x0) moving towards the cycle, will
# intersect at the beginning of the cycle. Because the
# distance between them is constant at 2ν, a multiple of λ,
# they will agree as soon as the tortoise reaches index μ.
# Find the position μ of first repetition.
mu = 0
tortoise = x0
while tortoise != hare:
tortoise = f(tortoise)
hare = f(hare) # Hare and tortoise move at same speed
mu += 1
# Find the length of the shortest cycle starting from x_μ
# The hare moves one step at a time while tortoise is still.
# lam is incremented until λ is found.
lam = 1
hare = f(tortoise)
while tortoise != hare:
hare = f(hare)
lam += 1
return lam, mu
This code only accesses the sequence by storing and copying pointers, function evaluations, and equality tests; therefore, it qualifies as a pointer algorithm. The algorithm uses O(λ + μ) operations of these types, and O(1) storage space.
Brent's algorithm
Richard P. Brent described an alternative cycle detection algorithm that, like the tortoise and hare algorithm, requires only two pointers into the sequence. However, it is based on a different principle: searching for the smallest power of two 2^i that is larger than both λ and μ. For i = 0, 1, 2, ..., the algorithm compares x_{2^i−1} with each subsequent sequence value up to the next power of two, stopping when it finds a match. It has two advantages compared to the tortoise and hare algorithm: it finds the correct length λ of the cycle directly, rather than needing to search for it in a subsequent stage, and its steps involve only one evaluation of the function f rather than three.
The following Python code shows how this technique works in more detail.
def brent(f, x0) -> (int, int):
"""Brent's cycle detection algorithm."""
# main phase: search successive powers of two
power = lam = 1
tortoise = x0
hare = f(x0) # f(x0) is the element/node next to x0.
# this assumes there is a cycle; otherwise this loop won't terminate
while tortoise != hare:
if power == lam: # time to start a new power of two?
tortoise = hare
power *= 2
lam = 0
hare = f(hare)
lam += 1
# Find the position of the first repetition of length λ
tortoise = hare = x0
for i in range(lam):
# range(lam) produces a list with the values 0, 1, ... , lam-1
hare = f(hare)
# The distance between the hare and tortoise is now λ.
# Next, the hare and tortoise move at same speed until they agree
mu = 0
while tortoise != hare:
tortoise = f(tortoise)
hare = f(hare)
mu += 1
return lam, mu
Like the tortoise and hare algorithm, this is a pointer algorithm that uses O(λ + μ) tests and function evaluations and O(1) storage space. It is not difficult to show that the number of function evaluations can never be higher than for Floyd's algorithm. Brent claims that, on average, his cycle finding algorithm runs around 36% more quickly than Floyd's and that it speeds up the Pollard rho algorithm by around 24%. He also performs an average case analysis for a randomized version of the algorithm in which the sequence of indices traced by the slower of the two pointers is not the powers of two themselves, but rather a randomized multiple of the powers of two. Although his main intended application was in integer factorization algorithms, Brent also discusses applications in testing pseudorandom number generators.
Gosper's algorithm
R. W. Gosper's algorithm finds the period λ, and the lower and upper bound of the starting point, μl and μu, of the first cycle. The difference between the lower and upper bound is of the same order as the period, i.e. μl + λ ≈ μu.
The algorithm maintains an array of tortoises Tj. For each xi:
For each j, compare xi to Tj.
If xi = Tj, a cycle has been detected; its length is the gap between i and the index at which the matching tortoise was saved.
If no match is found, set Tk = xi, where k is the number of trailing zeros in the binary representation of i + 1, i.e. the greatest power of 2 which divides i + 1.
If it is inconvenient to vary the number of comparisons as i increases, all of the Tj may be initialized in advance, but any match against an entry that has not yet been legitimately written must then be rejected.
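A compact sketch of this scheme follows (an interpretation of the description above, not Gosper's original code); it stores (index, value) pairs so the detected period can be read off directly. A saved tortoise either survives one full period, in which case it matches at a gap of exactly one period, or it is overwritten first, so the returned gap equals λ:

def gosper_sketch(f, x0):
    """Return the cycle length lam; tortoise slot j is overwritten every 2**(j+1) steps."""
    tortoises = []          # tortoises[j] = (saved index m, saved value x_m)
    x, i = x0, 0
    while True:
        for m, t in tortoises:
            if x == t:
                return i - m                        # gap equals the period
        k = ((i + 1) & -(i + 1)).bit_length() - 1   # trailing zeros of i+1
        if k < len(tortoises):
            tortoises[k] = (i, x)
        else:
            tortoises.append((i, x))                # k == len(tortoises) at each new power of two
        x = f(x)
        i += 1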
Advantages
The main features of Gosper's algorithm are that it is economical in space, very economical in evaluations of the generator function, and always finds the exact cycle length (never a multiple). The cost is a large number of equality comparisons. It could be roughly described as a concurrent version of Brent's algorithm. While Brent's algorithm uses a single tortoise, repositioned every time the hare passes a power of two, Gosper's algorithm uses several tortoises (several previous values are saved), which are roughly exponentially spaced. According to the note in HAKMEM item 132, this algorithm will detect repetition before the third occurrence of any value, i.e. the cycle will be iterated at most twice. HAKMEM also states that it is sufficient to store a logarithmic number of previous values; however, this only offers a saving if we know a priori that λ is significantly smaller than μ. The standard implementations store about log2(λ + μ) values. For example, assume the function values are 32-bit integers, so that λ + μ is at most 2^32. Then Gosper's algorithm will find the cycle after a number of function evaluations proportional to λ + μ, while consuming the space of 33 values (each value being a 32-bit integer).
Complexity
Upon the i-th evaluation of the generator function, the algorithm compares the generated value with the stored previous values, of which there are about log2 i. Therefore, the time complexity of this algorithm is O((λ + μ) log(λ + μ)). Since it stores Θ(log(λ + μ)) values, its space complexity is Θ(log(λ + μ)). This is under the usual transdichotomous model, assumed throughout this article, in which the size of the function values is constant. Without this assumption, we know it requires Ω(log(λ + μ)) bits to store each of the Θ(log(λ + μ)) distinct values, so the overall space complexity is Ω(log²(λ + μ)).
Time–space tradeoffs
A number of authors have studied techniques for cycle detection that use more memory than Floyd's and Brent's methods, but detect cycles more quickly. In general these methods store several previously-computed sequence values, and test whether each new value equals one of the previously-computed values. In order to do so quickly, they typically use a hash table or similar data structure for storing the previously-computed values, and therefore are not pointer algorithms: in particular, they usually cannot be applied to Pollard's rho algorithm. Where these methods differ is in how they determine which values to store. Following Nivasch, we survey these techniques briefly.
Brent already describes variations of his technique in which the indices of saved sequence values are powers of a number R other than two. By choosing R to be a number close to one, and storing the sequence values at indices that are near a sequence of consecutive powers of R, a cycle detection algorithm can use a number of function evaluations that is within an arbitrarily small factor of the optimum λ + μ.
Sedgewick, Szymanski, and Yao provide a method that uses M memory cells and requires in the worst case only (λ + μ)(1 + cM^{-1/2}) function evaluations, for some constant c, which they show to be optimal. The technique involves maintaining a numerical parameter d, storing in a table only those positions in the sequence that are multiples of d, and clearing the table and doubling d whenever too many values have been stored.
Several authors have described distinguished point methods that store function values in a table based on a criterion involving the values, rather than (as in the method of Sedgewick et al.) based on their positions. For instance, values equal to zero modulo some value d might be stored. More simply, Nivasch credits D. P. Woodruff with the suggestion of storing a random sample of previously seen values, making an appropriate random choice at each step so that the sample remains random.
Nivasch describes an algorithm that does not use a fixed amount of memory, but for which the expected amount of memory used (under the assumption that the input function is random) is logarithmic in the sequence length. An item is stored in the memory table, with this technique, when no later item has a smaller value. As Nivasch shows, the items with this technique can be maintained using a stack data structure, and each successive sequence value need be compared only to the top of the stack. The algorithm terminates when the repeated sequence element with smallest value is found. Running the same algorithm with multiple stacks, using random permutations of the values to reorder the values within each stack, allows a time–space tradeoff similar to the previous algorithms. However, even the version of this algorithm with a single stack is not a pointer algorithm, due to the comparisons needed to determine which of two values is smaller.
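A sketch of the single-stack version of this algorithm (assuming, as noted above, that values can be compared for order; the names are illustrative, and finding μ would require a follow-up pass):

def nivasch(f, x0):
    """Return the cycle length lam. The stack holds (value, index) pairs
    with values strictly increasing from bottom to top; the cycle is
    detected at the second occurrence of the cycle's minimum value."""
    stack = []
    x, i = x0, 0
    while True:
        while stack and stack[-1][0] > x:
            stack.pop()                 # a smaller later item has arrived
        if stack and stack[-1][0] == x:
            return i - stack[-1][1]     # exactly one period apart
        stack.append((x, i))
        x = f(x)
        i += 1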
Any cycle detection algorithm that stores at most M values from the input sequence must perform at least (λ + μ)(1 + cM^{-1/2}) function evaluations, for some constant c.
Applications
Cycle detection has been used in many applications.
Determining the cycle length of a pseudorandom number generator is one measure of its strength. This is the application cited by Knuth in describing Floyd's method. Brent describes the results of testing a linear congruential generator in this fashion; its period turned out to be significantly smaller than advertised. For more complex generators, the sequence of values in which the cycle is to be found may not represent the output of the generator, but rather its internal state.
Several number-theoretic algorithms are based on cycle detection, including Pollard's rho algorithm for integer factorization and his related kangaroo algorithm for the discrete logarithm problem.
In cryptographic applications, the ability to find two distinct values xμ−1 and xλ+μ−1 mapped by some cryptographic function ƒ to the same value xμ may indicate a weakness in ƒ. For instance, Quisquater and Delescaille apply cycle detection algorithms in the search for a message and a pair of Data Encryption Standard keys that map that message to the same encrypted value; Kaliski, Rivest, and Sherman also use cycle detection algorithms to attack DES. The technique may also be used to find a collision in a cryptographic hash function.
Cycle detection may be helpful as a way of discovering infinite loops in certain types of computer programs.
Periodic configurations in cellular automaton simulations may be found by applying cycle detection algorithms to the sequence of automaton states.
Shape analysis of linked list data structures is a technique for verifying the correctness of an algorithm using those structures. If a node in the list incorrectly points to an earlier node in the same list, the structure will form a cycle that can be detected by these algorithms. In Common Lisp, the S-expression printer, under control of the *print-circle* variable, detects circular list structure and prints it compactly.
Teske describes applications in computational group theory: determining the structure of an Abelian group from a set of its generators. The cryptographic algorithms of Kaliski et al. may also be viewed as attempting to infer the structure of an unknown group.
One survey briefly mentions an application to computer simulation of celestial mechanics, attributed to William Kahan. In this application, cycle detection in the phase space of an orbital system may be used to determine whether the system is periodic to within the accuracy of the simulation.
In Mandelbrot set fractal generation, some performance techniques are used to speed up the image generation. One of them is called "period checking", which consists of finding the cycles in a point's orbit; implementing it requires one of the cycle detection algorithms described above.
References
External links
Gabriel Nivasch, The Cycle Detection Problem and the Stack Algorithm
Tortoise and Hare, Portland Pattern Repository
Floyd's Cycle Detection Algorithm (The Tortoise and the Hare)
Brent's Cycle Detection Algorithm (The Teleporting Turtle)
Fixed points (mathematics)
Combinatorial algorithms
Articles with example Python (programming language) code
The Tortoise and the Hare | Cycle detection | [
"Mathematics"
] | 4,350 | [
"Combinatorial algorithms",
"Mathematical analysis",
"Fixed points (mathematics)",
"Computational mathematics",
"Combinatorics",
"Topology",
"Dynamical systems"
] |
13,910,775 | https://en.wikipedia.org/wiki/Advanced%20steam%20technology | Advanced steam technology (sometimes known as modern steam) reflects an approach to the technical development of the steam engine intended for a wider variety of applications than has recently been the case. Particular attention has been given to endemic problems that led to the demise of steam power in small- to medium-scale commercial applications: excessive pollution, maintenance costs, labour-intensive operation, low power/weight ratio, and low overall thermal efficiency; where steam power has generally now been superseded by the internal combustion engine or by electrical power drawn from an electrical grid. The only steam installations that are in widespread use are the highly efficient thermal power plants used for generating electricity on a large scale. In contrast, the proposed steam engines may be for stationary, road, rail or marine use.
Improving steam traction
Although most references to "Modern Steam" apply to developments since the 1970s, certain aspects of advanced steam technology can be discerned throughout the 20th century, notably automatic boiler control along with rapid startup.
Abner Doble
In 1922, Abner Doble developed an electro-mechanical system that reacted simultaneously to steam temperature and pressure, starting and stopping the feed pumps whilst igniting and cutting out the burner according to boiler pressure. The contraflow monotube boiler had a high working pressure but contained so little water in circulation as to present no risk of explosion. This type of boiler was continuously developed in the US, Britain and Germany throughout the 1930s and into the 1950s for use in cars, buses, trucks, railcars, shunting locomotives (US: switchers), a speedboat and, in 1933, a converted Travel Air 2000 biplane.
Sentinel
In the UK, Sentinel Waggon Works developed a vertical water-tube boiler running at high pressure, which was used in road vehicles, shunting locomotives and railcars. Steam could be raised much more quickly than with a conventional locomotive boiler.
Anderson and Holcroft
Trials of the Anderson condensing system on the Southern Railway (Great Britain) took place between 1930 and 1935. Condensing apparatus has not been widely used on steam locomotives, because of the additional complexity and weight, but it offers four potential advantages:
Improved thermal efficiency
Reduced water consumption
Reduced boiler maintenance for limescale removal
Reduced noise
The Anderson condensing system uses a process known as mechanical vapor recompression. It was devised by a Glasgow marine engineer, Harry Percival Harvey Anderson. The theory was that, by removing around 600 of the 970 British thermal units present in each pound of steam (1400 of the 2260 kilojoules in each kilogram), it would be possible to return the exhaust steam to the boiler by a pump which would consume only 1–2% of the engine's power output. Between 1925 and 1927 Anderson, and another Glasgow engineer John McCullum (some sources give McCallum), conducted experiments on a stationary steam plant with encouraging results. A company, Steam Heat Conservation (SHC), was formed and a demonstration of Anderson's system was arranged at Surbiton Electricity Generating Station.
SHC was interested in applying the system to a railway locomotive and contacted Richard Maunsell of the Southern Railway. Maunsell requested that a controlled test be carried out at Surbiton and this was done about 1929. Maunsell's technical assistant, Harold Holcroft, was present and a fuel saving of 29% was recorded, compared to conventional atmospheric working. The Southern Railway converted SECR N class locomotive number A816 (later 1816 and 31816) to the Anderson system in 1930. The locomotive underwent trials and initial results were encouraging. After an uphill trial from Eastleigh to Litchfield Summit, Holcroft is reported as saying:
"In the ordinary way this would have created much noise and clouds of steam, but with the condensing set in action it was all absorbed with the ease with which snow would melt in a furnace! The engine was as silent as an electric locomotive and the only faint noises were due to slight pounding of the rods and a small blow at a piston gland. This had to be experienced to be believed; but for the regulator being wide open and the reverser well over, one would have imagined that the second engine (an LSWR T14 class that had been provided as a back-up) was propelling the first."
The trials continued until 1934 but various problems arose, mostly with the fan for forced draught, and the project went no further. The locomotive was converted back to standard form in 1935.
André Chapelon
The work of French mechanical engineer André Chapelon in applying scientific analysis and a strive for thermal efficiency was an early example of advanced steam technology. Chapelon's protégé Livio Dante Porta continued Chapelon's work.
Livio Dante Porta
Postwar in the late 1940s and 1950s some designers worked on modernising steam locomotives. The Argentinian engineer Livio Dante Porta in the development of Stephensonian railway locomotives incorporating advanced steam technology was a precursor of the 'Modern Steam' movement from 1948. Where possible, Porta much preferred to design new locomotives, but more often in practice he was forced to radically update old ones to incorporate the new technology.
Bulleid and Riddles
In Britain the SR Leader class of c. 1949 by Oliver Bulleid and the British Rail ‘Standard’ class steam locomotives of the 1950s by Robert Riddles, particularly the BR Standard Class 9F, were used to trial new steam locomotive design features, including the Franco-Crosti boiler. On moving to Ireland, Bulleid also designed CIÉ No. CC1 which had many novel features.
Achieving the ends
The Sir Biscoe Tritton Lecture, given by Roger Waller of the DLM company to the Institution of Mechanical Engineers in 2003, gives an idea of how problems in steam power are being addressed. Waller refers mainly to some rack and pinion mountain railway locomotives that were newly built from 1992 to 1998. They were developed for three companies in Switzerland and Austria and continued to work on two of these lines. The new steam locomotives burn the same grade of light oil as their diesel counterparts, and all demonstrate the same advantages of ready availability and reduced labour cost; at the same time, they have been shown to greatly reduce air and ground pollution. Their economic superiority has meant that they have largely replaced the diesel locomotives and railcars previously operating the line; additionally, steam locomotives are a tourist attraction.
A parallel line of development was the return to steam power of the old Lake Geneva paddle steamer Montreux that had been refitted with a diesel-electric engine in the 1960s. Economic aims similar to those achieved with the rack locomotives were pursued through automatic control of the light-oil-fired boiler and remote control of the engine from the bridge, enabling the steamship to be operated by a crew of the same size as a motor ship.
Carbon neutrality
A power unit based on advanced steam technology burning fossil fuel will inevitably emit carbon dioxide, a long-lasting greenhouse gas. However, significant reductions of other pollutants such as CO and NOx are achievable by steam compared to other combustion technologies, since it does not involve explosive combustion, thus removing the need for add-ons (such as filters) or special preparation of fuel.
If renewable fuel such as wood or other biofuel is used then the system could be carbon neutral. The use of biofuel remains controversial; however, liquid biofuels are easier to manufacture for steam plant than for diesels as they do not demand the stringent fuel standards required to protect diesel injectors.
Advantages of advanced steam technology
In principle, combustion and power delivery of steam plant can be considered separate stages. While high overall thermal efficiency may be difficult to achieve, largely due to the extra stage of generating a working fluid between combustion and power delivery attributable mainly to leakages and heat losses, the separation of the processes allows specific problems to be addressed at each stage without revising the whole system every time. For instance, the boiler or steam generator can be adapted to use any heat source, whether obtained from solid, liquid or gaseous fuel, and can use waste heat. Whatever the choice, it will have no direct effect on the design of the engine unit, as that only ever has to deal with steam.
Early twenty-first century
Small-scale stationary plant
This project mainly includes combined electrical generation and heating systems for private homes and small villages burning wood or bamboo chips. This is intended to replace 2-stroke donkey engines and small diesel power plants. Drastic reduction in noise level is one immediate benefit of a steam-powered small plant. Ted Pritchard, of Melbourne, Australia, was intensively developing this type of unit from 2002 until his death in 2007. The company Pritchard Power (now Uniflow Power) stated in 2010 that they continue to develop the stationary S5000, and that a prototype had been built and was being tested, and designs were being refined for market ready products.
Until 2006 a German company called Enginion was actively developing a Steamcell, a micro CHP unit about the size of a PC tower for domestic use. It seems that by 2008 it had merged with Berlin company AMOVIS.
Since 2012, a French company, EXOES, has been selling to industrial firms a patented Rankine-cycle engine designed to work with many heat sources, such as concentrated solar power, biomass, or fossil fuels. The system, called "SHAPE" (Sustainable Heat And Power Engine), converts heat into electricity. The SHAPE engine is suitable for embedded and stationary applications. A SHAPE engine has been integrated into a biomass boiler and into a concentrated solar power system. The company is planning to work with automobile manufacturers, long-haul truck manufacturers, and railway corporations.
A similar unit is marketed by Powertherm, a subsidiary of Spilling (see below).
A company in India manufactures steam-powered generators in a range of sizes from 4 hp to 50 hp. They also offer a number of different mills that can be powered by their engines.
As regards technology, note that the Quasiturbine is a uniflow rotary steam engine in which steam is admitted in hot areas and exhausted in cold areas.
Small fixed stationary plant
The Spilling company produces a variety of small fixed stationary plant adapted to biomass combustion or power derived from waste heat or pressure recovery.
The Finnish company Steammotor Finland has developed a small rotary steam engine that runs with an 800 kW steam generator. The engines are planned to produce electricity in wood-chip-fired power plants. According to the company, the steam engine, named Quadrum, achieves 27% efficiency and runs with 180 °C steam at 8 bar pressure, while a corresponding steam turbine produces just 15% efficiency and requires a steam temperature of 240 °C and a pressure of 40 bar. The high efficiency comes from a patented crank mechanism that gives a smooth, pulseless torque. The company believes that by further developing the construction there is potential to reach efficiencies as high as 30–35%.
Automotive uses
During the first oil crisis of the 1970s, a number of investigations into steam technology were initiated by large automobile corporations, although as the crisis died down, impetus was soon lost.
Australian engineer Ted Pritchard's main field of research from the late 1950s until the 1970s was the building of several efficient steam power units working on the uniflow system, adapted to a small truck and two cars. One of the cars achieved the lowest emissions figures of that time.
IAV, a Berlin-based R&D company that later developed the Steamcell, worked during the 1990s on the single-cylinder ZEE (Zero Emissions Engine), followed by the compact 3-cylinder EZEE (Equal-to-Zero-Emissions Engine) designed to fit in the engine compartment of a Škoda Fabia small family saloon. All these engines made heavy use of flameless ceramic heat cells, both for the steam generator and at strategic boost points where steam was injected into the cylinder(s).
Rail use
No. 52 8055, a rebuild of an existing locomotive (1943: built as 52 1649 (DRB); 1962: reconstructed as 52 8055 (DR); 1992: 52 8055 (EFZ - Eisenbahnfreunde Zollernbahn e.V.); 2003: rebuilt and modernized as 52 8055 NG (DLM - Dampflokomotiv- und Maschinenfabrik)).
The 5AT project, a proposal for an entirely new locomotive (Britain, 2000s).
The ACE 3000 project, proposed by locomotive enthusiast Ross Rowland during the 1970s oil crisis. The locomotive would look like a diesel, and was designed to compete with current diesel locomotives by using coal, much cheaper than oil at the time. The ACE 3000 would feature many new technologies, such as automatic firing and water-level control. The locomotive would be able to be connected to a diesel unit and run in unison with it, so that it would not be necessary to hook up two identical locomotives. The ACE 3000 was one of the most publicised attempts at modern steam, but the project ultimately failed due to lack of funds.
The CSR Project 130, intends to develop a modern steam locomotive (based on an existing ATSF 3460 class locomotive) capable of higher-speed passenger transport at more than 100 mph, and tested up to 130 mph (hence the name Project 130). It is proposed to be carbon-neutral, as it will run on torrefied biomass as solid fuel (unlike all other contemporary designs, which mandate liquid fuel). The development is a joint effort between University of Minnesota's Institute on the Environment (IonE) and Sustainable Rail International, a non-profit employing railway experts and steam engineers established for the purpose.
Novel versus conventional layout
Both 52 8055 and the proposed 5AT are of conventional layout, with the cab at the back, while the ACE 3000 had the cab located at the front. Other approaches are possible, especially with liquid fuel firing. For example:
Cab-forward type This is a well-tried design with the potential for a large power output and would provide the driver good visibility. Being single-ended, it would have to be turned on a turntable, or a triangular junction. Example: Southern Pacific 4294.
Garratt type Another well-tried design with large power potential. Example: South Australian Railways 400 class. A future design could include shorter water tanks, and a cab at each end, to give the driver a good view in either direction.
With power bogies
A design mounted on power bogies with a compact water-tube boiler similar to Sentinel designs of the 1930s. Example: Sentinel-Cammell locomotive.
Fireless locomotives
Another proposal for advanced steam technology is to revive the fireless locomotive, which runs on stored steam independently pre-generated. An example is the Solar Steam Train project in Sacramento, California.
See also
Combined gas and steam, a combined cycle in which otherwise wasted heat from a gas turbine is used to generate steam to drive a steam turbine
List of steam technology patents
Steam car
Steam locomotives of the 21st century
Steam motor
Uniflow steam engine
References
Steam engines
Steam locomotive technologies
Steam power
History of the steam engine | Advanced steam technology | [
"Physics"
] | 3,075 | [
"Power (physics)",
"Steam power",
"Physical quantities"
] |
13,920,542 | https://en.wikipedia.org/wiki/Wolf%20summation | The Wolf summation is a method for computing the electrostatic interactions of systems (e.g. crystals). This method is generally more computationally efficient than the Ewald summation. It was proposed by Dieter Wolf.
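As a rough illustration of the idea behind the method (a sketch under stated assumptions: the damping parameter alpha and cutoff r_cut are free method parameters, unit prefactors are omitted, and the self-energy correction terms of the full method are not shown), the pairwise contribution in Wolf-style summation is a damped, shifted Coulomb term that vanishes at the cutoff:

import math

def wolf_pair_energy(q_i, q_j, r, alpha, r_cut):
    """erfc-damped, shifted Coulomb pair term; zero beyond the cutoff."""
    if r >= r_cut:
        return 0.0
    damped = math.erfc(alpha * r) / r
    shift = math.erfc(alpha * r_cut) / r_cut   # makes the term vanish at r_cut
    return q_i * q_j * (damped - shift)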
References
See also
Wolf method on SklogWiki
Potential theory
Computational physics | Wolf summation | [
"Physics",
"Chemistry",
"Mathematics"
] | 61 | [
" and optical physics stubs",
"Functions and mappings",
"Mathematical objects",
"Potential theory",
"Computational physics",
"Mathematical relations",
" molecular",
"Atomic",
"Physical chemistry stubs",
"Computational physics stubs",
" and optical physics"
] |
13,924,093 | https://en.wikipedia.org/wiki/Environmental%20epidemiology | Environmental epidemiology is a branch of epidemiology concerned with determining how environmental exposures impact human health. This field seeks to understand how various external risk factors may predispose to or protect against disease, illness, injury, developmental abnormalities, or death. These factors may be naturally occurring or may be introduced into environments where people live, work, and play.
Scope
The World Health Organization European Centre for Environment and Health (WHO-ECEH) claims that 1.4 million deaths per year in Europe alone are due to avoidable environmental exposures. Environmental exposures can be broadly categorized into those that are proximate (e.g., directly leading to a health condition), including chemicals, physical agents, and microbiological pathogens, and those that are distal (e.g., indirectly leading to a health condition), such as socioeconomic conditions, climate change, and other broad-scale environmental changes. Proximate exposures occur through air, food, water, and skin contact. Distal exposures cause adverse health conditions directly by altering proximate exposures, and indirectly through changes in ecosystems and other support systems for human health.
Environmental epidemiology research can inform government policy change, risk management activities, and development of environmental standards. Vulnerability is the summation of all risk and protective factors that ultimately determine whether an individual or subpopulation experiences adverse health outcomes when an exposure to an environmental agent occurs. Sensitivity is an individual's or subpopulation's increased responsiveness, primarily for biological reasons, to that exposure. Biological sensitivity may be related to developmental stage, pre-existing medical conditions, acquired factors, and genetic factors. Socioeconomic factors also play a critical role in altering vulnerability and sensitivity to environmentally mediated factors by increasing the likelihood of exposure to harmful agents, interacting with biological factors that mediate risk, and/or leading to differences in the ability to prepare for or cope with exposures or early phases of illness. Populations living in certain regions may be at increased risk due to location and the environmental characteristics of a region.
History
Acknowledgement that the environment impacts human health can be found as far back as 460 B.C. in Hippocrates' essay On Airs, Waters, and Places. In it, he urges physicians to contemplate how factors such as drinking water can impact the health of their patients. Another famous example of environment-health interaction is the lead poisoning experienced by the ancient Romans, who used lead in their water pipes and kitchen pottery. Vitruvius, a Roman architect, wrote to discourage the use of lead pipes, citing health concerns:
"Water conducted through earthen pipes is more wholesome than that through lead; indeed that conveyed in lead must be injurious, because from it white lead is obtained, and this is said to be injurious to the human system. Hence, if what is generated from it is pernicious, there can be no doubt that itself cannot be a wholesome body. This may be verified by observing the workers in lead, who are of a pallid colour; for in casting lead, the fumes from it fixing on the different members, and daily burning them, destroy the vigour of the blood; water should therefore on no account be conducted in leaden pipes if we are desirous that it should be wholesome. That the flavour of that conveyed in earthen pipes is better, is shewn at our daily meals, for all those whose tables are furnished with silver vessels, nevertheless use those made of earth, from the purity of the flavour being preserved in them"
Generally considered to be one of the founders of modern epidemiology, John Snow conducted perhaps the first environmental epidemiology study in 1854. He showed that London residents who drank sewage-contaminated water were more likely to develop cholera than those who drank clean water.
U.S. government regulation
Throughout the 20th century, the United States government passed legislation and regulations to address environmental health concerns. A partial list includes the Clean Air Act (first passed in 1963 and substantially expanded in 1970), the Clean Water Act (1972), the Safe Drinking Water Act (1974), the Toxic Substances Control Act (1976), and the Comprehensive Environmental Response, Compensation, and Liability Act ("Superfund", 1980).
Precautionary principle
The precautionary principle is a concept in the environmental sciences holding that if an activity is suspected to cause harm, we should not wait until sufficient evidence of that harm is collected before taking action. It has its roots in German environmental policy, and was adopted in 1990 by the participants of the North Sea Conferences in The Hague by declaration. In 2000, the European Union began to formally adopt the precautionary principle into its laws as a Communication from the European Commission. The United States has resisted adoption of this principle, citing concerns that acting on unsettled science could impose obligations for expensive control measures, especially as related to greenhouse gas emissions.
Investigations
Observational studies
Environmental epidemiology studies are most frequently observational in nature, meaning researchers look at people's exposures to environmental factors without intervening and then observe the patterns that emerge. This is because it is often unethical or infeasible to conduct an experimental study of environmental factors in humans. For example, a researcher cannot ask some of their study subjects to smoke cigarettes to see whether they have poorer health outcomes than subjects who are asked not to smoke. The study types most often employed in environmental epidemiology are:
Cohort studies
Case-control studies
Cross-sectional studies
Estimating risk
Epidemiologic studies that assess how an environmental exposure and a health outcome may be connected use a variety of biostatistical approaches to quantify the relationship. Risk assessment tries to answer questions such as "How does an individual's risk for disease A change when they are exposed to substance B?" and "How many excess cases of disease A can we prevent if exposure to substance B is lowered by X amount?"
Some statistics and approaches used to estimate risk are listed below; a small worked example follows the list:
Odds ratio
Relative risk
Hazards ratio
Regression modeling
Mortality rates
Attributable risk
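As a worked illustration of the first two measures and of attributable risk, consider a hypothetical 2×2 table of exposure versus disease. The Python sketch below uses made-up counts; the variable names and numbers are assumptions chosen only to show the arithmetic, not real data.

```python
# Hypothetical 2x2 table (illustrative counts, not real data):
#                 disease   no disease
# exposed          a = 40      b = 160
# unexposed        c = 10      d = 190
a, b, c, d = 40, 160, 10, 190

odds_ratio = (a * d) / (b * c)             # odds of disease, exposed vs. unexposed
risk_exposed = a / (a + b)                 # incidence proportion among the exposed
risk_unexposed = c / (c + d)               # incidence proportion among the unexposed
relative_risk = risk_exposed / risk_unexposed
attributable_risk = risk_exposed - risk_unexposed  # excess risk among the exposed

print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}, AR = {attributable_risk:.3f}")
# OR = 4.75, RR = 4.00, AR = 0.150
```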
Ethics
Environmental epidemiology studies often identify associations between pollutants in the air, water, or food and adverse health outcomes; these findings can be inconvenient for polluting industries. Environmental epidemiologists are confronted with significant ethical challenges because of the involvement of powerful stakeholders who may try to influence the results or interpretation of their studies. Epidemiologic findings can sometimes have direct effects on industry profits. Because of these concerns, environmental epidemiology maintains guidelines for ethical practice. The International Society for Environmental Epidemiology (ISEE) first adopted ethics guidelines in the late 1990s. The guidelines are maintained by its Ethics and Philosophy Committee, one of the earliest and most enduring active ethics committees in the field of epidemiology. Since its inception in 1991, the Committee has taken an active role in supporting ethical conduct and promulgating Ethics Guidelines for Environmental Epidemiologists. The most recent Ethics Guidelines were adopted in 2023.
Bradford Hill factors
To differentiate between correlation and causation, epidemiologists often consider a set of factors to determine the likelihood that an observed relationship between an environmental exposure and health consequence is truly causal. In 1965, Austin Bradford Hill devised a set of postulates to help him determine if there was sufficient evidence to conclude that cigarette smoking causes lung cancer.
The Bradford Hill criteria are:
Strength of association
Consistency of evidence
Specificity
Temporality
Biological gradient
Plausibility
Coherence
Experiment
Analogy
These factors are generally considered to be a guide to scientists, and it is not necessary that all of the factors be met for a consensus to be reached.
See also
Epidemiology
Envirome
Environmental health
Environmental science
Epigenetics
Exposome
Occupational epidemiology
Occupational safety and health
Pollution
Air pollution
References
Further reading
External links
Environmental Epidemiology journal
International Epidemiological Association
International Society for Environmental Epidemiology
Journal of Exposure Science and Environmental Epidemiology
Epidemiology journal
Environmental Health Perspectives (news and peer-reviewed research journal published by the National Institute of Environmental Health Sciences): https://ehp.niehs.nih.gov
Environmental Health News: current events in environmental health
Epidemiology
Environmental health | Environmental epidemiology | [
"Environmental_science"
] | 1,648 | [
"Epidemiology",
"Environmental social science"
] |
13,924,377 | https://en.wikipedia.org/wiki/Molecular%20epidemiology | Molecular epidemiology is a branch of epidemiology and medical science that focuses on the contribution of potential genetic and environmental risk factors, identified at the molecular level, to the etiology, distribution and prevention of disease within families and across populations. This field has emerged from the integration of molecular biology into traditional epidemiological research. Molecular epidemiology improves our understanding of the pathogenesis of disease by identifying specific pathways, molecules and genes that influence the risk of developing disease. More broadly, it seeks to establish understanding of how the interactions between genetic traits and environmental exposures result in disease.
History
The term "molecular epidemiology" was first coined by Edwin D. Kilbourne in a 1973 article entitled "The molecular epidemiology of influenza". The term became more formalized with the publication of the first book on molecular epidemiology, Molecular Epidemiology: Principles and Practice by Paul A. Schulte and Frederica Perera. At the heart of this book is the impact of advances in molecular research, which enabled the measurement and use of biomarkers as a vital tool for linking traditional molecular and epidemiological research strategies and for understanding the underlying mechanisms of disease in populations.
Modern use
While most molecular epidemiology studies use conventional disease designation systems for an outcome (together with exposures measured at the molecular level), compelling evidence indicates that disease evolution is an inherently heterogeneous process that differs from person to person. Conceptually, each individual has a unique disease process different from any other individual ("the unique disease principle"), given the uniqueness of the exposome and its unique influence on the molecular pathologic process in each individual. Studies examining the relationship between an exposure and the molecular pathologic signature of disease (particularly cancer) became increasingly common throughout the 2000s. However, the use of molecular pathology in epidemiology posed unique challenges, including a lack of standardized methodologies and guidelines and a paucity of interdisciplinary experts and training programs. The use of "molecular epidemiology" for this type of research masked the presence of these challenges, and hindered the development of methods and guidelines. Furthermore, the concept of disease heterogeneity appears to conflict with the premise that individuals with the same disease name have similar etiologies and disease processes.
Analytical methods
The genome of a bacterial species fundamentally determines its identity. Thus, gel electrophoresis techniques such as pulsed-field gel electrophoresis can be used in molecular epidemiology to comparatively analyze patterns of bacterial chromosomal fragments and to elucidate the genomic content of bacterial cells. Due to its widespread use and its ability to yield epidemiological information about most bacterial pathogens based on their molecular markers, pulsed-field gel electrophoresis is relied upon heavily in molecular epidemiological studies.
Applications
Molecular epidemiology allows for an understanding of the molecular outcomes and implications of diet, lifestyle, and environmental exposure, particularly how these choices and exposures result in acquired genetic mutations and how these mutations are distributed throughout selected populations through the use of biomarkers and genetic information. Molecular epidemiological studies are able to provide additional understanding of previously-identified risk factors and disease mechanisms. Specific applications include:
Molecular surveillance of disease risk factors
Measuring the geographical and temporal distribution of disease risk factors
Characterizing the evolution of pathogens and classifying new pathogen species
Criticism
While the use of advanced molecular analysis techniques within the field of molecular epidemiology provides the larger field of epidemiology with greater means of analysis, Miquel Porta identified several challenges that the field of molecular epidemiology faces, particularly selecting and incorporating the requisite applicable data in an unbiased manner. The limitations of molecular epidemiological studies are similar in nature to those of generic epidemiological studies: convenience samples (of both the target population and the genetic information), small sample sizes, inappropriate statistical methods, poor quality control, and poorly defined target populations.
See also
Genetic epidemiology
Genome-wide association study
Genomics
Molecular medicine
Personalized medicine
Precision medicine
References
Epidemiology
Molecular genetics
Global health | Molecular epidemiology | [
"Chemistry",
"Biology",
"Environmental_science"
] | 858 | [
"Epidemiology",
"Molecular genetics",
"Environmental social science",
"Molecular biology"
] |
4,404,564 | https://en.wikipedia.org/wiki/Method%20of%20image%20charges | The method of image charges (also known as the method of images and method of mirror charges) is a basic problem-solving tool in electrostatics. The name originates from the replacement of certain elements in the original layout with fictitious charges, which replicates the boundary conditions of the problem (see Dirichlet boundary conditions or Neumann boundary conditions).
The validity of the method of image charges rests upon a corollary of the uniqueness theorem, which states that the electric potential in a volume V is uniquely determined if both the charge density throughout the region and the value of the electric potential on all boundaries are specified. Alternatively, application of this corollary to the differential form of Gauss's law shows that in a volume V surrounded by conductors and containing a specified charge density ρ, the electric field is uniquely determined if the total charge on each conductor is given. Possessing knowledge of either the electric potential or the electric field and the corresponding boundary conditions, we can swap the charge distribution we are considering for one with a configuration that is easier to analyze, so long as it satisfies Poisson's equation in the region of interest and assumes the correct values at the boundaries.
Reflection in a conducting plane
Point charges
The simplest example of the method of image charges is that of a point charge, with charge q, located at $(0, 0, a)$ above an infinite grounded (i.e. held at $V = 0$) conducting plate in the $xy$-plane. To simplify this problem, we may replace the plate of equipotential with a charge $-q$, located at $(0, 0, -a)$. This arrangement will produce the same electric field at any point for which $z > 0$ (i.e., above the conducting plate), and satisfies the boundary condition that the potential along the plate must be zero. This situation is equivalent to the original setup, and so the force on the real charge can now be calculated with Coulomb's law between two point charges: it is attractive, with magnitude $q^2 / \left[4\pi\varepsilon_0 (2a)^2\right]$.
The potential at any point in space, due to these two point charges of charge $+q$ at $+a$ and $-q$ at $-a$ on the $z$-axis, is given in cylindrical coordinates as

$$V(\rho, \varphi, z) = \frac{1}{4\pi\varepsilon_0}\left(\frac{q}{\sqrt{\rho^2 + (z - a)^2}} - \frac{q}{\sqrt{\rho^2 + (z + a)^2}}\right).$$
The surface charge density on the grounded plane follows from $\sigma = -\varepsilon_0 \left.\partial V / \partial z\right|_{z=0}$ and is therefore given by

$$\sigma(\rho) = \frac{-q a}{2\pi\left(\rho^2 + a^2\right)^{3/2}}.$$
In addition, the total charge induced on the conducting plane will be the integral of the charge density over the entire plane, so:

$$Q_t = \int_0^{2\pi}\!\!\int_0^{\infty} \sigma(\rho)\, \rho \, d\rho \, d\varphi = -q.$$
The total charge induced on the plane turns out to be simply $-q$. This can also be seen from Gauss's law: at large distances the charge and its image together form a dipole, whose field decreases as the cube of the distance, so the total flux of the field through an infinitely large sphere vanishes.
Because electric fields satisfy the superposition principle, a conducting plane below multiple point charges can be replaced by the mirror images of each of the charges individually, with no other modifications necessary.
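These results are straightforward to check numerically. The Python sketch below evaluates the two-charge potential in units where q = a = 1 and 4πε₀ = 1, confirms that it vanishes on the plane, and integrates the induced surface charge density; the grid spacing, truncation radius, and tolerance are arbitrary illustrative choices.

```python
import math

q, a = 1.0, 1.0  # units with 4*pi*eps0 = 1

def potential(rho, z):
    """Potential of the real charge +q at z = a plus its image -q at z = -a."""
    return q / math.sqrt(rho**2 + (z - a)**2) - q / math.sqrt(rho**2 + (z + a)**2)

# The potential vanishes everywhere on the grounded plane z = 0
assert all(abs(potential(rho, 0.0)) < 1e-12 for rho in (0.1, 1.0, 5.0, 50.0))

def sigma(rho):
    """Induced surface charge density on the plane."""
    return -q * a / (2.0 * math.pi * (rho**2 + a**2) ** 1.5)

# Integrate sigma over the plane; the exact answer is -q, and truncating
# at rho_max = 100 leaves out roughly a/rho_max = 1% of the induced charge.
drho, Q, rho = 1e-3, 0.0, 0.5e-3
while rho < 100.0:
    Q += sigma(rho) * 2.0 * math.pi * rho * drho
    rho += drho
print(f"induced charge ≈ {Q:.4f} (exact: {-q})")
```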
Electric dipole moments
The image of an electric dipole moment $\mathbf{p}$ at $(0, 0, a)$ above an infinite grounded conducting plane in the $xy$-plane is a dipole moment at $(0, 0, -a)$ with equal magnitude and direction rotated azimuthally by π. That is, a dipole moment with Cartesian components $(p_x, p_y, p_z)$ will have an image dipole moment $(-p_x, -p_y, p_z)$. Writing θ for the angle between the dipole moment and the surface normal, the dipole experiences an attractive force in the z direction, given by

$$F_z = -\frac{3 p^2 \left(1 + \cos^2\theta\right)}{4\pi\varepsilon_0 \,(2a)^4},$$

and a torque of magnitude

$$\tau = \frac{p^2 \sin 2\theta}{8\pi\varepsilon_0 \,(2a)^3}$$

about the axis perpendicular to the plane containing the dipole and the surface normal, which rotates the dipole toward the orientation normal to the conducting plane.
Reflection in a dielectric planar interface
Similar to the conducting plane, the case of a planar interface between two different dielectric media can be considered. If a point charge $q$ is placed in the dielectric that has the dielectric constant $\varepsilon_1$, then the interface with the dielectric that has the dielectric constant $\varepsilon_2$ will develop a bound polarization charge. It can be shown that the resulting electric field inside the dielectric containing the particle is modified in a way that can be described by an image charge inside the other dielectric. Inside the other dielectric, however, the image charge is not present.
Unlike the case of the metal, the image charge $q'$ is not exactly opposite to the real charge: $q' \neq -q$. It may not even have the opposite sign: if the charge is placed inside the stronger dielectric material, the image charge has the same sign as the real charge, and the charge is repelled away from the region of lower dielectric constant. This can be seen from the formula

$$q' = \frac{\varepsilon_1 - \varepsilon_2}{\varepsilon_1 + \varepsilon_2}\, q.$$
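A short sketch makes the sign behaviour explicit. The helper function below is hypothetical (its name and arguments are not from any particular library) and works with relative permittivities only.

```python
def dielectric_image_charge(q, eps1, eps2):
    """Image charge seen from medium 1 (which contains the real charge q)
    across a planar interface with medium 2."""
    return q * (eps1 - eps2) / (eps1 + eps2)

# Charge in the weaker dielectric: opposite-sign image, metal-like attraction
print(dielectric_image_charge(1.0, eps1=1.0, eps2=80.0))   # ≈ -0.975
# Charge in the stronger dielectric: same-sign image, repulsion
print(dielectric_image_charge(1.0, eps1=80.0, eps2=1.0))   # ≈ +0.975
# The conductor limit eps2 -> infinity recovers the full image charge -q
print(dielectric_image_charge(1.0, eps1=1.0, eps2=1e9))    # ≈ -1.0
```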
Reflection in a conducting sphere
Point charges
The method of images may be applied to a sphere as well. In fact, the case of image charges in a plane is a special case of the case of images for a sphere. Referring to the figure, we wish to find the potential inside a grounded sphere of radius R, centered at the origin, due to a point charge inside the sphere at position $\mathbf{p}$ (for the opposite case, the potential outside a sphere due to a charge outside the sphere, the method is applied in a similar way). In the figure, this is represented by the green point. Let q be the value of this point charge. The image of this charge with respect to the grounded sphere is shown in red. It has a charge of $q' = -qR/p$ and lies on the line connecting the center of the sphere and the inner charge, at vector position $\left(R^2/p^2\right)\mathbf{p}$. It can be seen that the potential at a point specified by radius vector $\mathbf{r}$ due to both charges alone is given by the sum of the potentials:

$$V(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\left(\frac{q}{\left|\mathbf{r} - \mathbf{p}\right|} + \frac{-qR/p}{\left|\mathbf{r} - \frac{R^2}{p^2}\,\mathbf{p}\right|}\right).$$
Multiplying through on the rightmost expression yields

$$V(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\left(\frac{q}{\sqrt{r^2 + p^2 - 2 r p \cos\theta}} - \frac{q}{\sqrt{\dfrac{r^2 p^2}{R^2} + R^2 - 2 r p \cos\theta}}\right),$$

where θ is the angle between $\mathbf{r}$ and $\mathbf{p}$,
and it can be seen that on the surface of the sphere (i.e. when $r = R$), the two denominators become equal and the potential vanishes. The potential inside the sphere is thus given by the above expression for the potential of the two charges. This potential will not be valid outside the sphere, since the image charge does not actually exist, but is rather "standing in" for the surface charge densities induced on the sphere by the inner charge at $\mathbf{p}$. The potential outside the grounded sphere will be determined only by the distribution of charge outside the sphere and will be independent of the charge distribution inside the sphere. If we assume for simplicity (without loss of generality) that the inner charge lies on the z-axis, then the induced charge density will be simply a function of the polar angle θ and is given by:

$$\sigma(\theta) = -\frac{q\left(R^2 - p^2\right)}{4\pi R \left(R^2 + p^2 - 2 p R \cos\theta\right)^{3/2}}.$$
The total charge on the sphere may be found by integrating over all angles:

$$Q_t = \int_0^{2\pi}\!\!\int_0^{\pi} \sigma(\theta)\, R^2 \sin\theta \, d\theta \, d\varphi = -q.$$
Note that the reciprocal problem is also solved by this method. If we have a charge q at vector position $\mathbf{p}$ outside of a grounded sphere of radius R, the potential outside of the sphere is given by the sum of the potentials of the charge and its image charge inside the sphere. Just as in the first case, the image charge will have charge $-qR/p$ and will be located at vector position $\left(R^2/p^2\right)\mathbf{p}$. The potential inside the sphere will depend only upon the true charge distribution inside the sphere. Unlike the first case, the integral will be of value $-qR/p$.
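The vanishing of the potential on the sphere is again easy to verify numerically. The Python sketch below places a charge inside a grounded unit sphere, builds its image from the expressions above (charge $-qR/p$ at $(R^2/p^2)\mathbf{p}$), and samples the summed potential on the surface; the charge position, sample points, and units (4πε₀ = 1) are arbitrary illustrative choices.

```python
import math
import random

R = 1.0                   # sphere radius
p_vec = (0.3, 0.2, 0.4)   # real charge position (inside the sphere)
q = 1.0
p = math.sqrt(sum(c * c for c in p_vec))

# Image charge: q' = -q R / p located at (R^2 / p^2) * p_vec (outside the sphere)
q_img = -q * R / p
img_vec = tuple(c * R**2 / p**2 for c in p_vec)

def potential(r_vec):
    """Sum of the potentials of the real charge and its image (4*pi*eps0 = 1)."""
    return q / math.dist(r_vec, p_vec) + q_img / math.dist(r_vec, img_vec)

# Sample random points on the sphere surface: the potential should vanish there
random.seed(0)
for _ in range(5):
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    surface_point = tuple(R * c / norm for c in v)
    print(f"V on surface: {potential(surface_point):+.2e}")  # all ≈ 0
```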
Electric dipole moments
The image of an electric point dipole is a bit more complicated. If the dipole is pictured as two large charges separated by a small distance, then the image of the dipole will not only have the charges modified by the above procedure, but the distance between them will be modified as well. Following the above procedure, one finds that a dipole with dipole moment $\mathbf{M}$ at vector position $\mathbf{p}$ lying inside the sphere of radius R will have an image located at vector position $\left(R^2/p^2\right)\mathbf{p}$ (i.e. the same as for the simple charge) and will have a simple charge of

$$q' = \frac{R \left(\mathbf{M}\cdot\hat{\mathbf{p}}\right)}{p^2}$$

and a dipole moment of

$$\mathbf{M}' = \frac{R^3}{p^3}\left[2\left(\mathbf{M}\cdot\hat{\mathbf{p}}\right)\hat{\mathbf{p}} - \mathbf{M}\right].$$
Method of inversion
The method of images for a sphere leads directly to the method of inversion. If we have a harmonic function of position $\phi(r, \theta, \varphi)$, where $r, \theta, \varphi$ are the spherical coordinates of the position, then the image of this harmonic function in a sphere of radius R about the origin will be

$$\phi'(r, \theta, \varphi) = \frac{R}{r}\,\phi\!\left(\frac{R^2}{r}, \theta, \varphi\right).$$
If the potential $\phi$ arises from a set of charges of magnitude $q_i$ at positions $\mathbf{r}_i$, then the image potential will be the result of a series of charges of magnitude $R q_i / r_i$ at positions $\left(R^2/r_i^2\right)\mathbf{r}_i$. It follows that if the potential $\phi$ arises from a charge density $\rho(\mathbf{r})$, then the image potential will be the result of a charge density $\rho'(\mathbf{r}) = \left(R/r\right)^5 \rho\!\left(\left(R^2/r^2\right)\mathbf{r}\right)$.
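As a sanity check, the transform above can be tested numerically: applying it to a simple harmonic function should again yield a (numerically) harmonic function. In the Python sketch below, the choice of test function, test point, and finite-difference step are arbitrary illustrative assumptions.

```python
import math

R = 1.0

def phi(x, y, z):
    """A harmonic function: potential of a unit point charge at (2, 0, 0)."""
    return 1.0 / math.sqrt((x - 2.0)**2 + y**2 + z**2)

def kelvin(f):
    """Kelvin transform in the sphere of radius R: f'(x) = (R/|x|) f(R^2 x / |x|^2)."""
    def g(x, y, z):
        r2 = x * x + y * y + z * z
        s = R**2 / r2
        return (R / math.sqrt(r2)) * f(s * x, s * y, s * z)
    return g

def laplacian(f, x, y, z, h=1e-3):
    """Second-order central-difference Laplacian."""
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6.0 * f(x, y, z)) / h**2

g = kelvin(phi)
# Both Laplacians should be ~0 away from the two singular points
print(laplacian(phi, 0.3, 0.4, 0.5), laplacian(g, 0.3, 0.4, 0.5))
```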
See also
Kelvin transform
Coulomb's law
Divergence theorem
Flux
Gaussian surface
Schwarz reflection principle
Uniqueness theorem for Poisson's equation
Image antenna
Surface equivalence principle
References
Notes
Sources
Further reading
Electromagnetism
Electrostatics | Method of image charges | [
"Physics"
] | 1,631 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |