id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
1,278,026 | https://en.wikipedia.org/wiki/Watt%27s%20linkage | A Watt's linkage is a type of mechanical linkage invented by James Watt in which the central moving point of the linkage is constrained to travel a nearly straight path. Watt described the linkage in his patent specification of 1784 for the Watt steam engine.
Today it is used in automobile suspensions, where it is key to a suspension's kinematics, i.e., its motion properties, constraining the vehicle axle's movement to nearly vertical travel while also limiting horizontal motion.
Description
Watt's linkage consists of three bars bolted together in a chain. The chain of bars consists of two end bars and a middle bar. The middle bar is bolted at each of its ends to one of the ends of each outer bar. The two outer bars are of equal length, and are longer than the middle bar. The three bars can pivot around the two bolts. The outer endpoints of the long bars are fixed in place relative to each other, but otherwise the three bars are free to pivot around the two joints where they meet.
In linkage analysis, there is an imaginary fixed-length bar connecting the outer endpoints. Thus, Watt's linkage is an example of a four-bar linkage.
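To illustrate the near-straight-line behaviour described above, here is a minimal Python sketch that traces the midpoint of the middle bar. The pivot positions and bar lengths are assumed values chosen only for illustration (they are not taken from the article): two fixed pivots, two equal long bars of length 3 and a middle bar of length 2. Over the simulated range the sideways wander of the midpoint is far smaller than its vertical travel.

```python
import math

# Hypothetical geometry (illustrative only): fixed pivots A and B,
# two equal "long" bars of length 3 and a middle bar of length 2.
A = (-3.0, 1.0)
B = (3.0, -1.0)
LONG, MID = 3.0, 2.0

def circle_intersection(c1, r1, c2, r2):
    """Return the lower-y intersection point of two circles (the assembled branch here)."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(r1**2 - a**2)
    px, py = c1[0] + a * dx / d, c1[1] + a * dy / d
    p_plus = (px - h * dy / d, py + h * dx / d)
    p_minus = (px + h * dy / d, py - h * dx / d)
    return min(p_plus, p_minus, key=lambda p: p[1])

xs, ys = [], []
for i in range(-30, 31):
    theta = 0.01 * i                              # rotation of the bar anchored at A
    C = (A[0] + LONG * math.cos(theta), A[1] + LONG * math.sin(theta))
    D = circle_intersection(C, MID, B, LONG)      # other end of the middle bar
    xs.append((C[0] + D[0]) / 2)                  # midpoint of the middle bar
    ys.append((C[1] + D[1]) / 2)

print(f"vertical travel : {max(ys) - min(ys):.3f}")
print(f"sideways wander : {max(xs) - min(xs):.4f}")   # tiny -> near-straight path
```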
History
Its genesis is contained in a letter Watt wrote to Matthew Boulton in June 1784.
The context of Watt's innovation has been described by C. G. Gibson:
During the Industrial Revolution, mechanisms for converting rotary into linear motion were widely adopted in industrial and mining machinery, locomotives and metering devices. Such devices had to combine engineering simplicity with a high degree of accuracy, and the ability to operate at speed for lengthy periods. For many purposes approximate linear motion is an acceptable substitute for exact linear motion. Perhaps the best known example is the Watt four bar linkage, invented by the Scottish engineer James Watt in 1784.
This type of linkage is one of several types described in Watt's 28 April 1784 patent specification. However, in his letter to Boulton he was actually describing a development of the linkage which was not included in the patent. The slightly later design, called a parallel motion linkage, led to a more convenient space-saving design which was actually used in his reciprocating, and his rotary, beam engines.
Shape traced by the linkage
This linkage does not generate a true straight line motion, and indeed Watt did not claim it did so. Rather, it traces out Watt's curve, a lemniscate or figure eight shaped curve; when the lengths of its bars and its base are chosen to form a crossed square, it traces the lemniscate of Bernoulli. In a letter to Boulton on 11 September 1784 Watt describes the linkage as follows.
Although the Peaucellier–Lipkin linkage, Hart's inversor, and other straight line mechanisms generate true straight-line motion, Watt's linkage has the advantage of much greater simplicity than these other linkages. It is similar in this respect to the Chebyshev linkage, a different linkage that produces approximate straight-line motion; however, in the case of Watt's linkage, the motion is perpendicular to the line between its two endpoints, whereas in the Chebyshev linkage the motion is parallel to this line.
Applications
Double-acting piston
The earlier single-action beam engines used a chain to connect the piston to the beam and this worked satisfactorily for pumping water from mines, etc. However, for rotary motion a linkage that works both in compression and tension provides a better design and allows a double-acting cylinder to be used. Such an engine incorporates a piston acted upon by steam alternately on the two sides, hence doubling its power. The linkage actually used by Watt (also invented by him) in his later rotary beam engines was called the parallel motion linkage, a development of "Watt's linkage", but using the same principle. The piston of the engine is attached to the central point of the linkage, allowing it to act on the two outer beams of the linkage both by pushing and by pulling. The nearly linear motion of the linkage allows this type of engine to use a rigid connection to the piston without causing the piston to bind in its containing cylinder. This configuration also results in a smoother motion of the beam than the single-action engine, making it easier to convert its back-and-forth motion into rotation.
An example of Watt's linkage can be found on the high and intermediate pressure piston rod of the 1865 Crossness engines. In these engines, the low pressure piston rod uses the more conventional parallel motion linkage, but the high and intermediate pressure rod does not connect to the end of the beam so there is no requirement to save space.
Vehicle suspension
Watt's linkage is used in the rear axle of some car suspensions as an improvement over the Panhard rod, which was designed in the early twentieth century. Both methods are intended to prevent relative sideways motion between the axle and body of the car. Watt's linkage approximates a vertical straight-line motion much more closely, and it does so while consistently locating the centre of the axle at the vehicle's longitudinal centreline, rather than toward one side of the vehicle as would be the case if a simple Panhard rod were used.
It consists of two horizontal rods of equal length mounted at each side of the chassis. In between these two rods, a short vertical bar is connected. The center of this short vertical rod – the point which is constrained in a straight-line motion – is mounted to the center of the axle. All pivoting points are free to rotate in a vertical plane.
In a way, Watt's linkage can be seen as two Panhard rods mounted opposite each other. In Watt's arrangement, however, the opposing curved movements introduced by the pivoting Panhard rods largely balance each other in the short vertical rotating bar.
The linkage can be inverted, in which case the centre P is attached to the body, and L1 and L3 mount to the axle. This reduces the unsprung mass and changes the kinematics slightly. This arrangement was used on Australian V8 Supercars until the end of the 2012 season.
Watt's linkage can also be used to prevent axle movement in the longitudinal direction of the car. This application involves two Watt's linkages on each side of the axle, mounted parallel to the driving direction, but just a single 4-bar linkage is more common in racing suspension systems.
See also
Four-bar linkage
Linkage (mechanical)
Straight line mechanism
Watt's Parallel motion linkage, a straight-line linkage derived from Watt's linkage.
References
External links
Watt Beam Engine
How to draw a straight line, by A.B. Kempe, B.A.
Lemniscoidal (figure 8 curved) linkage of the first kind by Watt
Lemniscoidal linkage of the second and third kind by Watt
A simulation using the Molecular Workbench software.
Automotive suspension technologies
Linkages (mechanical)
Scottish inventions
Linkage
Linear motion
Straight line mechanisms | Watt's linkage | [
"Physics"
] | 1,472 | [
"Physical phenomena",
"Motion (physics)",
"Linear motion"
] |
1,278,389 | https://en.wikipedia.org/wiki/Related%20rates | In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule, since most problems involve several variables.
Fundamentally, if a function is defined such that , then the derivative of the function can be taken with respect to another variable. We assume is a function of , i.e. . Then , so
Written in Leibniz notation, this is:
Thus, if it is known how changes with respect to , then we can determine how changes with respect to and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc.
For example, if then
Procedure
The most common way to approach related rates problems is the following:
Identify the known variables, including rates of change and the rate of change that is to be found. (Drawing a picture or representation of the problem can help to keep everything in order)
Construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found.
Differentiate both sides of the equation with respect to time (or other rate of change). Often, the chain rule is employed at this step.
Substitute the known rates of change and the known quantities into the equation.
Solve for the wanted rate of change.
Errors in this procedure are often caused by plugging in the known values for the variables before (rather than after) finding the derivative with respect to time. Doing so will yield an incorrect result, since if those values are substituted for the variables before differentiation, those variables will become constants; and when the equation is differentiated, zeroes appear in places of all variables for which the values were plugged in.
Example
A 10-meter ladder is leaning against the wall of a building, and the base of the ladder is sliding away from the building at a rate of 3 meters per second. How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall?
The distance between the base of the ladder and the wall, x, and the height of the ladder on the wall, y, represent the sides of a right triangle with the ladder as the hypotenuse, h. The objective is to find dy/dt, the rate of change of y with respect to time, t, when h, x and dx/dt, the rate of change of x, are known.
Step 1:
Step 2:
From the Pythagorean theorem, the equation
describes the relationship between x, y and h, for a right triangle. Differentiating both sides of this equation with respect to time, t, yields
Step 3:
Solving for the wanted rate of change, dy/dt, gives us
Step 4 & 5:
Using the variables from step 1 gives us:
Solving for y using the Pythagorean Theorem gives:
Plugging in y = 8 into the equation:
It is generally assumed that negative values represent the downward direction. With this convention, the top of the ladder is sliding down the wall at a rate of 9/4 (2.25) meters per second.
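The ladder computation can also be checked symbolically. The following is a minimal sketch using SymPy (assumed to be available); it encodes y as a function of x through the Pythagorean relation and lets the chain rule introduce dx/dt, reproducing the value stated above.

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)                # distance of the ladder's base from the wall
y = sp.sqrt(10**2 - x**2)              # height on the wall, from x**2 + y**2 = 10**2

dy_dt = sp.diff(y, t)                  # chain rule introduces dx/dt automatically
rate = dy_dt.subs(sp.Derivative(x, t), 3).subs(x, 6)
print(rate)                            # -9/4, i.e. the top slides down at 2.25 m/s
```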
Physics examples
Because one physical quantity often depends on another, which, in turn, depends on others such as time, related-rates methods have broad applications in physics. This section presents examples of related rates in kinematics and in electromagnetic induction.
Relative kinematics of two vehicles
For example, one can consider the kinematics problem where one vehicle is heading West toward an intersection at 80 miles per hour while another is heading North away from the intersection at 60 miles per hour. One can ask whether the vehicles are getting closer or further apart and at what rate at the moment when the North bound vehicle is 3 miles North of the intersection and the West bound vehicle is 4 miles East of the intersection.
Big idea: use chain rule to compute rate of change of distance between two vehicles.
Plan:
Choose coordinate system
Identify variables
Draw picture
Big idea: use chain rule to compute rate of change of distance between two vehicles
Express c in terms of x and y via Pythagorean theorem
Express dc/dt using chain rule in terms of dx/dt and dy/dt
Substitute in x, y, dx/dt, dy/dt
Simplify.
Choose coordinate system:
Let the y-axis point North and the x-axis point East.
Identify variables:
Define y(t) to be the distance of the vehicle heading North from the origin and x(t) to be the distance of the vehicle heading West from the origin.
Express c in terms of x and y via the Pythagorean theorem:
Express dc/dt using chain rule in terms of dx/dt and dy/dt:
Substitute in x = 4 mi, y = 3 mi, dx/dt = −80 mi/hr, dy/dt = 60 mi/hr and simplify
Consequently, the two vehicles are getting closer together at a rate of 28 mi/hr.
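As a check on the figure quoted above, here is a minimal SymPy sketch of the same chain-rule computation; the coordinate and rate values are those stated in the problem.

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)
c = sp.sqrt(x**2 + y**2)                         # separation, by the Pythagorean theorem

dc_dt = sp.diff(c, t)                            # chain rule gives (x*x' + y*y')/c
dc_dt = dc_dt.subs(sp.Derivative(x, t), -80).subs(sp.Derivative(y, t), 60)
print(dc_dt.subs(x, 4).subs(y, 3))               # -28 -> closing at 28 mi/hr
```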
Electromagnetic induction of conducting loop spinning in magnetic field
The magnetic flux through a loop of area A whose normal is at an angle θ to a magnetic field of strength B is
Faraday's law of electromagnetic induction states that the induced electromotive force is the negative rate of change of magnetic flux through a conducting loop.
If the loop area A and magnetic field B are held constant, but the loop is rotated so that the angle θ is a known function of time, the rate of change of θ can be related to the rate of change of the magnetic flux (and therefore the electromotive force) by taking the time derivative of the flux relation
If, for example, the loop is rotating at a constant angular velocity ω, so that θ = ωt, then
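The result referred to above (the induced EMF for θ = ωt) can be made explicit with a short SymPy sketch; the symbol names are mine, not the article's.

```python
import sympy as sp

t, B, A, w = sp.symbols('t B A omega', positive=True)
flux = B * A * sp.cos(w * t)     # flux through the loop, with theta = omega*t
emf = -sp.diff(flux, t)          # Faraday's law: emf is minus the rate of change of flux
print(emf)                       # A*B*omega*sin(omega*t)
```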
References
Differential calculus | Related rates | [
"Mathematics"
] | 1,222 | [
"Differential calculus",
"Calculus"
] |
1,278,615 | https://en.wikipedia.org/wiki/Wave%20loading | Wave loading is most commonly the application of a pulsed or wavelike load to a material or object. This is most commonly used in the analysis of piping, ships, or building structures which experience wind, water, or seismic disturbances.
Examples of wave loading
Offshore storms and pipes: As a large wave crest passes over a shallowly buried pipe, the water pressure above it increases. As the trough approaches, the pressure over the pipe drops, and this sudden, repeated variation in pressure can break pipes. For a wave height of about 10 m, the difference in pressure between crest and trough is equivalent to about one atmosphere (101.3 kPa or 14.7 psi). Repeated fluctuations of this kind over pipes in relatively shallow environments can set up resonant vibrations within pipes or structures and cause problems.
Engineering oil platforms: The effects of wave-loading are a serious issue for engineers designing oil platforms, which must contend with the effects of wave loading, and have devised a number of algorithms to do so.
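The one-atmosphere figure quoted above can be recovered from a simple hydrostatic estimate. The sketch below assumes a seawater density of 1025 kg/m3 and ignores the depth attenuation of wave-induced pressure, so it is only an order-of-magnitude check.

```python
rho, g = 1025.0, 9.81        # assumed seawater density (kg/m^3) and gravity (m/s^2)
wave_height = 10.0           # crest-to-trough height, metres

delta_p = rho * g * wave_height                   # hydrostatic pressure swing, Pa
print(f"{delta_p / 1000:.0f} kPa  (~{delta_p / 101325:.2f} atm)")
# ~101 kPa, i.e. about one atmosphere, matching the figure quoted above
```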
References
Waves
Articles containing video clips | Wave loading | [
"Physics"
] | 203 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
12,965,053 | https://en.wikipedia.org/wiki/Wigner%E2%80%93Seitz%20radius | The Wigner–Seitz radius , named after Eugene Wigner and Frederick Seitz, is the radius of a sphere whose volume is equal to the mean volume per atom in a solid (for first group metals). In the more general case of metals having more valence electrons, it is the radius of a sphere whose volume is equal to the volume per free electron. This parameter is used frequently in condensed matter physics to describe the density of a system. It is worth noting that it is calculated for bulk materials.
Formula
In a 3-D system with free valence electrons in a volume , the Wigner–Seitz radius is defined by
where is the particle density. Solving for we obtain
The radius can also be calculated as
where is molar mass, is count of free valence electrons per particle, is mass density and
is the Avogadro constant.
This parameter is normally reported in atomic units, i.e., in units of the Bohr radius.
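As a numerical illustration of the formulas above, the following sketch computes the free-electron density and the Wigner–Seitz radius for sodium (a first-group metal with one free valence electron per atom). The molar mass and density are assumed room-temperature handbook values, so the result, close to 4 Bohr radii, should be read as approximate.

```python
import math

# Assumed data for sodium (one free valence electron per atom)
M   = 22.99e-3      # molar mass, kg/mol
rho = 971.0         # mass density, kg/m^3
z   = 1             # free valence electrons per atom
N_A = 6.02214076e23 # Avogadro constant, 1/mol
a0  = 5.29177e-11   # Bohr radius, m

n   = z * rho * N_A / M                      # free-electron number density, 1/m^3
r_s = (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)

print(f"n   = {n:.3e} m^-3")
print(f"r_s = {r_s * 1e10:.2f} angstrom = {r_s / a0:.2f} Bohr radii")
```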
Assuming that each atom in a simple metal cluster occupies the same volume as in a solid, the radius of the cluster is given by
where n is the number of atoms.
Values of for the first group metals:
Wigner–Seitz radius is related to the electronic density by the formula
where ρ can be regarded as the average electronic density in the outer portion of the Wigner–Seitz cell.
See also
Wigner–Seitz cell
Wigner crystal
References
Atomic radius | Wigner–Seitz radius | [
"Physics",
"Chemistry"
] | 291 | [
" and optical physics stubs",
"Physical chemistry stubs",
"Atomic radius",
" molecular",
"Atomic",
"Atoms",
"Matter",
" and optical physics"
] |
12,967,535 | https://en.wikipedia.org/wiki/IEEE%20C2 | American National Standard C2 is the American National Standards Institute (ANSI) standard for the National Electrical Safety Code (NESC), published by the Institute of Electrical and Electronics Engineers (IEEE).
The NESC is a document containing voluntary (unless adopted by law) standards for safeguarding persons against electrical hazards during the installation, operation and maintenance of electric supply and communication lines. It includes general updates and critical revisions that directly impact the power utility industry. Adopted by law by the majority of states and Public Service Commissions across the US, the NESC is a performance code considered to be the authoritative source on good electrical engineering practice.
See also
IEC 60364
National Electrical Safety Code
Canadian Electrical Code
PSE law, Japan Electrical Safety Law.
Slash rating
Central Electricity Authority Regulations
References
IEEE Standards Association
C2
Electrical safety
Electrical wiring
Safety codes | IEEE C2 | [
"Physics",
"Technology",
"Engineering"
] | 166 | [
"Electrical systems",
"Building engineering",
"Computer standards",
"Physical systems",
"Electrical engineering",
"Electrical wiring",
"IEEE standards"
] |
12,968,385 | https://en.wikipedia.org/wiki/Joshua%20Jortner | Joshua Jortner (Hebrew: יהושע יורטנר; born March 14, 1933) is an Israeli physical chemist. He is a professor emeritus at the School of Chemistry, The Sackler Faculty of Exact Sciences, Tel Aviv University in Tel Aviv, Israel.
Birth and education
Jortner was born on March 14, 1933, in Tarnów, Poland, to a Jewish family. He migrated with his parents to Palestine under the British Mandate during the Second World War in 1940. He received his Ph.D. from the Hebrew University of Jerusalem in 1960.
Academic career
After completing his Ph.D., Jortner became a lecturer in the Department of Physical Chemistry at the Hebrew University of Jerusalem from 1961 to 1963. From 1962 to 1964, he was a research associate at the University of Chicago. In 1964, he was appointed to a professorship in the Department of Chemistry at Tel Aviv University and was its first chairman. From 1966 to 1972, he was deputy rector, acting rector and vice president of Tel Aviv University. Since 1973, he has held the position of the Heinemann Professor of Chemistry at the School of Chemistry, the Raymond and Beverly Sackler Faculty of Exact Sciences of Tel Aviv University. He also held a professorship at the University of Chicago from 1964 to 1971 as a part-time appointment. He was a visiting professor at the University of Copenhagen in 1974 and 1978 and at the University of California, Berkeley, in 1975.
He also held honorary fellowships, lectureships and chairs at the California Institute of Technology in 1997, St Catherine's College, Oxford, in 1995 and the École Normale Supérieure in Paris from 1998 to 2000. Since 1973, he has been a member of the Israel Academy of Sciences and Humanities and was its president from 1986 to 1995. He is an Honorary Foreign Member of 13 Academies of Sciences in the United States, Europe (The Netherlands, 1998) and Asia.
Research
Jortner has undertaken research on a broad range of areas in both physical and theoretical chemistry, involving dynamical phenomena in chemical systems. His research focuses on the relations between structure, spectroscopy, dynamics and function in microscopic and macroscopic systems. He made some central contributions to the elucidation of the mechanisms of energy acquisition, storage and disposal in large molecules, clusters, condensed phase and biophysical systems, as explored from the microscopic point of view.
He is known for the recognition and elucidation of the intramolecular nature of radiationless dissipation of energy in molecules of large and medium size. In 1968, in collaboration with Mordechai Bixon, he proposed a simple theoretical model that laid out the basic notions specifying the energy acquisition process, the interstate coupling modes, and the mechanisms of energy disposal. Subsequently, he developed the theory of molecular wavepacket dynamics and quantum beats.
His contributions became seminal to the study of laser chemistry, multiphoton processes in molecules, relaxation phenomena in condensed phases and the dynamics of biophysical systems, and had an indelible impact on the modern development of chemical physics and theoretical chemistry.
His research covers a vast range of fields, such as the theory of solvated electrons, properties of excited electronic states of molecules, coherent multiphoton processes, charge transfer in polar solvents and in biophysical systems and the dynamics of supercooled large molecules and of molecular clusters.
Awards
In 1982, Jortner received the Israel Prize, in chemistry.
In 1988, Jortner was awarded the Wolf Prize in Chemistry along with Raphael David Levine of Hebrew University of Jerusalem for "their incisive theoretical studies elucidating energy acquisition and disposal in molecular systems and mechanisms for dynamical selectivity and specificity".
In 1990, Jortner was elected an International member of the American Philosophical Society.
In 1991 Jortner was elected to the American Academy of Arts and Sciences.
In 1995 he became a member of the German Academy of Sciences Leopoldina.
In 1997, he was elected an International member of the United States National Academy of Sciences.
In 2008, he received the "Emet" prize.
Personal life
He is married to Ruth T. Jortner, a cardiologist. His son Roni is a biologist and his daughter Iris is a cellist.
See also
List of Israel Prize recipients
References
External links
Curriculum vitae of Joshua Jortner
Research of Joshua Jortner
The Wolf Prize in Chemistry in 1988
1933 births
Living people
People from Tarnów
Israeli physical chemists
Theoretical chemists
University of Chicago faculty
Academic staff of Tel Aviv University
Israel Prize in chemistry recipients
Wolf Prize in Chemistry laureates
Hebrew University of Jerusalem alumni
Members of the Israel Academy of Sciences and Humanities
Israeli Jews
Polish emigrants to Israel
Israeli expatriates in the United States
Jewish chemists
Presidents of the Israel Academy of Sciences and Humanities
Members of the Royal Netherlands Academy of Arts and Sciences
Foreign associates of the National Academy of Sciences
Foreign members of the Russian Academy of Sciences
Foreign fellows of the Indian National Science Academy
Fellows of the American Physical Society
Members of the German National Academy of Sciences Leopoldina
Members of the American Philosophical Society
Weizmann Prize recipients | Joshua Jortner | [
"Chemistry"
] | 1,041 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
12,973,053 | https://en.wikipedia.org/wiki/Viscimation | Viscimation is the turbulence that occurs when liquids of different viscosities mix, particularly the formation of vortices (also known as "viscimetric whorls") and visible separate threads of the different liquids.
The term viscimation is archaic and idiosyncratic to whisky tasting; the study (or appreciation) of these effects is called viscimetry, and the capacity of a whisky to sustain viscimation (which depends predominantly on its alcohol percentage) is its viscimetric potential or viscimetric index.
Causing viscimetric whorls by adding water to liquor is colloquially called "awakening the serpent".
References
https://web.archive.org/web/20070930102236/http://www.smws.ch/downloads/Newsletter-Xmas2004-en.PDF
Fluid dynamics
Turbulence
Vortices
Whisky | Viscimation | [
"Chemistry",
"Mathematics",
"Engineering"
] | 173 | [
"Turbulence",
"Vortices",
"Dynamical systems",
"Chemical engineering",
"Piping",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
4,848,693 | https://en.wikipedia.org/wiki/Cameleon%20%28protein%29 | Cameleon is an engineered protein based on a variant of green fluorescent protein used to visualize calcium levels in living cells. It is a genetically encoded calcium sensor created by Roger Y. Tsien and coworkers. The name is a conflation of CaM (the common abbreviation of calmodulin) and chameleon to indicate the fact that the sensor protein undergoes a conformation change and radiates at an altered wavelength upon calcium binding to the calmodulin element of the Cameleon. Cameleon was the first genetically encoded calcium sensor that could be used for ratiometric measurements and the first to be used in a transgenic animal to record activity in neurons and muscle cells. Cameleon and other genetically encoded calcium indicators (GECIs) have found many applications in neuroscience and other fields of biology, including understanding the mechanisms of cell signaling by conducting time-resolved Ca2+ activity measurement experiments with endoplasmic reticulum (ER) enzymes. It was created by fusing BFP, calmodulin, calmodulin-binding peptide M13 and EGFP.
Mechanism
The DNA encoding cameleon fusion protein must be either stably or transiently introduced into the cell of interest. Protein made by the cell according to this DNA information then serves as a fluorescent indicator of calcium concentration. In the presence of calcium, Ca2+ binds to M13, which enables calmodulin to wrap around the M13 domain. This brings the two GFP-variant proteins closer to each other, which increases FRET efficiency between them. A time-resolved spectroscopy study done on resonance energy transfer by Habuchi et al. in 2002 suggested the existence of 3 different calmodulin conformations that were dependent on Ca2+ binding. The study concluded that the mechanism of conformation interconversion remains unclear, but the data provided estimates of rate constants, energy transfer efficiency, and donor-acceptor distances in Ca2+-free and Ca2+-bound YC3.1 cameleon proteins.
References
Sensors
Engineered proteins
Fluorescent proteins
Cell imaging
Calcium signaling | Cameleon (protein) | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 417 | [
"Biochemistry methods",
"Fluorescent proteins",
"Measuring instruments",
"Signal transduction",
"Calcium signaling",
"Microscopy",
"Sensors",
"Bioluminescence",
"Cell imaging"
] |
4,850,275 | https://en.wikipedia.org/wiki/Pentagonal%20tiling | In geometry, a pentagonal tiling is a tiling of the plane where each individual piece is in the shape of a pentagon.
A regular pentagonal tiling on the Euclidean plane is impossible because the internal angle of a regular pentagon, 108°, is not a divisor of 360°, the angle measure of a whole turn. However, regular pentagons can tile the hyperbolic plane with four pentagons around each vertex (or more) and sphere with three pentagons; the latter produces a tiling that is topologically equivalent to the dodecahedron.
Monohedral convex pentagonal tilings
Fifteen types of convex pentagons are known to tile the plane monohedrally (i.e. with one type of tile). The most recent one was discovered in 2015. This list has been shown to be complete by Rao (a result subject to peer review; see below). It has also been shown, independently by more than one author, that there are only eight edge-to-edge convex types.
Michaël Rao of the École normale supérieure de Lyon claimed in May 2017 to have found the proof that there are in fact no convex pentagons that tile beyond these 15 types. As of 11 July 2017, the first half of Rao's proof had been independently verified (computer code available) by Thomas Hales, a professor of mathematics at the University of Pittsburgh. As of December 2017, the proof was not yet fully peer-reviewed.
Each enumerated tiling family contains pentagons that belong to no other type; however, some individual pentagons may belong to multiple types. In addition, some of the pentagons in the known tiling types also permit alternative tiling patterns beyond the standard tiling exhibited by all members of its type.
The sides of length a, b, c, d, e are directly clockwise from the angles at vertices A, B, C, D, E respectively. (Thus, A, B, C, D, E are opposite to d, e, a, b, c respectively.)
Many of these monohedral tile types have degrees of freedom. These freedoms include variations of internal angles and edge lengths. In the limit, edges may have lengths that approach zero or angles that approach 180°. Types 1, 2, 4, 5, 6, 7, 8, 9, and 13 allow parametric possibilities with nonconvex prototiles.
Periodic tilings are characterised by their wallpaper group symmetry, for example p2 (2222) is defined by four 2-fold gyration points. This nomenclature is used in the diagrams below, where the tiles are also colored by their k-isohedral positions within the symmetry.
A primitive unit is a section of the tiling that generates the whole tiling using only translations, and is as small as possible.
Reinhardt (1918)
Reinhardt found the first five types of pentagonal tile. All five can create isohedral tilings, meaning that the symmetries of the tiling can take any tile to any other tile (more formally, the automorphism group acts transitively on the tiles).
B. Grünbaum and G. C. Shephard have shown that there are exactly twenty-four distinct "types" of isohedral tilings of the plane by pentagons according to their classification scheme. All use Reinhardt's tiles, usually with additional conditions necessary for the tiling. There are two tilings by all type 2 tiles, and one by all of each of the other four types. Fifteen of the other eighteen tilings are by special cases of type 1 tiles. Nine of the twenty-four tilings are edge-to-edge.
There are also 2-isohedral tilings by special cases of type 1, type 2, and type 4 tiles, and 3-isohedral tilings, all edge-to-edge, by special cases of type 1 tiles. There is no upper bound on k for k-isohedral tilings by certain tiles that are both type 1 and type 2, and hence neither on the number of tiles in a primitive unit.
The wallpaper group symmetry for each tiling is given, with orbifold notation in parentheses. A second lower symmetry group is given if tile chirality exists, where mirror images are considered distinct. These are shown as yellow and green tiles in those cases.
Type 1
There are many tiling topologies that contain type 1 pentagons. Five example topologies are given below.
Type 2
These type 2 examples are isohedral. The second is an edge-to-edge variation. They both have pgg (22×) symmetry. If mirror image tiles (yellow and green) are considered distinct, the symmetry is p2 (2222).
Types 3, 4, and 5
Kershner (1968) Types 6, 7, 8
Kershner found three more types of pentagonal tile, bringing the total to eight. He claimed incorrectly that this was the complete list of pentagons that can tile the plane.
These examples are 2-isohedral and edge-to-edge. Types 7 and 8 have chiral pairs of tiles, which are colored as pairs in yellow-green and the other as two shades of blue. The pgg symmetry is reduced to p2 when chiral pairs are considered distinct.
James (1975) Type 10
In 1975 Richard E. James III found a ninth type, after reading about Kershner's results in Martin Gardner's "Mathematical Games" column in Scientific American magazine of July 1975 (reprinted in ). It is indexed as type 10. The tiling is 3-isohedral and non-edge-to-edge.
Rice (1977) Types 9,11,12,13
Marjorie Rice, an amateur mathematician, discovered four new types of tessellating pentagons in 1976 and 1977.
All four tilings are 2-isohedral. The chiral pairs of tiles are colored in yellow and green for one isohedral set, and two shades of blue for the other set. The pgg symmetry is reduced to p2 when the chiral pairs are considered distinct.
The tiling by type 9 tiles is edge-to-edge, but the others are not.
Each primitive unit contains eight tiles.
Stein (1985) Type 14
A 14th convex pentagon type was found by Rolf Stein in 1985.
The tiling is 3-isohedral and non-edge-to-edge. It has completely determined tiles, with no degrees of freedom. The exact proportions are specified by and angle B obtuse with . Other relations can easily be deduced.
The primitive units contain six tiles respectively. It has p2 (2222) symmetry.
Mann/McLoud/Von Derau (2015) Type 15
University of Washington Bothell mathematicians Casey Mann, Jennifer McLoud-Mann, and David Von Derau discovered a 15th monohedral tiling convex pentagon in 2015 using a computer algorithm. It is 3-isohedral and non-edge-to-edge, drawn with 6 colors, 2 shades of 3 colors, representing chiral pairs of the three isohedral positions. The pgg symmetry is reduced to p2 when the chiral pairs are considered distinct. It has completely determined tiles, with no degrees of freedom. The primitive units contain twelve tiles. It has pgg (22×) symmetry, and p2 (2222) if chiral pairs are considered distinct.
No more periodic pentagonal tiling types
In July 2017 Michaël Rao completed a computer-assisted proof showing that there are no other types of convex pentagons that can tile the plane. The complete list of convex polygons that can tile the plane includes the above 15 pentagons, three types of hexagons, and all quadrilaterals and triangles. A consequence of this proof is that no convex polygon exists that tiles the plane only aperiodically, since all of the above types allow for a periodic tiling.
Nonperiodic monohedral pentagonal tilings
Nonperiodic monohedral pentagonal tilings can also be constructed, like the example below with 6-fold rotational symmetry by Michael Hirschhorn. Angles are A = 140°, B = 60°, C = 160°, D = 80°, E = 100°.
In 2016 it could be shown by Bernhard Klaassen that every discrete rotational symmetry type can be represented by a monohedral pentagonal tiling from the same class of pentagons. Examples for 5-fold and 7-fold symmetry are shown below. Such tilings are possible for any type of n-fold rotational symmetry with n>2.
Dual uniform tilings
There are three isohedral pentagonal tilings generated as duals of the uniform tilings, those with 5-valence vertices. They represent special higher symmetry cases of the 15 monohedral tilings above. Uniform tilings and their duals are all edge-to-edge. These dual tilings are also called Laves tilings. The symmetry of the uniform dual tilings is the same as the uniform tilings. Because the uniform tilings are isogonal, the duals are isohedral.
The three dual uniform (Laves) pentagonal tilings are:

| Symmetry | Tiling | Pentagon angles | Face configuration |
|---|---|---|---|
| cmm (2*22) | Prismatic pentagonal tiling (instance of type 1) | 120°, 120°, 120°, 90°, 90° | V3.3.3.4.4 |
| p4g (4*2) | Cairo pentagonal tiling (instance of type 4) | 120°, 120°, 90°, 120°, 90° | V3.3.4.3.4 |
| p6 (632) | Floret pentagonal tiling (instance of types 1, 5 and 6) | 120°, 120°, 120°, 120°, 60° | V3.3.3.3.6 |

The Cairo pentagonal tiling can be generated on wolframalpha.com both by a pentagon type 4 tiling query and by a pentagon type 2 tiling query (caution: the Wolfram definition of the pentagon type 2 tiling does not correspond with type 2 as defined by Reinhardt in 1918).
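The pentagon angles listed above follow directly from the face configurations: at a vertex shared by v tiles the pentagon's interior angle is 360°/v, and the five angles of each Laves pentagon must sum to 540°. The short check below (the labels are informal, not notation from the article) confirms the correspondence.

```python
# Each Laves (dual-uniform) pentagon meets its neighbours at vertices of the
# listed valences; the pentagon's interior angle at such a vertex is 360/valence.
face_configs = {
    "prismatic pentagonal tiling (V3.3.3.4.4)": [3, 3, 3, 4, 4],
    "Cairo pentagonal tiling     (V3.3.4.3.4)": [3, 3, 4, 3, 4],
    "floret pentagonal tiling    (V3.3.3.3.6)": [3, 3, 3, 3, 6],
}

for name, valences in face_configs.items():
    angles = [360 / v for v in valences]
    assert abs(sum(angles) - 540) < 1e-9   # interior angles of any pentagon sum to 540
    print(name, "->", [round(a) for a in angles])
```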
Dual k-uniform tilings
The k-uniform tilings with valence-5 vertices also have pentagonal dual tilings, containing the same three shaped pentagons as the semiregular duals above, but contain a mixture of pentagonal types. A k-uniform tiling has a k-isohedral dual tiling and are represented by different colors and shades of colors below.
For example these 2, 3, 4, and 5-uniform duals are all pentagonal:
Pentagonal/hexagonal tessellation
Pentagons have a peculiar relationship with hexagons. As demonstrated graphically below, some types of hexagons can be subdivided into pentagons. For example, a regular hexagon bisects into two type 1 pentagons. Subdivision of convex hexagons is also possible with three (type 3), four (type 4) and nine (type 3) pentagons.
By extension of this relation, a plane can be tessellated by a single pentagonal prototile shape in ways that generate hexagonal overlays. For example:
Non-convex pentagons
With pentagons that are not required to be convex, additional types of tiling are possible. An example is the sphinx tiling, an aperiodic tiling formed by a pentagonal rep-tile. The sphinx may also tile the plane periodically, by fitting two sphinx tiles together to form a parallelogram and then tiling the plane by translation of this parallelogram, a pattern that can be extended to any non-convex pentagon that has two consecutive angles adding to 2π.
It is possible to divide an equilateral triangle into three congruent non-convex pentagons, meeting at the center of the triangle, and to tile the plane with the resulting three-pentagon unit.
A similar method can be used to subdivide squares into four congruent non-convex pentagons, or regular hexagons into six congruent non-convex pentagons, and then tile the plane with the resulting unit.
In non-Euclidean geometry
Spherical tiling
A dodecahedron can be considered a regular tiling of 12 pentagons on the surface of a sphere, with Schläfli symbol {5,3}, having three pentagons around each vertex.
One may also consider a degenerate tiling by two hemispheres, with the great circle between them subdivided into five equal arcs, as a pentagonal tiling with Schläfli symbol {5,2}.
Regular hyperbolic tilings
In the hyperbolic plane, one can construct regular pentagons that have any interior angle for . The resulting pentagons tile the plane regularly, with pentagons around each vertex. For instance, the order-4 pentagonal tiling, {5,4}, has four right-angled pentagons around each vertex. A limiting case is the infinite-order pentagonal tiling {5,∞} produced by ideal regular pentagons. These pentagons have ideal points as their vertices, with angle equal to zero.
Irregular hyperbolic tilings
There are an infinite number of dual uniform tilings in hyperbolic plane with isogonal irregular pentagonal faces. They have face configurations as V3.3.p.3.q.
A version of the binary tiling, with its tiles bounded by hyperbolic line segments rather than arcs of horocycles, forms pentagonal tilings that must be non-periodic, in the sense that their symmetry groups can be one-dimensional but not two-dimensional.
References
Bibliography
External links
Pentagon Tilings
The 14 Pentagons that Tile the Plane
15 (monohedral) Tilings with a convex pentagonal tile with k-isohedral colorings
Code to display the 14th pentagon type tiling
Code to display the 15th pentagon type tiling
Tessellation | Pentagonal tiling | [
"Physics",
"Mathematics"
] | 2,874 | [
"Tessellation",
"Planes (geometry)",
"Euclidean plane geometry",
"Symmetry"
] |
4,850,474 | https://en.wikipedia.org/wiki/Pitchfork%20bifurcation | In bifurcation theory, a field within mathematics, a pitchfork bifurcation is a particular type of local bifurcation where the system transitions from one fixed point to three fixed points. Pitchfork bifurcations, like Hopf bifurcations, have two types – supercritical and subcritical.
In continuous dynamical systems described by ODEs—i.e. flows—pitchfork bifurcations occur generically in systems with symmetry.
Supercritical case
The normal form of the supercritical pitchfork bifurcation is
For , there is one stable equilibrium at . For there is an unstable equilibrium at , and two stable equilibria at .
Subcritical case
The normal form for the subcritical case is
In this case, for the equilibrium at is stable, and there are two unstable equilibria at . For the equilibrium at is unstable.
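Because the normal-form equations themselves did not survive extraction in the text above, note that the standard normal forms (as in the Strogatz reference listed below) are dx/dt = r x - x^3 for the supercritical case and dx/dt = r x + x^3 for the subcritical case. The following SymPy sketch recovers the fixed points and their stability, matching the statements above.

```python
import sympy as sp

x, r = sp.symbols('x r', real=True)

cases = {"supercritical": r*x - x**3, "subcritical": r*x + x**3}
for label, f in cases.items():
    fixed_points = sp.solve(f, x)                              # equilibria as functions of r
    slopes = [sp.simplify(sp.diff(f, x).subs(x, xe)) for xe in fixed_points]
    print(label, fixed_points, slopes)                         # stable where f'(x*) < 0
# supercritical: x* = 0 (slope r), x* = +/-sqrt(r)  (slope -2r, stable for r > 0)
# subcritical:   x* = 0 (slope r), x* = +/-sqrt(-r) (slope -2r, unstable for r < 0)
```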
Formal definition
An ODE
described by a one parameter function with satisfying:
(f is an odd function),
has a pitchfork bifurcation at . The form of the pitchfork is given
by the sign of the third derivative:
Note that subcritical and supercritical describe the stability of the outer lines of the pitchfork (dashed or solid, respectively) and are not dependent on which direction the pitchfork faces. For example, the negative of the first ODE above, , faces the same direction as the first picture but reverses the stability.
See also
Bifurcation theory
Bifurcation diagram
References
Steven Strogatz, Non-linear Dynamics and Chaos: With applications to Physics, Biology, Chemistry and Engineering, Perseus Books, 2000.
S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, 1990.
Bifurcation theory | Pitchfork bifurcation | [
"Mathematics"
] | 355 | [
"Bifurcation theory",
"Dynamical systems"
] |
4,852,151 | https://en.wikipedia.org/wiki/Hamilton%27s%20principle | In physics, Hamilton's principle is William Rowan Hamilton's formulation of the principle of stationary action. It states that the dynamics of a physical system are determined by a variational problem for a functional based on a single function, the Lagrangian, which may contain all physical information concerning the system and the forces acting on it. The variational problem is equivalent to and allows for the derivation of the differential equations of motion of the physical system. Although formulated originally for classical mechanics, Hamilton's principle also applies to classical fields such as the electromagnetic and gravitational fields, and plays an important role in quantum mechanics, quantum field theory and criticality theories.
Mathematical formulation
Hamilton's principle states that the true evolution of a system described by generalized coordinates between two specified states and at two specified times and is a stationary point (a point where the variation is zero) of the action functional
where is the Lagrangian function for the system. In other words, any first-order perturbation of the true evolution results in (at most) second-order changes in . The action is a functional, i.e., something that takes as its input a function and returns a single number, a scalar. In terms of functional analysis, Hamilton's principle states that the true evolution of a physical system is a solution of the functional equation
That is, the system takes a path in configuration space for which the action is stationary, with fixed boundary conditions at the beginning and the end of the path.
Euler–Lagrange equations derived from the action integral
Requiring that the true trajectory be a stationary point of the action functional is equivalent to a set of differential equations for (the Euler–Lagrange equations), which may be derived as follows.
Let represent the true evolution of the system between two specified states and at two specified times and , and let be a small perturbation that is zero at the endpoints of the trajectory
To first order in the perturbation , the change in the action functional would be
where we have expanded the Lagrangian L to first order in the perturbation .
Applying integration by parts to the last term results in
The boundary conditions cause the first term to vanish
Hamilton's principle requires that this first-order change is zero for all possible perturbations , i.e., the true path is a stationary point of the action functional (either a minimum, maximum or saddle point). This requirement can be satisfied if and only if
These equations are called the Euler–Lagrange equations for the variational problem.
Canonical momenta and constants of motion
The conjugate momentum for a generalized coordinate is defined by the equation
An important special case of the Euler–Lagrange equation occurs when L does not contain a generalized coordinate explicitly,
that is, the conjugate momentum is a constant of the motion.
In such cases, the coordinate is called a cyclic coordinate. For example, if we use polar coordinates , , to describe the planar motion of a particle, and if does not depend on , the conjugate momentum is the conserved angular momentum.
Example: Free particle in polar coordinates
Trivial examples help to appreciate the use of the action principle via the Euler–Lagrange equations. A free particle (mass m and velocity v) in Euclidean space moves in a straight line. Using the Euler–Lagrange equations, this can be shown in polar coordinates as follows. In the absence of a potential, the Lagrangian is simply equal to the kinetic energy
in orthonormal (x,y) coordinates, where the dot represents differentiation with respect to the curve parameter (usually the time, t). Therefore, upon application of the Euler–Lagrange equations,
And likewise for y. Thus the Euler–Lagrange formulation can be used to derive Newton's laws.
In polar coordinates the kinetic energy and hence the Lagrangian becomes
The radial and angular components of the Euler–Lagrange equations become, respectively
remembering that r is also dependent on time and the product rule is needed to compute the total time derivative .
The solution of these two equations is given by
for a set of constants , , , determined by initial conditions.
Thus, indeed, the solution is a straight line given in polar coordinates: is the velocity, is the distance of the closest approach to the origin, and is the angle of motion.
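The polar-coordinate calculation above is straightforward to reproduce with a computer algebra system. The sketch below (SymPy assumed available) builds the free-particle Lagrangian and forms the Euler–Lagrange expressions for r and θ; the θ equation expresses conservation of the angular momentum m r² dθ/dt.

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
r, th = sp.Function('r')(t), sp.Function('theta')(t)

# Free particle: the Lagrangian is just the kinetic energy in polar coordinates
L = sp.Rational(1, 2) * m * (sp.diff(r, t)**2 + r**2 * sp.diff(th, t)**2)

def euler_lagrange(L, q):
    """d/dt(dL/d(q')) - dL/dq for a generalized coordinate q(t)."""
    return sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)

print(sp.simplify(euler_lagrange(L, r)))    # m*(r'' - r*theta'**2): the radial equation
print(sp.simplify(euler_lagrange(L, th)))   # expands d/dt(m*r**2*theta'): angular momentum conserved
```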
Applied to deformable bodies
Hamilton's principle is an important variational principle in elastodynamics. As opposed to a system composed of rigid bodies, deformable bodies have an infinite number of degrees of freedom and occupy continuous regions of space; consequently, the state of the system is described by using continuous functions of space and time. The extended Hamilton Principle for such bodies is given by
where is the kinetic energy, is the elastic energy, is the work done by external loads on the body, and , the initial and final times. If the system is conservative, the work done by external forces may be derived from a scalar potential . In this case,
This is called Hamilton's principle and it is invariant under coordinate transformations.
Comparison with Maupertuis' principle
Hamilton's principle and Maupertuis' principle are occasionally confused and both have been called the principle of least action. They differ in three important ways:
their definition of the action... Maupertuis' principle uses an integral over the generalized coordinates known as the abbreviated action or reduced action where p = (p1, p2, ..., pN) are the conjugate momenta defined above. By contrast, Hamilton's principle uses , the integral of the Lagrangian over time.
the solution that they determine... Hamilton's principle determines the trajectory q(t) as a function of time, whereas Maupertuis' principle determines only the shape of the trajectory in the generalized coordinates. For example, Maupertuis' principle determines the shape of the ellipse on which a particle moves under the influence of an inverse-square central force such as gravity, but does not describe per se how the particle moves along that trajectory. (However, this time parameterization may be determined from the trajectory itself in subsequent calculations using the conservation of energy). By contrast, Hamilton's principle directly specifies the motion along the ellipse as a function of time.
...and the constraints on the variation. Maupertuis' principle requires that the two endpoint states q1 and q2 be given and that energy be conserved along every trajectory (same energy for each trajectory). This forces the endpoint times to be varied as well. By contrast, Hamilton's principle does not require the conservation of energy, but does require that the endpoint times t1 and t2 be specified as well as the endpoint states q1 and q2.
Action principle for fields
Classical field theory
The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravity.
The Einstein equation utilizes the Einstein–Hilbert action as constrained by a variational principle.
The path of a body in a gravitational field (i.e. free fall in space time, a so-called geodesic) can be found using the action principle.
Quantum mechanics and quantum field theory
In quantum mechanics, the system does not follow a single path whose action is stationary, but the behavior of the system depends on all imaginable paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, that gives the probability amplitudes of the various outcomes.
Although equivalent in classical mechanics with Newton's laws, the action principle is better suited for generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. In particular, it is fully appreciated and best understood within quantum mechanics. Richard Feynman's path integral formulation of quantum mechanics is based on a stationary-action principle, using path integrals. Maxwell's equations can be derived as conditions of stationary action.
See also
Analytical mechanics
Configuration space
Hamilton–Jacobi equation
Phase space
Geodesics as Hamiltonian flows
References
W.R. Hamilton, "On a General Method in Dynamics.", Philosophical Transactions of the Royal Society Part II (1834) pp. 247–308; Part I (1835) pp. 95–144. (From the collection Sir William Rowan Hamilton (1805–1865): Mathematical Papers edited by David R. Wilkins, School of Mathematics, Trinity College, Dublin 2, Ireland. (2000); also reviewed as On a General Method in Dynamics)
Goldstein H. (1980) Classical Mechanics, 2nd ed., Addison Wesley, pp. 35–69.
Landau LD and Lifshitz EM (1976) Mechanics, 3rd. ed., Pergamon Press. (hardcover) and (softcover), pp. 2–4.
Arnold VI. (1989) Mathematical Methods of Classical Mechanics, 2nd ed., Springer Verlag, pp. 59–61.
Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.
Bedford A.: Hamilton's Principle in Continuum Mechanics. Pitman, 1985. Springer 2001, ISBN 978-3-030-90305-3 ISBN 978-3-030-90306-0 (eBook), https://doi.org/10.1007/978-3-030-90306-0
Lagrangian mechanics
Calculus of variations
Principles
William Rowan Hamilton | Hamilton's principle | [
"Physics",
"Mathematics"
] | 1,968 | [
"Lagrangian mechanics",
"Classical mechanics",
"Dynamical systems"
] |
4,852,393 | https://en.wikipedia.org/wiki/Pentavalent%20antimonial | Pentavalent antimonials (also abbreviated pentavalent Sb or SbV) are a group of compounds used for the treatment of leishmaniasis. They are also called pentavalent antimony compounds.
Types
The first pentavalent antimonial, urea stibamine, was synthesised by the Indian scientist Upendranath Brahmachari in 1922. Though it caused a dramatic decline in deaths due to leishmaniasis, it fell out of favour in the 1950s due to higher toxicity compared to sodium stibogluconate.
The compounds currently available for clinical use are:
sodium stibogluconate (Pentostam; manufactured by GlaxoSmithKline; available in United States [through the Centers for Disease Control only] and UK), which is administered by slow intravenous injection.
meglumine antimoniate (Glucantim; manufactured by Aventis; available in Brazil, France and Italy), which is administered by intramuscular or intravenous injection.
The pentavalent antimonials can only be given by injection: there are no oral preparations available.
Alternatives
In many countries, widespread resistance to antimony has meant that liposomal amphotericin or miltefosine are now used in preference.
Side effects
Cardiotoxicity, reversible kidney failure, pancreatitis, anemia, leukopenia, rash, headache, abdominal pain, nausea, vomiting, arthralgia, myalgia, thrombocytopenia, and transaminase elevation.
References
Antiprotozoal agents
Antimony(V) compounds | Pentavalent antimonial | [
"Biology"
] | 339 | [
"Antiprotozoal agents",
"Biocides"
] |
4,852,466 | https://en.wikipedia.org/wiki/Explosive%20lens | An explosive lens—as used, for example, in nuclear weapons—is a highly specialized shaped charge. In general, it is a device composed of several explosive charges. These charges are arranged and formed with the intent to control the shape of the detonation wave passing through them. The explosive lens is conceptually similar to an optical lens, which focuses light waves. The charges that make up the explosive lens are chosen to have different rates of detonation. In order to convert a spherically expanding wavefront into a spherically converging one using only a single boundary between the constituent explosives, the boundary shape must be a paraboloid; similarly, to convert a spherically diverging front into a flat one, the boundary shape must be a hyperboloid, and so on. Several boundaries can be used to reduce aberrations (deviations from intended shape) of the final wavefront.
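The hyperboloid requirement quoted above follows from an equal-time condition analogous to Fermat's principle: every path from the point initiator, through the fast outer explosive to the boundary and then straight through the slow explosive to the output plane, must take the same time. The sketch below is a simplified two-dimensional model using assumed, merely representative detonation speeds and dimensions (not values from the article); it solves the equal-time condition numerically and shows that r minus (v_fast/v_slow)·x is constant along the interface, the focus-directrix form of a conic with eccentricity greater than one, i.e. a hyperbola (a hyperboloid in three dimensions).

```python
import math

# Assumed, representative detonation speeds (mm/us): fast outer and slow inner explosive.
V_FAST, V_SLOW = 8.0, 5.0
PLANE_X, APEX_X = 100.0, 60.0            # output plane and on-axis apex of the interface, mm
T_TOTAL = APEX_X / V_FAST + (PLANE_X - APEX_X) / V_SLOW   # arrival time along the axis

def interface_x(y):
    """x-coordinate of the fast/slow boundary at height y (equal-time condition)."""
    lo, hi = 0.0, PLANE_X
    for _ in range(60):                   # bisection on the monotone time residual
        mid = 0.5 * (lo + hi)
        t = math.hypot(mid, y) / V_FAST + (PLANE_X - mid) / V_SLOW
        lo, hi = (mid, hi) if t > T_TOTAL else (lo, mid)
    return 0.5 * (lo + hi)

for y in (0.0, 20.0, 40.0, 60.0):
    x = interface_x(y)
    r = math.hypot(x, y)                  # distance from the point initiator
    # r - (V_FAST/V_SLOW)*x is constant along the curve: a conic with
    # eccentricity V_FAST/V_SLOW > 1, i.e. a hyperbola (hyperboloid in 3-D)
    print(f"y={y:5.1f}  x={x:6.2f}  r - e*x = {r - (V_FAST / V_SLOW) * x:7.2f}")
```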
Invention
According to Hans Bethe, the explosive lens device was invented and designed by John von Neumann.
Use in nuclear weapons
In a nuclear weapon, an array of explosive lenses is used to change the several approximately spherical diverging detonation waves into a single spherical converging one. The converging wave is then used to collapse the various shells (tamper, reflector, pusher, etc.) and finally compresses the core (pit) of fissionable material to a prompt critical state. They are usually machined from a plastic bonded explosive and an inert insert, called a wave-shaper, which is often a dense foam or plastic, though many other materials can be used. Other, mainly older explosive lenses do not include a wave shaper, but employ two explosive types that have significantly different velocities of detonation (VoD), which are in the range from 5 to 9 km/s. The use of the low- and high-speed explosives again results in a spherical converging detonation wave to compress the physics package. The original Gadget device used in the Trinity test and Fat Man dropped on Nagasaki used Baratol as the low-VoD explosive and Composition B as the fast, but other combinations can be used.
The illustration to the left represents a cross section through a segment of a polygonal wedge. The wedges are fitted together to form a spherical device. The exploding-bridgewire detonator at the far left triggers a semi-spherical detonation wave through the high-speed outer explosive. (It is semi-spherical because the exploding-bridgewire acts as a point-detonator.) As the wave is transferred to the precisely shaped inner explosive, a new spherical wave—centered on the object—is formed. The successful functioning of this device hinges on the simultaneous initiation of the wave in each segment, uniformity and precision in the speed of the wave, and correctness and accuracy in the shape of the interface between the two explosives.
A series of experiments were performed in 1944 and 1945 during the Manhattan Project to develop the lenses for a satisfactory implosion. One of the most important tests was the series of RaLa Experiments.
Initially, a 32 "point" assembly was used (each of which had a pair of exploding-bridgewire detonators).
Later, a 92 "point" assembly was tried, with the objective of obtaining a smaller assembly with improved performance.
Finally, with the success of the Swan test nuclear explosive device, a two "point" assembly became feasible. Swan used an "air lens" system in addition to shaped charges and became the basis of all U.S. successor designs, nuclear and thermonuclear alike, and featured small size, light weight, and exceptional reliability and safety, as well as using the least amount of strategic material of any design.
Other uses
Lenses using alternate design techniques and producing flat "plane wave" outputs are used for high transient pressure physics and materials science experiments.
See also
Impact depth
Nuclear weapon design
Notes
Explosives engineering
Nuclear weapon design
Nuclear weapon implosion | Explosive lens | [
"Engineering"
] | 823 | [
"Explosives engineering"
] |
4,853,266 | https://en.wikipedia.org/wiki/Tire%20Science%20and%20Technology | Tire Science and Technology is a quarterly peer-reviewed scientific journal that publishes original research and reviews on experimental, analytical, and computational aspects of tires. Since 1978, the Tire Society has published the journal. The current editor-in-chief is Michael Kaliske (Dresden University of Technology).
History
The journal was founded in 1973 and was originally published by a committee of the American Society for Testing and Materials until 1977, when the Tire Society was incorporated for the purpose of continuing the journal.
Content
Topics of interest to journal readers include adhesion, aerospace, aging, agriculture, automotive, composite materials, constitutive modeling, contact mechanics, cord mechanics, curing, design theories, durability, elastomers, finite element analysis, force and moment behavior, groove wander, heat build up, hydroplaning, impact, manufacturing, mechanics, military, noise, pavement, performance evaluation, racing, rolling resistance, snow and ice, soil, standing waves, stiffness, strength, traction, vehicle dynamics, vibration, and wear.
Past Editors
1977–1982: Dan Livingston (Goodyear Tire and Rubber Company)
1983–1994: Raouf Ridha (Goodyear Tire and Rubber Company)
1995–1999: Jozef DeEskinazi (Continental)
2000–2007: Farhad Tabaddor (Michelin)
2008–2009: William V. Mars (Cooper Tire)
2010–present: Michael Kaliske (TU Dresden)
External links
References
Tires
Engineering journals
Materials science journals
Academic journals published by learned and professional societies
Academic journals established in 1973
English-language journals
Quarterly journals | Tire Science and Technology | [
"Materials_science",
"Engineering"
] | 320 | [
"Materials science journals",
"Materials science"
] |
4,854,281 | https://en.wikipedia.org/wiki/Dannie%20Heineman%20Prize%20for%20Mathematical%20Physics | Dannie Heineman Prize for Mathematical Physics is an award given each year since 1959 jointly by the American Physical Society and the American Institute of Physics. It was established by the Heineman Foundation in honour of Dannie Heineman. As of 2010, the prize consists of US$10,000 and a certificate citing the contributions made by the recipient, plus travel expenses to attend the meeting at which the prize is bestowed.
Past Recipients
Source: American Physical Society
2024 David C. Brydges
2023 Nikita Nekrasov
2022 Antti Kupiainen and Krzysztof Gawędzki
2021 Joel Lebowitz
2020 Svetlana Jitomirskaya
2019 T. Bill Sutherland, Francesco Calogero and Michel Gaudin
2018 Barry Simon
2017 Carl M. Bender
2016 Andrew Strominger and Cumrun Vafa
2015 Pierre Ramond
2014 Gregory W. Moore
2013 Michio Jimbo and Tetsuji Miwa
2012 Giovanni Jona-Lasinio
2011 Herbert Spohn
2010 Michael Aizenman
2009 Carlo Becchi, Alain Rouet, Raymond Stora and Igor Tyutin
2008 Mitchell Feigenbaum
2007 Juan Maldacena and Joseph Polchinski
2006 Sergio Ferrara, Daniel Z. Freedman and Peter van Nieuwenhuizen
2005 Giorgio Parisi
2004 Gabriele Veneziano
2003 Yvonne Choquet-Bruhat and James W. York
2002 Michael B. Green and John Henry Schwarz
2001 Vladimir Igorevich Arnold
2000 Sidney R. Coleman
1999 Barry M. McCoy, Tai Tsun Wu and Alexander B. Zamolodchikov
1998 Nathan Seiberg and Edward Witten
1997 Harry W. Lehmann
1996 Roy J. Glauber
1995 Roman W. Jackiw
1994 Richard Arnowitt, Stanley Deser and Charles W. Misner
1993 Martin C. Gutzwiller
1992 Stanley Mandelstam
1991 Thomas C. Spencer and Jürg Fröhlich
1990 Yakov Sinai
1989 John S. Bell
1988 Julius Wess and Bruno Zumino
1987 Rodney Baxter
1986 Alexander M. Polyakov
1985 David P. Ruelle
1984 Robert B. Griffiths
1983 Martin D. Kruskal
1982 John Clive Ward
1981 Jeffrey Goldstone
1980 James Glimm and Arthur Jaffe
1979 Gerard 't Hooft
1978 Elliott Lieb
1977 Steven Weinberg
1976 Stephen Hawking
1975 Ludwig D. Faddeev
1974 Subrahmanyan Chandrasekhar
1973 Kenneth G. Wilson
1972 James D. Bjorken
1971 Roger Penrose
1970 Yoichiro Nambu
1969 Arthur S. Wightman
1968 Sergio Fubini
1967 Gian Carlo Wick
1966 Nikolai N. Bogoliubov
1965 Freeman Dyson
1964 Tullio Regge
1963 Keith A. Brueckner
1962 Léon Van Hove
1961 Marvin Leonard Goldberger
1960 Aage Bohr
1959 Murray Gell-Mann
See also
Dannie Heineman Prize for Astrophysics
List of mathematics awards
List of physics awards
Prizes named after people
References
External links
Official page at American Physical Society
Awards of the American Physical Society
Awards of the American Institute of Physics
Mathematics awards
Awards established in 1959
Mathematical physics | Dannie Heineman Prize for Mathematical Physics | [
"Physics",
"Mathematics",
"Technology"
] | 627 | [
"Applied mathematics",
"Theoretical physics",
"Mathematics awards",
"Science and technology awards",
"Mathematical physics"
] |
4,855,071 | https://en.wikipedia.org/wiki/Dispersion%20%28chemistry%29 | A dispersion is a system in which distributed particles of one material are dispersed in a continuous phase of another material. The two phases may be in the same or different states of matter.
Dispersions are classified in a number of different ways, including how large the particles are in relation to the particles of the continuous phase, whether or not precipitation occurs, and the presence of Brownian motion. In general, dispersions of particles sufficiently large for sedimentation are called suspensions, while those of smaller particles are called colloids and solutions.
Structure and properties
Dispersions do not display any structure; i.e., the particles (or, in the case of emulsions, droplets) dispersed in the liquid or solid matrix (the "dispersion medium") are assumed to be statistically distributed. Therefore, percolation theory is usually assumed to describe their properties appropriately.
However, percolation theory can be applied only if the system it should describe is in or close to thermodynamic equilibrium. There are only very few studies about the structure of dispersions (emulsions), although they are plentiful in type and in use all over the world in innumerable applications (see below).
In the following, only dispersions with a dispersed-phase diameter of less than 1 μm will be discussed. To understand the formation and properties of such dispersions (including emulsions), it must be considered that the dispersed phase exhibits a "surface" that is covered ("wetted") by a different "surface"; together, the two surfaces form an interface. Both surfaces have to be created (which requires a huge amount of energy), and the interfacial tension (the difference of the surface tensions) compensates the energy input only partly, if at all.
Experimental evidence suggests that dispersions have a structure very different from any kind of statistical distribution (which would be characteristic of a system in thermodynamic equilibrium), and instead display structures similar to self-organisation, which can be described by non-equilibrium thermodynamics. This is the reason why some liquid dispersions turn into gels or even solids when the concentration of the dispersed phase exceeds a critical value (which depends on particle size and interfacial tension). It also explains the sudden appearance of conductivity in a system of a dispersed conductive phase in an insulating matrix.
Dispersion description
Dispersion is a process by which (in the case of a solid dispersing in a liquid) agglomerated particles are separated from each other, and a new interface is generated between the inner surface of the liquid dispersion medium and the surface of the dispersed particles. This process is facilitated by molecular diffusion and convection.
With respect to molecular diffusion, dispersion occurs as a result of an unequal concentration of the introduced material throughout the bulk medium. When the dispersed material is first introduced into the bulk medium, the region at which it is introduced then has a higher concentration of that material than any other point in the bulk. This unequal distribution results in a concentration gradient that drives the dispersion of particles in the medium so that the concentration is constant across the entire bulk. With respect to convection, variations in velocity between flow paths in the bulk facilitate the distribution of the dispersed material into the medium.
Although both transport phenomena contribute to the dispersion of a material into the bulk, the mechanism of dispersion is primarily driven by convection in cases where there is significant turbulent flow in the bulk. Diffusion is the dominant mechanism in the process of dispersion in cases of little to no turbulence in the bulk, where molecular diffusion is able to facilitate dispersion over a long period of time. These phenomena are reflected in common real-world events. The molecules in a drop of food coloring added to water will eventually disperse throughout the entire medium, where the effects of molecular diffusion are more evident. However, stirring the mixture with a spoon will create turbulent flows in the water that accelerate the process of dispersion through convection-dominated dispersion.
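To make the diffusion-versus-convection comparison concrete, the ratio of the two transport rates can be expressed as a Péclet number. The short Python sketch below is illustrative only; the formula, the diffusivity and the still/stirred example values are assumptions added here rather than figures from the text above.

```python
# Rough comparison of convective vs. diffusive transport via the Peclet number.
# Pe >> 1: convection dominates; Pe << 1: molecular diffusion dominates.

def peclet_number(velocity_m_s, length_m, diffusivity_m2_s):
    """Dimensionless ratio of convective to diffusive transport rates."""
    return velocity_m_s * length_m / diffusivity_m2_s

# Assumed example: food dye (D ~ 5e-10 m^2/s) in a 0.1 m glass of water.
print(peclet_number(1e-6, 0.1, 5e-10))  # ~200: even weak residual currents matter
print(peclet_number(0.1, 0.1, 5e-10))   # ~2e7: stirred water, convection-dominated
```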
Degree of dispersion
The term dispersion also refers to the physical property of the degree to which particles clump together into agglomerates or aggregates. While the two terms are often used interchangeably, according to ISO nanotechnology definitions, an agglomerate is a reversible collection of particles weakly bound, for example by van der Waals forces or physical entanglement, whereas an aggregate is composed of irreversibly bonded or fused particles, for example through covalent bonds. A full quantification of dispersion would involve the size, shape, and number of particles in each agglomerate or aggregate, the strength of the interparticle forces, their overall structure, and their distribution within the system. However, the complexity is usually reduced by comparing the measured size distribution of "primary" particles to that of the agglomerates or aggregates. When discussing suspensions of solid particles in liquid media, the zeta potential is most often used to quantify the degree of dispersion, with suspensions possessing a high absolute value of zeta potential being considered as well-dispersed.
Types of dispersions
A solution describes a homogeneous mixture where the dispersed particles will not settle if the solution is left undisturbed for a prolonged period of time.
A colloid is a heterogeneous mixture in which the dispersed particles have, in at least one direction, a dimension roughly between 1 nm and 1 μm, or in which discontinuities are found in the system at distances of that order.
A suspension is a heterogeneous dispersion of larger particles in a medium. Unlike solutions and colloids, if left undisturbed for a prolonged period of time, the suspended particles will settle out of the mixture.
Although suspensions are relatively simple to distinguish from solutions and colloids, it may be difficult to distinguish solutions from colloids since the particles dispersed in the medium may be too small to distinguish by the human eye. Instead, the Tyndall effect is used to distinguish solutions and colloids. Due to the various reported definitions of solutions, colloids, and suspensions provided in the literature, it is difficult to label each classification with a specific particle size range. The International Union of Pure and Applied Chemistry attempts to provide a standard nomenclature for colloids as particles in a size range having a dimension roughly between 1 nm and 1 μm.
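As a rough illustration of the size-based classification described above, the short Python sketch below assigns a dispersion type from the dispersed-particle diameter. The cut-offs simply restate the approximate 1 nm and 1 μm boundaries quoted above and should not be read as sharp limits.

```python
def classify_dispersion(particle_diameter_m):
    """Rough classification of a dispersion by dispersed-particle size (in metres)."""
    if particle_diameter_m < 1e-9:
        return "solution"      # dissolved species, will not settle
    if particle_diameter_m <= 1e-6:
        return "colloid"       # roughly the IUPAC colloidal range, shows the Tyndall effect
    return "suspension"        # large enough to settle if left undisturbed

print(classify_dispersion(5e-10))  # solution
print(classify_dispersion(2e-7))   # colloid
print(classify_dispersion(1e-5))   # suspension
```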
In addition to the classification by particle size, dispersions can also be labeled by the combination of the dispersed phase and the medium phase that the particles are suspended in. Aerosols are liquids dispersed in a gas, sols are solids in liquids, emulsions are liquids dispersed in liquids (more specifically a dispersion of two immiscible liquids), and gels are liquids dispersed in solids.
Examples of dispersions
Milk is a commonly cited example of an emulsion, a specific type of dispersion of one liquid into another liquid where the two liquids are immiscible. The fat molecules suspended in milk provide a mode of delivery of important fat-soluble vitamins and nutrients from the mother to newborn. The mechanical, thermal, or enzymatic treatment of milk manipulates the integrity of these fat globules and results in a wide variety of dairy products.
Oxide dispersion-strengthened alloys (ODS) are an example of the dispersion of oxide particles in a metal medium, which improves the high-temperature tolerance of the material. These alloys therefore have several applications in the nuclear energy industry, where materials must withstand extremely high temperatures to maintain operation.
The degradation of coastal aquifers is a direct result of seawater intrusion into, and dispersion within, the aquifer following excessive use of the aquifer. When an aquifer is depleted for human use, it is naturally replenished by groundwater moving in from other areas. In the case of coastal aquifers, the water supply is replenished both from the land boundary on one side and the sea boundary on the other. After excessive discharge, saline water from the sea boundary will enter the aquifer and disperse in the freshwater medium, threatening the viability of the aquifer for human use. Several different solutions to seawater intrusion in coastal aquifers have been proposed, including engineering methods of artificial recharge and implementing physical barriers at the sea boundary.
Chemical dispersants are used in oil spills to mitigate the effects of the spill and promote the degradation of oil particles. The dispersants effectively break up pools of oil sitting on the surface of the water into smaller droplets that disperse into the water, which lowers the overall concentration of oil in the water to prevent any further contamination or impact on marine biology and coastal wildlife.
References
Colloidal chemistry
Solutions
Chemical mixtures | Dispersion (chemistry) | [
"Chemistry"
] | 1,830 | [
"Colloidal chemistry",
"Colloids",
"Surface science",
"Homogeneous chemical mixtures",
"Chemical mixtures",
"nan",
"Solutions"
] |
4,855,190 | https://en.wikipedia.org/wiki/Meglumine%20antimoniate | Meglumine antimoniate is a medicine used to treat leishmaniasis. This includes visceral, mucocutaneous, and cutaneous leishmaniasis. It is given by injection into a muscle or into the area infected.
Side effects include loss of appetite, nausea, abdominal pain, cough, feeling tired, muscle pain, irregular heartbeat, and kidney problems. It should not be used in people with significant heart, liver, or kidney problems. It is not recommended during breastfeeding. It belongs to a group of medications known as the pentavalent antimonials.
Meglumine antimoniate came into medical use in 1946. It is on the World Health Organization's List of Essential Medicines. It is available in Southern Europe and Latin America but not the United States.
Society and culture
It is manufactured by Aventis and sold as Glucantime in France, and Glucantim in Italy.
See also
Meglumine
References
Antiprotozoal agents
Antimony(V) compounds
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Meglumine antimoniate | [
"Biology"
] | 222 | [
"Antiprotozoal agents",
"Biocides"
] |
22,435,314 | https://en.wikipedia.org/wiki/Pasqual%20Maragall%20Foundation | The Pasqual Maragall Foundation is a private, non-profit foundation dedicated to scientific research of Alzheimer's disease. It was founded in April 2008 in Barcelona as a result of the public commitment of Pasqual Maragall, former Mayor of Barcelona and former President of the Generalitat de Catalunya, who had been diagnosed with this neurodegenerative disease in 2007. Its headquarters are located in the Ciutadella Campus of the Pompeu Fabra University in Barcelona.
Private and independent, the Pasqual Maragall Foundation counts on the economic support of partner enterprises and a network of associates and donors that make the project viable. As for its governing bodies, Dr. Jordi Camí is the director of the Foundation, Diana Garrigosa is the president, and Pasqual Maragall is the honorary president.
Objectives
The Foundation’s objective is to promote scientific research in the field of Alzheimer’s disease, related neurodegenerative diseases and neuroscience in general. It also aims to provide technical support and advisory services and to transfer its knowledge in its areas of expertise. A further line of action included in its foundational mission is disseminating the results of its scientific activities and involving society in the knowledge obtained.
The Barcelonaβeta Brain Research Center (BBRC)
The Barcelonaβeta Brain Research Center (BBRC) is a research center dedicated to the prevention of Alzheimer’s disease and the study of the cognitive functions affected in healthy and pathological aging. The Pasqual Maragall Foundation, with the support of Pompeu Fabra University, created it in 2012, and it is currently directed by Dr. Arcadi Navarro.
The center’s main activity is carried out in the Alzheimer’s Prevention Program, led by Dr. Jose Luís Molinuevo. The program focuses on the pre-clinical phase of the disease, characterized by a series of changes in the brain that can start up to 20 years before the onset of symptoms, and on the prodromal phase, in which the first symptoms of cognitive impairment appear but the affected person remains independent on a day-to-day basis. The program is structured into two research groups that collaborate closely from clinical, cognitive, genetic, biomarker and neuroimaging perspectives.
Alfa Study: To identify the early physiopathological events in Alzheimer’s disease and develop primary and secondary prevention programs, the BBRC launched the Alfa Study together with the Pasqual Maragall Foundation and with the support of “la Caixa”. The Alfa Study is a research platform made up of 2,700 participants without cognitive alterations, dedicated to the early detection and prevention of Alzheimer’s disease. The participants are between 45 and 71 years old and are mostly descendants of people with Alzheimer’s disease, so the cohort is enriched in genetic factors related to the disease.
Research Projects: The BBRC has several international studies and collaborations, mostly dedicated to the early detection and prevention of Alzheimer’s disease. The Alfa Study+, the Barcelonaβeta Dementia Prevention Research Clinic, Alfa Genetics, the European Prevention of Alzheimer’s Dementia (EPAD), the Amyloid Imaging to Prevent Alzheimer’s Disease (AMYPAD) and TRIBEKA are some of the research projects carried out.
Clinical trials: The BBRC works with the pharmaceutical industry and with public-private projects in Alzheimer’s clinical research in order to test drugs intended to prevent or delay the onset of the disease. Clinical trials for the prevention of Alzheimer’s disease with companies such as Novartis, Araclon Biotech and Janssen are being or have been carried out in its facilities.
The BBRC shares its headquarters with the Pasqual Maragall Foundation at 30 Wellington Street in Barcelona, on the Ciutadella Campus of Pompeu Fabra University. Inaugurated in 2016, its facilities include a state-of-the-art 3T magnetic resonance scanner dedicated exclusively to research, and the personnel and equipment necessary to carry out clinical trials in human research. The BBRC’s Neuroimaging Platform offers the scientific community a comprehensive, personalized service for research projects involving the acquisition, management and processing of brain images by magnetic resonance.
References
External links
Barcelona Beta
Medical research institutes in Spain
Biomedical research foundations
Catalonia | Pasqual Maragall Foundation | [
"Engineering",
"Biology"
] | 889 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
22,437,570 | https://en.wikipedia.org/wiki/Myrmecotrophy | Myrmecotrophy is the ability of plants to obtain nutrients from ants, a form of mutualism. This behaviour promotes the colonisation of harsh environments by vegetation. In myrmecophytes such as Hydnophytum and Myrmecodia, the dead remains of insects discarded by the ants are absorbed through lenticular warts. The ants in turn benefit from a secure location in which to form their colony. The pitcher plant Nepenthes bicalcarata obtains an estimated 42% of its total foliar nitrogen from ant waste.
References
Myrmecology
Plant physiology | Myrmecotrophy | [
"Biology"
] | 148 | [
"Plant physiology",
"Plants"
] |
22,438,135 | https://en.wikipedia.org/wiki/Transgenic%20hydra | Cnidarians such as Hydra have become attractive model organisms for studying the evolution of immunity. For a long time, however, stably transgenic animals could not be generated, severely limiting the functional analysis of genes. An important technical breakthrough in the field was therefore the development of a procedure for generating stably transgenic lines by embryo microinjection.
Uses
Hydra polyps are small and transparent which makes it possible to trace single cells in vivo. In addition, transgenic Hydra provide a ready system for generating gain-of-function phenotypes. With the use of transgenes producing dominant-negative versions of proteins, one should be able to obtain loss-of-function phenotypes as well.
Current technology allows generation of reporter constructs using promoters of various Hydra genes fused to fluorescent proteins.
Since transgenic Hydra lines have become an important tool to dissect molecular mechanisms of development, a “Hydra Transgenic Facility” has been established at the Christian-Albrechts-University of Kiel (Germany).
References
Wittlieb J, Khalturin K, Lohmann JU, Anton-Erxleben F and Bosch TCG (2006): Transgenic Hydra allow in vivo tracking of individual stem cells during morphogenesis. Proc. Natl. Acad. Sci. USA 103;16: 6208-6211
Khalturin K, Anton-Erxleben F, Milde S, Plötz C, Wittlieb J, Hemmrich G and Bosch TCG (2007): Transgenic stem cells in Hydra reveal an early evolutionary origin for key elements controlling self-renewal and differentiation. Developmental Biology, Volume 309, Issue 1, Pages 32–44
Siebert S, Anton-Erxleben F and Bosch TCG (2008): Cell type complexity in the basal metazoan Hydra is maintained by both stem cell based mechanisms and transdifferentiation. Dev. Biol. 313: 13-24
Milde S, G Hemmrich, F Anton-Erxleben, K Khalturin, J Wittlieb, and TCG Bosch (2009): Characterization of taxonomically-restricted genes in a phylum-restricted cell type. Genome Biol. 10(1):R8
External links
Transgenic Hydra facility at the University of Kiel (Germany)
Genetically modified organisms
Molecular biology
Hydridae | Transgenic hydra | [
"Chemistry",
"Engineering",
"Biology"
] | 506 | [
"Biochemistry",
"Genetic engineering",
"Genetically modified organisms",
"Molecular biology"
] |
26,788,131 | https://en.wikipedia.org/wiki/Clinical%20and%20Translational%20Science | Clinical and Translational Science is a bimonthly peer-reviewed open-access medical journal covering translational medicine. It is published by Wiley-Blackwell and is an official journal of the American Society for Clinical Pharmacology and Therapeutics. The journal was established in 2008 and the editor-in-chief is John A. Wagner (Cygnal Therapeutics).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, its 2023 impact factor is 3.1.
References
External links
American Society for Clinical Pharmacology and Therapeutics
General medical journals
Wiley-Blackwell academic journals
Academic journals established in 2008
Bimonthly journals
English-language journals
Translational medicine
Creative Commons Attribution-licensed journals | Clinical and Translational Science | [
"Biology"
] | 154 | [
"Translational medicine"
] |
26,789,010 | https://en.wikipedia.org/wiki/Reliability%2C%20availability%2C%20maintainability%20and%20safety | In engineering, reliability, availability, maintainability and safety (RAMS) is used to characterize a product or system:
Reliability: Ability to perform a specific function; may be given as design reliability or operational reliability
Availability: Ability to keep a functioning state in the given environment (a common quantification is sketched below)
Maintainability: Ability to be timely and easily maintained (including servicing, inspection and check, repair and/or modification)
Safety: Ability not to harm people, the environment, or any assets during a whole life cycle.
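As a minimal illustration of the availability item above, the sketch below uses the common steady-state expression A = MTBF / (MTBF + MTTR). This formula and the example figures are conventional assumptions added here for illustration, not definitions taken from any particular RAMS standard.

```python
# Steady-state ("inherent") availability: fraction of time an item is functioning.
# The MTBF/MTTR values below are made-up example numbers.

def availability(mtbf_hours, mttr_hours):
    """A = MTBF / (MTBF + MTTR), with both times in the same unit."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(availability(mtbf_hours=1000.0, mttr_hours=2.0))  # ~0.998, i.e. 99.8 %
```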
See also
Systems engineering
Dependability
Failure mode
Failure rate
Failure mode, effects, and criticality analysis (FMECA)
Hazard analysis and critical control points
High availability
Risk assessment
Reliability-centered maintenance
Safety instrumented system
Safety integrity level
Reliability, availability and serviceability (RAS)
Fault injection
References
Reliability engineering | Reliability, availability, maintainability and safety | [
"Engineering"
] | 158 | [
"Systems engineering",
"Reliability engineering"
] |
26,789,678 | https://en.wikipedia.org/wiki/AllegroGraph | AllegroGraph is a closed source triplestore which is designed to store RDF triples, a standard format for Linked Data.
It also operates as a document store designed for storing, retrieving and managing document-oriented information, in JSON-LD format.
AllegroGraph is currently in use in commercial projects and a US Department of Defense project. It is also the storage component for the TwitLogic project that is bringing the Semantic Web to Twitter data.
Implementation
AllegroGraph was developed to meet W3C standards for the Resource Description Framework, so it is properly considered an RDF database. It is a reference implementation for the SPARQL protocol. SPARQL is a standard query language for linked data, serving the same purposes for RDF databases that SQL serves for relational databases.
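For readers unfamiliar with the triple/SPARQL model, the following sketch illustrates it with rdflib, a generic in-memory RDF library for Python. It is not AllegroGraph or its client API, and the example.org URIs and data are invented purely for the demonstration.

```python
# Illustration only: a few RDF triples and a SPARQL query run with rdflib,
# an in-memory RDF library -- not AllegroGraph's own client.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, EX.knows, EX.bob))           # subject, predicate, object
g.add((EX.alice, EX.name, Literal("Alice")))

# SPARQL plays the role for RDF data that SQL plays for relational databases.
results = g.query(
    "SELECT ?who WHERE { <http://example.org/alice> <http://example.org/knows> ?who }"
)
for row in results:
    print(row[0])  # http://example.org/bob
```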
Franz Inc. is the developer of AllegroGraph. It also develops Allegro Common Lisp, an implementation of Common Lisp, a dialect of Lisp. The functionality of AllegroGraph is made available through Java, Python, Common Lisp and other APIs.
The first version of AllegroGraph was made available at the end of 2004.
Languages
AllegroGraph has client interfaces for Java, Python, Ruby, Perl, C#, Clojure, and Common Lisp. The product is available for Windows, Linux, and Mac OS X platforms, supporting 32 or 64 bits.
For query languages, besides SPARQL, AllegroGraph also supports Prolog and JavaScript.
References
External links
Archived released
Practical Semantic Web and Linked Data Applications — a book by Mark Watson
Graph databases
Triplestores
Ontology (information science)
Proprietary software
Common Lisp (programming language) software | AllegroGraph | [
"Mathematics"
] | 344 | [
"Graph databases",
"Mathematical relations",
"Graph theory"
] |
26,791,094 | https://en.wikipedia.org/wiki/Scavenger%20resin | Scavenger resins are polymers (resins) with bound functional groups that react with specific by-products, impurities, or excess reagents produced in a reaction. Polymer-bound functional groups permit the use of many different scavengers, as the functional groups are confined within a resin or are simply bound to the solid support of a bead. Put simply, the functional groups of one scavenger react minimally with the functional groups of another.
Applications
Employment of scavenger resins has become increasingly popular in solution-phase combinatorial chemistry. Used primarily in the synthesis of medicinal drugs, solution-phase combinatorial chemistry allows for the creation of large libraries of structurally related compounds. When purifying a solution, many approaches can be taken. In general chemical synthesis laboratories, a number of traditional purification techniques are used instead of scavenger resins. Whether or not scavenger resins are used often depends on the quantity of product desired, the time available to produce it, and the intended use of the product. Some of the advantages and disadvantages of using scavenger resins for purification are described later. Traditional methods of purifying these compounds become time-consuming and do not always produce entirely pure products. The ability to tailor a scavenger resin allows for significantly reduced purification times and purer products. Furthermore, the use of scavenger resins allows the product to remain in solution while the reaction is monitored. Conversely, many scavenger resins must be used in large amounts to purify a given product, presenting practical purification issues. When discussing the use of scavenger resins it is also important to consider the different types of solid-support "beads" that hold the selected functional group. These polymer beads are most often described in two ways, lightly crosslinked and highly crosslinked, and the solid support is chosen at the preference of the chemist.
Lightly crosslinked resins
Lightly crosslinked refers to the loosely woven polymer portion of the scavenger. This type of resin becomes swollen in a particular solvent, allowing an impurity to react with a specified functional group. In many cases a single solvent is not sufficient to expand the resin, and a second solvent must be added. An example of such a secondary solvent, or co-solvent, is tetrahydrofuran (THF). Lightly crosslinked resins typically contain 1–3% divinylbenzene.
Highly crosslinked resins
Highly crosslinked resins typically swell much less than lightly crosslinked ones. The property that allows these resins to work efficiently lies in their porosity: the reacting compound can diffuse through the porous layer of the resin to reach the scavenger's functional group. These resins are used in situations where swelling might create a physical barrier to purification of the reaction. They contain a much higher content of divinylbenzene.
Commercial use
Organic scavenger resins have been used commercially in water filters since as early as 1997. As an alternative to reverse osmosis, organic anion resins (scavenger resins) have been used to remove impurities from drinking water. These resins are able to remove negatively charged molecules in water, such as bicarbonates, sulfates, and nitrates. It has been estimated that 60–80% of organic impurities in water may be removed using these methods.
Advantages
Rapid purification time: Products can be purified in short periods of time, relative to traditional techniques
Product remains in solution: The product is not removed from solution, as in crystallization techniques.
Reaction may be monitored: The purification process is controlled
Traditional purity techniques may be employed
Can be used in excess
Removed by filtration
Allow for the synthesis of complex compound libraries
Can be customized: Different scavenger resins employed for different impurities.
High solvent compatibility (can be used with many solvents)
Disadvantages
Large quantities are in some cases required to remove impurities
May pose physical barriers in small-scale reactions by "clogging up" the reaction vessel
Dependent upon reagent to be removed
See also
Benzaldehyde
Dynamic combinatorial chemistry
Drug discovery
High-throughput screening
Chemical library
List of purification methods in chemistry
References
External links
High loading polymer reagents based on polycationic Ultraresins. Polymer-supported reductions and oxidations with increased efficiency
Scavenger Resins in Solution-Phase CombiChem
High-loading Scavenger Resins for Combinatorial Chemistry
Combichem Scavenging
Polymer-Bound Scavengers
Resins with Functional Groups as Scavengers
Scavenger Resins and Polymer-Bound Reagents
Scavenger Resins
Combinatorial Chemistry Terms
Polymers
Reagents for organic chemistry
Chemical reactions | Scavenger resin | [
"Chemistry",
"Materials_science"
] | 998 | [
"Polymers",
"nan",
"Polymer chemistry",
"Reagents for organic chemistry"
] |
26,791,258 | https://en.wikipedia.org/wiki/Reciprocal%20length | Reciprocal length or inverse length is a quantity or measurement used in several branches of science and mathematics, defined as the reciprocal of length.
Common units used for this measurement include the reciprocal metre or inverse metre (symbol: m−1) and the reciprocal centimetre or inverse centimetre (symbol: cm−1).
In optics, the dioptre is a unit equivalent to reciprocal metre.
List of quantities
Quantities measured in reciprocal length include:
absorption coefficient or attenuation coefficient, in materials science
curvature of a line, in mathematics
gain, in laser physics
magnitude of vectors in reciprocal space, in crystallography
more generally any spatial frequency e.g. in cycles per unit length
optical power of a lens, in optics
rotational constant of a rigid rotor, in quantum mechanics
wavenumber, or magnitude of a wavevector, in spectroscopy
density of a linear feature in hydrology and other fields; see kilometre per square kilometre
surface area to volume ratio
Measure of energy
In some branches of physics, the universal constants c, the speed of light, and ħ, the reduced Planck constant, are treated as being unity (i.e. that c = ħ = 1), which leads to mass, energy, momentum, frequency and reciprocal length all having the same unit. As a result, reciprocal length is used as a measure of energy. The frequency of a photon yields a certain photon energy, according to the Planck–Einstein relation, and the frequency of a photon is related to its spatial frequency via the speed of light. Spatial frequency is a reciprocal length, which can thus be used as a measure of energy, usually of a particle. For example, the reciprocal centimetre, cm−1, is an energy unit equal to the energy of a photon with a wavelength of 1 cm. That energy amounts to approximately 1.986×10⁻²³ J or 1.24×10⁻⁴ eV.
The energy is inversely proportional to the size of the unit whose reciprocal is used, and is proportional to the number of reciprocal length units. For example, in terms of energy, one reciprocal metre equals 10⁻² (one hundredth) as much as one reciprocal centimetre. Five reciprocal metres are five times as much energy as one reciprocal metre.
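The conversion described above can be written as E = hc·(1/λ). The short Python sketch below evaluates it for 1 cm−1 using the exact SI values of the constants, purely as a worked illustration of the figures quoted above.

```python
# Energy corresponding to a reciprocal length: E = h * c * (1/lambda).
h = 6.62607015e-34      # Planck constant, J*s (exact SI value)
c = 299_792_458         # speed of light, m/s (exact SI value)
eV = 1.602176634e-19    # joules per electronvolt (exact SI value)

def energy_from_reciprocal_length(k_per_m):
    """Photon energy in joules for a spatial frequency given in reciprocal metres."""
    return h * c * k_per_m

E = energy_from_reciprocal_length(100)   # 1 cm^-1 = 100 m^-1
print(E, "J")        # ~1.99e-23 J
print(E / eV, "eV")  # ~1.24e-4 eV
```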
See also
Lineic quantity
Reciprocal area
Reciprocal second
Reciprocal volume
Further reading
SI derived units
Length-specific quantities | Reciprocal length | [
"Physics",
"Mathematics"
] | 456 | [
"Quantity",
"Intensive quantities",
"Physical quantities",
"Length-specific quantities"
] |
26,794,949 | https://en.wikipedia.org/wiki/Kline%E2%80%93Fogleman%20airfoil | The Kline–Fogleman airfoil or KF airfoil is a simple airfoil design with single or multiple steps along the length of the wing. It was originally devised in the 1960s for paper airplanes.
In the 21st century the KF airfoil has found renewed interest among hobbyist builders of radio-controlled aircraft, due to its simplicity of construction. But it has not been adopted for full-size aircraft capable of carrying a pilot, passengers, or other substantial payloads.
History
The KF airfoil was designed by Richard Kline and Floyd Fogleman.
In the early 1960s, Richard Kline wanted to make a paper airplane that could handle strong winds, climb high, level off by itself and then enter a long downwards glide. After many experiments he was able to achieve this goal. He presented the paper airplane to Floyd Fogleman who saw it fly and resist stalling. The two men then filed for a patent on the stepped airfoil.
Further development resulted in two patents and a family of airfoils known as the KF airfoil and KFm airfoils (for Kline–Fogleman modified). The two patents, US Patent # 3,706,430 and US Patent # 4,046,338, refer to the introduction of a step on either the bottom (KFm1) or the top (KFm2) of an airfoil, or on both the top and bottom (KFm4). Variations include airfoils with two steps on the top (KFm3), or two steps on the top and one on the bottom (KFm7).
The purpose of the step, it is claimed, is to allow some of the displaced air to fall into a pocket behind the step and become part of the airfoil shape as a trapped vortex or vortex attachment. This purportedly prevents separation and maintains airflow over the surface of the airfoil.
Reception
Time published an April 2, 1973 article, The Paper-Plane Caper, about the paper airplane and its Kline–Fogleman airfoil.
Also in 1973, CBS 60 Minutes did a 15-minute segment on the KF airfoil. CBS reran the show in 1976.
In 1985, Kline wrote a book entitled The Ultimate Paper Airplane. To publicize the book, Kline traveled to Kill Devil Hills, NC, the site of the Wright Brothers' first manned powered flight. A crew from Good Morning America filmed the event. The longest flight by Kline with his paper airplane traveled .
Independent scientific testing
In 1974, a NASA-funded study prompted by Kline and Fogleman's claims and the resulting national coverage found the airfoil to have a worse lift-to-drag ratio than a flat-plate airfoil in wind tunnel tests.
In the 1990s, after the original patents expired, researchers returned to the topic of stepped wings. A 1998 study by Fathi Finaish and Stephen Witherspoon at the University of Missouri tested numerous step configurations in a wind tunnel. While many step configurations made wing performance worse, promising results were achieved with backward-facing steps on the lower surface of the wing, in some cases showing considerable enhancement in lift without a significant drag penalty. However, the researchers found that a single configuration could not be the best solution at every angle of attack and flight speed; instead, they concluded that "vastly different configurations may be needed during a single maneuver." The idea works, Finaish and Witherspoon concluded, but only with active automated reconfiguration of the shape of the steps during flight.
A 2008 study by Fabrizio De Gregorio and Giuseppe Fraioli at CIRA and the University of Rome in Italy pursued this idea further. The model aerofoils used in their wind tunnel tests were equipped with numerous small holes through which air could be blown or sucked in an active way. They concluded that the trapped vortex formed by a cavity or step could not be held in place without such active control. Merely relying passively on wing shape wasn't enough – the vortex would detach, possibly yielding worse characteristics than the original unstepped airfoil. But when active controls were used to keep the vortex stably in place, they found the results "really encouraging".
The case study conducted as part of this research focused on the RQ-2 Pioneer UAV in a stepped-airfoil configuration, comparing its aerodynamic characteristics with those of the conventional NACA 4415 airfoil originally used on this aircraft. The main objective was to identify and outline a step schedule for the flight envelope of the Pioneer using a stepped-airfoil configuration while applying active flow control, in order to obtain enhanced aerodynamic performance over the original NACA 4415 airfoil and hence improve flight performance characteristics such as range and endurance.
Applications of the KF airfoil today
Poor lift-to-drag ratio performance in wind tunnel testing has meant that to date the KF airfoil has not been used on any full-size aircraft. But the KF airfoil and derivative 'stepped' airfoils have in recent years gained a following in the world of foam-constructed radio-controlled model aircraft. The low Reynolds numbers involved allow the stepped airfoils to produce a significant amount of lift for the drag incurred, making them increasingly popular among RC hobbyists.
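To give a sense of what "low Reynolds numbers" means here, the sketch below compares order-of-magnitude Reynolds numbers for a small foam model and a light full-size aircraft. The chord lengths, speeds and air properties are illustrative assumptions, not values from the studies cited above.

```python
# Order-of-magnitude Reynolds numbers, Re = rho * V * L / mu, for sea-level air.
# The flight speeds and chord lengths below are assumed example values.

def reynolds(velocity_m_s, chord_m, rho=1.225, mu=1.81e-5):
    """Chord-based Reynolds number for air at roughly sea-level conditions."""
    return rho * velocity_m_s * chord_m / mu

print(f"{reynolds(10, 0.20):.2e}")   # foam RC model, 0.20 m chord at 10 m/s: ~1.4e5
print(f"{reynolds(50, 1.50):.2e}")   # light aircraft, 1.50 m chord at 50 m/s: ~5.1e6
```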
The simple KF airfoil shape lends itself well to construction in sheets of various plastic foams, typically expanded polystyrene (EPS) or expanded polypropylene (EPP). The resulting stepped wing can have improved performance and flying characteristics compared to the simpler 'flat plate' wing used in some radio-controlled models. The airfoils illustrated in this article are examples of those used in radio control foam models.
The first man-carrying KF airfoil-based aircraft was successfully flown in 1987 by Richard Wood in Canada (The Recreational Flyer Magazine, November/December 1991). The airfoil was tested on a Vector 600 ultralight; top speed was higher and the stall slower (The Recreational Flyer Magazine, November/December 1991).
The KF airfoil has been applied to the Darrieus wind turbine using a trapped vortex. Experiments have found that the KF rotor demonstrates higher static and dynamic torque in low-Reynolds-number applications and better performance for wind speeds below 0.8 m/s. It is seen as a potential solution for self-starting in the Darrieus wind turbine.
The first man-carrying flight of the Kline–Fogleman airfoil took place on July 7, 1987, in Essex, Ontario, Canada, piloted by Richard (Dick) Wood. This is the first and only known attempt at putting the airfoil on a full-sized aircraft. After discussion with the inventor Dick Kline it was decided to use two notches per wing, which had the beneficial side effect of keeping the airflow attached to the wing at high angles of attack. The Vector 600 ultralight was re-covered in aircraft fabric and had the notches built in but covered over. The first flights were conducted with the notches covered over to produce a regular airfoil; subsequent flights had the notches cut open and exposed. An increase in top speed and a lower stall speed were noted, and regular flight did not seem otherwise affected. https://www.youtube.com/watch?v=A1zy57S5DUQ
Patents
AIRFOIL FOR AIRCRAFT, filed March 17, 1970, issued December 1972
Airfoil for aircraft having improved lift generating device, filed October 14, 1975, issued September 6, 1977
See also
Lift (force)
Plasma actuator
Radio-controlled glider
Simple Plastic Airplane Design
Wing
References
External links
KF Airfoil Testing
YouTube videos of enthusiasts building and flying KF airfoil-based craft
Interview with Richard Kline about how he came up with the design
Pictures of aircraft using the KF airfoil
Fancy Flights, by Scott Morris, a 1984 Omni article about US research into the KF airfoil
Aerodynamics
Aircraft wing design | Kline–Fogleman airfoil | [
"Chemistry",
"Engineering"
] | 1,623 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
8,322,679 | https://en.wikipedia.org/wiki/Cyclopentadienylmolybdenum%20tricarbonyl%20dimer | Cyclopentadienylmolybdenum tricarbonyl dimer is the chemical compound with the formula Cp2Mo2(CO)6, where Cp is C5H5. A dark red solid, it has been the subject of much research although it has no practical uses.
Structure and synthesis
The molecule exists in two rotamers, gauche and anti. The six CO ligands are terminal and the Mo-Mo bond distance is 3.2325 Å. The compound is prepared by treatment of molybdenum hexacarbonyl with sodium cyclopentadienide followed by oxidation of the resulting NaMo(CO)3(C5H5). Other methods have been developed starting with Mo(CO)3(CH3CN)3 instead of Mo(CO)6.
Reactions
Thermolysis of this compound in a hot solution of diglyme (bis(2-methoxyethyl) ether) results in decarbonylation, giving the tetracarbonyl, which has a formal triple bond between the Mo centers (d(Mo–Mo) = 2.448 Å):
(C5H5)2Mo2(CO)6 → (C5H5)2Mo2(CO)4 + 2 CO
The resulting cyclopentadienylmolybdenum dicarbonyl dimer in turn binds a variety of substrates across the metal-metal triple bond.
Related compounds
Cyclopentadienyltungsten tricarbonyl dimer
Cyclopentadienylchromium tricarbonyl dimer
References
Organomolybdenum compounds
Carbonyl complexes
Dimers (chemistry)
Half sandwich compounds
Cyclopentadienyl complexes
Chemical compounds containing metal–metal bonds | Cyclopentadienylmolybdenum tricarbonyl dimer | [
"Chemistry",
"Materials_science"
] | 360 | [
"Half sandwich compounds",
"Cyclopentadienyl complexes",
"Dimers (chemistry)",
"Polymer chemistry",
"Organometallic chemistry"
] |
8,323,492 | https://en.wikipedia.org/wiki/Cyclooctadiene%20rhodium%20chloride%20dimer | Cyclooctadiene rhodium chloride dimer is the organorhodium compound with the formula Rh2Cl2(C8H12)2, commonly abbreviated [RhCl(COD)]2 or Rh2Cl2(COD)2. This yellow-orange, air-stable compound is a widely used precursor to homogeneous catalysts.
Preparation and reactions
The synthesis of [RhCl(COD)]2 involves heating a solution of hydrated rhodium trichloride with 1,5-cyclooctadiene in aqueous ethanol in the presence of sodium carbonate:
2 RhCl3·3H2O + 2 COD + 2 CH3CH2OH + 2 Na2CO3 → [RhCl(COD)]2 + 2 CH3CHO + 8 H2O + 2 CO2 + 4 NaCl
[RhCl(COD)]2 is principally used as a source of the electrophile "[Rh(COD)]+."
[RhCl(COD)]2 + 2n L → 2 [LnRh(COD)]+Cl− (where L = PR3, alkene, etc. and n = 2 or 3)
In this way, chiral phosphines can be attached to Rh. The resulting chiral complexes are capable of asymmetric hydrogenation. A related but still more reactive complex is chlorobis(cyclooctene)rhodium dimer. The dimer reacts with a variety of Lewis bases (L) to form adducts with the stoichiometry RhCl(L)(COD).
Structure
The molecule consists of a pair of square planar Rh centers bound to a 1,5-cyclooctadiene and two chloride ligands that are shared between the Rh centers. The Rh2Cl2 core is also approximately planar, in contrast to the highly bent structure of cyclooctadiene iridium chloride dimer where the dihedral angle is 86°.
References
External links
Organorhodium compounds
Homogeneous catalysis
Cyclooctadiene complexes
Dimers (chemistry)
Chloro complexes
Rhodium(I) compounds | Cyclooctadiene rhodium chloride dimer | [
"Chemistry",
"Materials_science"
] | 463 | [
"Catalysis",
"Homogeneous catalysis",
"Dimers (chemistry)",
"Polymer chemistry"
] |
8,324,345 | https://en.wikipedia.org/wiki/Minimal%20subtraction%20scheme | In quantum field theory, the minimal subtraction scheme, or MS scheme, is a particular renormalization scheme used to absorb the infinities that arise in perturbative calculations beyond leading order, introduced independently by Gerard 't Hooft and Steven Weinberg in 1973. The MS scheme consists of absorbing only the divergent part of the radiative corrections into the counterterms.
In the similar and more widely used modified minimal subtraction, or MS-bar scheme (written as MS with a bar over it), one absorbs the divergent part plus a universal constant that always arises along with the divergence in Feynman diagram calculations into the counterterms. When using dimensional regularization, i.e. d = 4 − ε, it is implemented by rescaling the renormalization scale: μ² → μ² e^(γE)/(4π), with γE the Euler–Mascheroni constant.
References
Renormalization group | Minimal subtraction scheme | [
"Physics"
] | 175 | [
"Statistical mechanics stubs",
"Physical phenomena",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Statistical mechanics",
"Quantum physics stubs"
] |
8,324,507 | https://en.wikipedia.org/wiki/Renormalon | In physics, a renormalon (a term suggested by 't Hooft) is a particular source of divergence seen in perturbative approximations to quantum field theories (QFT). When a formally divergent series in a QFT is summed using Borel summation, the associated Borel transform of the series can have singularities as a function of the complex transform parameter. The renormalon is a possible type of singularity arising in this complex Borel plane, and is a counterpart of an instanton singularity. Associated with such singularities, renormalon contributions are discussed in the context of quantum chromodynamics (QCD) and usually have a power-like form (Λ/Q)ᵖ as functions of the momentum Q (here Λ is the momentum cut-off). They are contrasted with the usual logarithmic effects like ln(Λ/Q).
Brief history
Perturbation series in quantum field theory are usually divergent, as was first indicated by Freeman Dyson. According to the Lipatov method, the N-th order contribution of perturbation theory to any quantity can be evaluated at large N in the saddle-point approximation for functional integrals and is determined by instanton configurations. This contribution usually behaves as N! in its dependence on N and is frequently associated with approximately the same number (N!) of Feynman diagrams. Lautrup has noted that there exist individual diagrams giving approximately the same contribution. In principle, it is possible that such diagrams are automatically taken into account in Lipatov's calculation, because its interpretation in terms of diagrammatic technique is problematic. However, 't Hooft put forward a conjecture that Lipatov's and Lautrup's contributions are associated with different types of singularities in the Borel plane, the former with instanton ones and the latter with renormalon ones. The existence of instanton singularities is beyond any doubt, while the existence of renormalon ones has never been proved rigorously in spite of numerous efforts. Among the essential contributions one should mention the application of the operator product expansion, as was suggested by Parisi.
Recently a proof was suggested for the absence of renormalon singularities in φ⁴ theory, and a general criterion for their existence was formulated in terms of the asymptotic behavior of the Gell-Mann–Low function β(g). Analytical results for the asymptotics of β(g) in φ⁴ theory and QED indicate the absence of renormalon singularities in these theories.
References
Quantum chromodynamics | Renormalon | [
"Physics"
] | 497 | [
"Quantum mechanics",
"Quantum physics stubs"
] |
8,327,127 | https://en.wikipedia.org/wiki/Dense-in-itself | In general topology, a subset A of a topological space X is said to be dense-in-itself or crowded
if A has no isolated point.
Equivalently, A is dense-in-itself if every point of A is a limit point of A.
Thus A is dense-in-itself if and only if A ⊆ A′, where A′ is the derived set of A.
A dense-in-itself closed set is called a perfect set. (In other words, a perfect set is a closed set without isolated point.)
The notion of dense set is distinct from dense-in-itself. This can sometimes be confusing, as "X is dense in X" (always true) is not the same as "X is dense-in-itself" (no isolated point).
Examples
A simple example of a set that is dense-in-itself but not closed (and hence not a perfect set) is the set of irrational numbers (considered as a subset of the real numbers). This set is dense-in-itself because every neighborhood of an irrational number x contains at least one other irrational number y ≠ x. On the other hand, the set of irrationals is not closed because every rational number lies in its closure. Similarly, the set of rational numbers is also dense-in-itself but not closed in the space of real numbers.
The above examples, the irrationals and the rationals, are also dense sets in their topological space, namely ℝ. As an example of a set that is dense-in-itself but not dense in its topological space, consider ℚ ∩ [0, 1]. This set is not dense in ℝ but is dense-in-itself.
Properties
A singleton subset of a space can never be dense-in-itself, because its unique point is isolated in it.
The dense-in-itself subsets of any space are closed under unions. In a dense-in-itself space, they include all open sets. In a dense-in-itself T1 space they include all dense sets. However, spaces that are not T1 may have dense subsets that are not dense-in-itself: for example, in the dense-in-itself space X = {1, 2} with the indiscrete topology, the set A = {1} is dense, but is not dense-in-itself.
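For finite spaces the definition can be checked directly. The following Python sketch, added here purely as an illustration, tests whether a subset has an isolated point and reproduces the indiscrete-topology example above.

```python
def is_dense_in_itself(opens, A):
    """Return True if subset A of a finite topological space has no isolated point.

    `opens` is the collection of open sets of the space. A point x of A is
    isolated in A exactly when some open set U satisfies U & A == {x}.
    """
    A = frozenset(A)
    for x in A:
        if any(frozenset(U) & A == {x} for U in opens):
            return False  # x is an isolated point of A
    return True

# Indiscrete topology on X = {1, 2}: the only open sets are {} and X itself.
opens = [frozenset(), frozenset({1, 2})]
print(is_dense_in_itself(opens, {1, 2}))  # True  -- the whole space is crowded
print(is_dense_in_itself(opens, {1}))     # False -- 1 is isolated in the subset {1}
```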
The closure of any dense-in-itself set is a perfect set.
In general, the intersection of two dense-in-itself sets is not dense-in-itself. But the intersection of a dense-in-itself set and an open set is dense-in-itself.
See also
Nowhere dense set
Glossary of topology
Dense order
Notes
References
Topology | Dense-in-itself | [
"Physics",
"Mathematics"
] | 516 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
27,138,990 | https://en.wikipedia.org/wiki/Fabasoft%20Folio%20Cloud | Fabasoft Folio Cloud is a cloud computing service developed by Fabasoft in Linz, Austria announced in April 2010. It focuses on enabling secure collaboration and is web-based with iOS and Android apps for use on mobile devices. The software is object-oriented and offers a wide range of sophisticated functionality for document management and global collaboration, which can be extended by specialist cloud applications. Fabasoft places a large amount of focus on usability and accessibility.
Security
Folio Cloud is certified and tested according to the following security standards: ISO 27001:2005, ISO 20000, ISO 9001, and SAS 70 Type II. Fabasoft was also the first software manufacturer to receive MoReq2 certification – the European standard for records management.
All Folio Cloud data is saved in data centers in Europe, where European standards for security, reliability and data protection apply. Cloud data is kept permanently synchronized in two mirrored data centers in Austria so that a failover is possible at any time. A backup of the data is constantly maintained in a third data center. Further data center locations are being integrated in Germany and Switzerland, and in the future users will be able to decide at which data center location their data is stored.
Folio Cloud is based on open source and does not contain any US-owned software. This prevents access to European cloud data by US authorities under the “US Patriot Act”.
All communication and transfer of data within Folio Cloud is encrypted via SSL/TLS. Cloud access is protected by secure forms of authentication including two factor authentication with Motoky or SMS and login via digital ID. Folio Cloud has integrated the new German digital ID card, the Austrian Citizen Card with mobile signature and the SuisseID as forms of digital authentication. Fabasoft is active in the support of the advancement of European cloud infrastructure.
Mobile cloud
Folio Cloud supports all common web browsers, different operating systems and end user devices. Folio Cloud apps are also available on Google Play and the Apple App Store for use on Android and iOS devices. Folio Cloud supports open standards such as WebDAV, CalDAV and CMIS.
Apps
Apps are online applications that extend the functionality of Folio Cloud to fulfill concrete use cases and needs. All Folio Cloud Apps are available in the Fabasoft Cloud App Store.
Fabasoft held its first Cloud Developer Conference (CDC) from December 15–17, 2010 as a free event for Cloud developers. Since then the event has taken place twice a year, once in the summer and once in the winter.
References
External links
Official Folio Cloud Website
Cloud platforms
Centralized computing
Technology companies of Austria | Fabasoft Folio Cloud | [
"Technology"
] | 535 | [
"Cloud platforms",
"Computing platforms",
"IT infrastructure",
"Centralized computing",
"Computer systems"
] |
27,139,928 | https://en.wikipedia.org/wiki/Antozonite | Antozonite (historically known as Stinkspat, Stinkfluss, Stinkstein, Stinkspar and fetid fluorite) is a radioactive fluorite variety first found in Wölsendorf, Bavaria, in 1841, and named in 1862.
It is characterized by the presence of multiple inclusions containing elemental fluorine; when the crystals are crushed or broken, the elemental fluorine is released. It was postulated that beta radiation given off by uranium inclusions continuously breaks down calcium fluoride into calcium and fluorine atoms. The fluorine atoms combine to produce difluoride anions and, upon losing the extra electrons at a defect, elemental fluorine is formed. The fluorine subsequently reacts with atmospheric oxygen and water vapor, producing ozone (whose characteristic smell, originally mistaken for a hypothetical substance called antozone, is responsible for the mineral's name) and hydrogen fluoride.
References
External links
Antozonite at Mindat.org.
Halide minerals
Fluorite
Radiation effects | Antozonite | [
"Physics",
"Materials_science",
"Engineering"
] | 211 | [
"Physical phenomena",
"Materials science",
"Radiation",
"Condensed matter physics",
"Radiation effects"
] |
27,142,068 | https://en.wikipedia.org/wiki/Linda-like%20systems | Linda-like systems are parallel and distributed programming models that use unstructured collections of tuples as a communication mechanism between different processes.
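The following toy Python sketch, added purely for illustration, shows the flavour of the tuple-space model: processes communicate only by depositing tuples into a shared, unstructured bag and withdrawing tuples that match a template. It is a minimal single-process demonstration, not a real Linda implementation, and the method names are chosen for readability rather than taken from any particular system.

```python
import threading

class TupleSpace:
    """Toy tuple space: a shared bag of tuples with deposit/withdraw operations."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Deposit a tuple into the space (Linda's 'out')."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def take(self, template):
        """Remove and return a tuple matching the template, blocking until one exists.

        None in the template acts as a wildcard (roughly Linda's 'in').
        """
        with self._cond:
            while True:
                for t in self._tuples:
                    if len(t) == len(template) and all(
                        p is None or p == v for p, v in zip(template, t)
                    ):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

space = TupleSpace()
space.out(("sum", 2, 3))
print(space.take(("sum", None, None)))  # ('sum', 2, 3)
```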
Examples
In addition to proper Linda implementations, these include other systems such as the following:
Intel Concurrent Collections (CnC) is a programming model based on "item collections" which resemble tuple spaces, but are single assignment (tuples may not be removed or replaced). Because of this restriction Concurrent Collections has a deterministic execution semantics, but has difficulties with storage deallocation.
References
Parallel computing | Linda-like systems | [
"Technology"
] | 110 | [
"Computing stubs"
] |
27,143,209 | https://en.wikipedia.org/wiki/Human%20Tissue%20%28Scotland%29%20Act%202006 | The Human Tissue (Scotland) Act 2006 (asp 4) is an Act of the Scottish Parliament to consolidate and overhaul previous legislation regarding the handling of human tissue.
It deals with three distinct uses of human tissue: its donation primarily for the purpose of transplantation, but also for research, education or training and audit; its removal, retention and use following a post-mortem examination; and for the purposes of the Anatomy Act 1984 as amended for Scotland by the 2006 Act.
Its counterpart in the rest of the United Kingdom is the Human Tissue Act 2004.
References
External links
Acts of the Scottish Parliament 2006
Tissue engineering
Science and technology in Scotland
Organ transplantation in the United Kingdom
Health law in Scotland | Human Tissue (Scotland) Act 2006 | [
"Chemistry",
"Engineering",
"Biology"
] | 141 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
27,144,045 | https://en.wikipedia.org/wiki/Thiol-yne%20reaction | In organic chemistry, the thiol-yne reaction (also known as alkyne hydrothiolation) is an organic reaction between a thiol () and an alkyne (). The reaction product is an alkenyl sulfide ().
The reaction was first reported in 1949 with thioacetic acid as reagent and rediscovered in 2009. It is used in click chemistry and in polymerization, especially with dendrimers.
This addition reaction is typically facilitated by a radical initiator or UV irradiation and proceeds through a sulfanyl radical species. With monoaddition, a mixture of (E)- and (Z)-alkenes forms. The mode of addition is anti-Markovnikov. The radical intermediate can engage in secondary reactions such as cyclisation. With diaddition the 1,2-disulfide or the 1,1-dithioacetal forms. Reported catalysts for radical additions are triethylborane, indium(III) bromide and AIBN. The reaction is also reported to be catalysed by cationic rhodium and iridium complexes, by thorium and uranium complexes, by rhodium complexes, by caesium carbonate and by gold.
Diphenyl disulfide reacts with alkynes to a 1,2-bis(phenylthio)ethylene. Reported alkynes are ynamides. A photoredox thiol-yne reaction has been reported.
Polymer chemistry
In polymer chemistry, systems have been described based on addition polymerization with 1,4-benzenedithiol and 1,4-diethynylbenzene, in the synthesis of other addition polymer systems in the synthesis of dendrimers, in star polymers, in graft polymerization, block copolymers, and in polymer networks. Another reported application is the synthesis of macrocycles via dithiol coupling.
See also
Click chemistry
Michael addition
Radical addition
Thiol-ene reaction
References
Organic reactions | Thiol-yne reaction | [
"Chemistry"
] | 420 | [
"Organic reactions"
] |
27,146,693 | https://en.wikipedia.org/wiki/Physical%20metallurgy | Physical metallurgy is one of the two main branches of the scientific approach to metallurgy, which considers in a systematic way the physical properties of metals and alloys. It is basically the fundamentals and applications of the theory of phase transformations in metal and alloys. While chemical metallurgy involves the domain of reduction/oxidation of metals, physical metallurgy deals mainly with mechanical and magnetic/electric/thermal properties of metals – as described by solid-state physics.
See also
Extractive metallurgy
References
External links
MIT Ocw (MIT OpenCourseWare) Course on Physical Metallurgy
A series of Lectures by Prof. "Harry" Harshad Bhadeshia, University of Cambridge on the Physical Metallurgy of Steels
Additional teaching materials by Prof. "Harry" Harshad Bhadeshia, University of Cambridge, at the Phase Transformations & Complex
Materials science
Metallurgy | Physical metallurgy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 183 | [
"Metallurgy",
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
23,959,374 | https://en.wikipedia.org/wiki/Multiangle%20light%20scattering | Multi-Angle light scattering (MALS) describes a technique for measuring the light scattered by a sample into a plurality of angles. It is used for determining both the absolute molar mass and the average size of molecules in solution, by detecting how they scatter light. A collimated beam from a laser source is most often used, in which case the technique can be referred to as multiangle laser light scattering (MALLS). The insertion of the word laser was intended to reassure those used to making light scattering measurements with conventional light sources, such as Hg-arc lamps that low-angle measurements could now be made.
Until the advent of lasers and their associated fine beams of narrow width, the width of conventional light beams used to make such measurements prevented data collection at smaller scattering angles. In recent years, since all commercial light scattering instrumentation uses laser sources, this need to mention the light source has been dropped and the term MALS is used throughout.
The "multi-angle" term refers to the detection of scattered light at different discrete angles as measured, for example, by a single detector moved over a range that includes the particular angles selected or an array of detectors fixed at specific angular locations. A discussion of the physical phenomenon related to this static light scattering, including some applications, data analysis methods and graphical representations associated therewith are presented.
Background
The measurement of scattered light from an illuminated sample forms the basis of the so-called classical light scattering measurement. Historically, such measurements were made using a single detector rotated in an arc about the illuminated sample. The first commercial instrument (formally called a "scattering photometer") was the Brice-Phoenix light scattering photometer introduced in the mid-1950s and followed by the Sofica photometer introduced in the late 1960s.
Measurements were generally expressed as scattered intensities or scattered irradiance. Since the collection of data was made as the detector was placed at different locations on the arc, each position corresponding to a different scattering angle, the concept of placing a separate detector at each angular location of interest was well understood, though not implemented commercially until the late 1970s. Multiple detectors having different quantum efficiencies have different responses and hence need to be normalized in this scheme. An interesting system based upon the use of high speed film was developed by Brunsting and Mullaney in 1974. It permitted the entire range of scattered intensities to be recorded on the film with a subsequent densitometer scan providing the relative scattered intensities. The then-conventional use of a single detector rotated about an illuminated sample with intensities collected at specific angles was called differential light scattering after the quantum mechanical term differential cross section, σ(θ) expressed in milli-barns/steradian. Differential cross section measurements were commonly made, for example, to study the structure of the atomic nucleus by scattering from them nucleons, such as neutrons. It is important to distinguish between differential light scattering and dynamic light scattering, both of which are referred to by the initials DLS. The latter refers to a technique that is quite different, measuring the fluctuation of scattered light due to constructive and destructive interference, the frequency being linked to the thermal (Brownian) motion of the molecules or particles in solution or suspension.
A MALS measurement requires a set of ancillary elements. Most important among them is a collimated or focused light beam (usually from a laser source producing a collimated beam of monochromatic light) that illuminates a region of the sample. In modern instruments, the beam is generally plane-polarized perpendicular to the plane of measurement, though other polarizations may be used especially when studying anisotropic particles. Earlier measurements, before the introduction of lasers, were performed using focused, though unpolarized, light beams from sources such as Hg-arc lamps. Another required element is an optical cell to hold the sample being measured. Alternatively, cells incorporating means to permit measurement of flowing samples may be employed. If single-particle scattering properties are to be measured, a means to introduce such particles one-at-a-time through the light beam at a point generally equidistant from the surrounding detectors must be provided.
Although most MALS-based measurements are performed in a plane containing a set of detectors usually equidistantly placed from a centrally located sample through which the illuminating beam passes, three-dimensional versions also have been developed wherein the detectors lie on the surface of a sphere with the sample controlled to pass through its center where it intersects the path of the incident light beam passing along a diameter of the sphere. The former framework is used for measuring aerosol particles while the latter was used to examine marine organisms such as phytoplankton.
The traditional differential light scattering measurement was virtually identical to the currently used MALS technique. Although the MALS technique generally collects multiplexed data sequentially from the outputs of a set of discrete detectors, the earlier differential light scattering measurement also collected data sequentially as a single detector was moved from one collection angle to the next. The MALS implementation is of course much faster, but the same types of data are collected and are interpreted in the same manner. The two terms thus refer to the same concept. For differential light scattering measurements, the light scattering photometer has a single detector whereas the MALS light scattering photometer generally has a plurality of detectors.
Another type of MALS device was developed in 1974 by Salzmann et al. based on a light pattern detector invented by George et al. for Litton Systems Inc. in 1971. The Litton detector was developed for sampling the light energy distribution in the rear focal-plane of a spherical lens for sampling geometric relationships and the spectral density distribution of objects recorded on film transparencies.
The application of the Litton detector by Salzman et al. provided measurement at 32 small scattering angles between 0° and 30°, and averaging over a broad range of azimuthal angles as the most important angles are the forward angles for static light scattering. By 1980, Bartholi et al. had developed a new approach to measuring the scattering at discrete scattering angles by using an elliptical reflector to permit measurement at 30 polar angles over the range 2.5° ≤ θ ≤ 177.5° with a resolution of 2.1°.
The commercialization of multiangle systems began in 1977 when Science Spectrum, Inc. patented a flow-through capillary system for a customized bioassay system developed for the USFDA. The first commercial MALS instrument incorporating 8 discrete detectors was delivered to S.C. Johnson and Son, by Wyatt Technology Company, in 1983, followed in 1984 with the sale of the first 15 detector flow instrument (Dawn-F) to AMOCO. By 1988, a three-dimensional configuration was introduced specifically to measure the scattering properties of single aerosol particles. At about the same time, an underwater device was built to measure the scattered light properties of single phytoplankton. Signals were collected by optical fibers and transmitted to individual photomultipliers. Around December 2001, an instrument was commercialized which measures 7 scattering angles using a CCD detector (BI-MwA: Brookhaven Instruments Corp, Holtsville, NY).
The literature associated with measurements made by MALS photometers is extensive, both in reference to batch measurements of particles/molecules and measurements following fractionation by chromatographic means such as size exclusion chromatography (SEC), reversed phase chromatography (RPC), and field flow fractionation (FFF).
Theory
The interpretation of scattering measurements made at the multiangular locations relies upon some knowledge of the a priori properties of the particles or molecules measured. The scattering characteristics of different classes of such scatterers may be interpreted best by application of an appropriate theory. For example, the following theories are most often applied.
Rayleigh scattering is the simplest and describes elastic scattering of light or other electromagnetic radiation by objects much smaller than the incident wavelength. This type of scattering is responsible for the blue color of the sky during the day and is inversely proportional to the fourth power of wavelength.
The Rayleigh–Gans approximation is a means of interpreting MALS measurements with the assumption that the scattering particles have a refractive index, n1, very close to the refractive index of the surrounding medium, n0. If we set m = n1/n0 and assume that |m − 1| ≪ 1, then such particles may be considered as composed of very small elements, each of which may be represented as a Rayleigh-scattering particle. Thus each small element of the larger particle is assumed to scatter independently of any other.
Lorenz–Mie theory is used to interpret the scattering of light by homogeneous spherical particles. The Rayleigh–Gans approximation and the Lorenz–Mie theory produce identical results for homogeneous spheres in the limit as m → 1.
Lorenz–Mie theory may be generalized to spherically symmetric particles per reference.
More general shapes and structures have been treated by Erma.
Scattering data is usually represented in terms of the so-called excess Rayleigh ratio defined as the Rayleigh ratio of the solution or single particle event from which is subtracted the Rayleigh ratio of the carrier fluid itself and other background contributions, if any. The Rayleigh Ratio measured at a detector lying at an angle θ and subtending a solid angle ΔΩ is defined as the intensity of light per unit solid angle per unit incident intensity, I0, per unit illuminated scattering volume ΔV. The scattering volume ΔV from which scattered light reaches the detector is determined by the detector's field of view generally restricted by apertures, lenses and stops. Consider now a MALS measurement made in a plane from a suspension of N identical particles/molecules per ml illuminated by a fine beam of light produced by a laser. The light is assumed to be polarized perpendicular to the plane of the detectors. The scattered light intensity measured by the detector at angle θ in excess of that scattered by the suspending fluid would be
I_s(\theta) = \frac{I_0\, N\, i(\theta)\, \Delta V}{k^2 r^2},
where i(θ) is the scattering function of a single particle, k = 2πn0/λ0, n0 is the refractive index of the suspending fluid, λ0 is the vacuum wavelength of the incident light, and r is the distance between the scattering volume and the detector. The excess Rayleigh ratio, R(θ), is then given by
R(\theta) = \frac{r^2\, I_s(\theta)}{I_0\, \Delta V} = \frac{N\, i(\theta)}{k^2}.
Even for a simple homogeneous sphere of radius a whose refractive index, n, is very nearly the same as the refractive index "n0" of the suspending fluid, i.e. Rayleigh–Gans approximation, the scattering function in the scattering plane is the relatively complex quantity
i(\theta) = \frac{4}{9}\, k^6 a^6 (m-1)^2 \left[\frac{3\,(\sin u - u\cos u)}{u^3}\right]^2, where
u = 2 k a \sin(\theta/2), \quad m = n/n_0,
and λ0 is the wavelength of the incident light in vacuum.
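As a numerical illustration of the relations above, the following Python sketch evaluates the Rayleigh–Gans scattering function for a homogeneous sphere and the resulting excess Rayleigh ratio. It is illustrative only: the function names and the example parameter values (particle radius, refractive indices, laser wavelength, number density) are arbitrary choices, not values from any instrument or cited study.

```python
import numpy as np

def rg_sphere_i(theta, a, m, k):
    """Rayleigh-Gans scattering function i(theta) for a homogeneous sphere
    (perpendicular polarization), following the expression given above."""
    u = np.atleast_1d(2.0 * k * a * np.sin(np.asarray(theta, dtype=float) / 2.0))
    form = np.ones_like(u)                 # form factor -> 1 in the Rayleigh limit u -> 0
    nz = u > 1e-8
    form[nz] = 3.0 * (np.sin(u[nz]) - u[nz] * np.cos(u[nz])) / u[nz] ** 3
    return (4.0 / 9.0) * (k * a) ** 6 * (m - 1.0) ** 2 * form ** 2

def excess_rayleigh_ratio(theta, N, a, m, k):
    """Excess Rayleigh ratio R(theta) = N * i(theta) / k^2 for N particles per unit volume."""
    return N * rg_sphere_i(theta, a, m, k) / k ** 2

# Example (illustrative numbers): 50 nm radius spheres in water, 660 nm laser.
n0, lam0 = 1.33, 660e-9
k = 2 * np.pi * n0 / lam0
theta = np.radians(np.arange(10, 171, 20))
R = excess_rayleigh_ratio(theta, N=1e16, a=50e-9, m=1.10, k=k)
for t, r in zip(np.degrees(theta), R):
    print(f"theta = {t:5.1f} deg   R(theta) = {r:.3e} 1/m")
```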
Applications
Zimm plot and batch collection
MALS is most commonly used for the characterization of mass and size of molecules in solution. Early implementations of MALS such as those discussed by Bruno H. Zimm in his paper "Apparatus and Methods for Measurement and Interpretation of the Angular Variation of Light Scattering; Preliminary Results on Polystyrene Solutions" involved using a single detector rotated about a sample contained within a transparent vessel. MALS measurements from non-flowing samples such as this are commonly referred to as "batch measurements". By creating samples at several known low concentrations and detecting scattered light about the sample at varying angles, one can create a Zimm plot by plotting Kc/R(θ) vs. \sin^2(\theta/2) + kc, where c is the concentration of the sample, K is an optical constant, and k is a stretch factor used to put kc and \sin^2(\theta/2) into the same numerical range.
When plotted one can extrapolate to both zero angle and zero concentration, and analysis of the plot will give the mean square radius of the sample molecules from the initial slope of the c=0 line and the molar mass of the molecule at the point where both concentration and angle equal zero. Improvements to the Zimm plot, which incorporate all collected data (commonly referred to as a "global fit"), have largely replaced the Zimm plot in modern batch analyses.
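The global-fit idea can be sketched as a simple least-squares problem. The following Python sketch is illustrative only: it assumes a first-order Zimm model, Kc/R(θ,c) ≈ 1/Mw + a1·sin²(θ/2) + a2·c, and the function name and parameterization are choices made here, not any vendor's algorithm.

```python
import numpy as np

def zimm_global_fit(theta_deg, conc, KcR, lambda0, n0):
    """Global first-order Zimm fit: Kc/R = a0 + a1*sin^2(theta/2) + a2*c.

    theta_deg : 1-D array of scattering angles (deg)
    conc      : 1-D array of concentrations
    KcR       : 2-D array of measured Kc/R values, shape (len(conc), len(theta_deg))
    Returns (Mw, <rg^2>, A2) with Mw = 1/a0,
    <rg^2> = 3*a1*Mw*lambda0^2 / (16*pi^2*n0^2), and A2 = a2/2.
    """
    th = np.radians(np.asarray(theta_deg, dtype=float))
    s2 = np.sin(th / 2.0) ** 2
    # Build the design matrix over all (concentration, angle) pairs.
    C, S2 = np.meshgrid(np.asarray(conc, dtype=float), s2, indexing="ij")
    X = np.column_stack([np.ones(C.size), S2.ravel(), C.ravel()])
    a0, a1, a2 = np.linalg.lstsq(X, np.asarray(KcR, dtype=float).ravel(), rcond=None)[0]
    Mw = 1.0 / a0
    rg2 = 3.0 * a1 * Mw * lambda0 ** 2 / (16.0 * np.pi ** 2 * n0 ** 2)
    A2 = a2 / 2.0
    return Mw, rg2, A2
```

The extrapolations to zero angle and zero concentration in a classical Zimm plot correspond here to reading off the fitted intercept a0, while the angular slope at c = 0 carries the size information.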
SEC and flow mode
With the advent of size exclusion chromatography (SEC), MALS measurements began to be used in conjunction with an on-line concentration detector to determine absolute molar mass and size of sample fractions eluting from the column, rather than depending on calibration techniques. These flow mode MALS measurements have been extended to other separation techniques such as field flow fractionation, ion exchange chromatography, and reversed-phase chromatography.
The angular dependence of light scattering data is shown below in a figure of a mix of polystyrene spheres which was separated by SEC. The two smallest samples (farthest to the right) eluted last and show no angular dependence. The sample second to the right shows a linear angular variation, with the intensity increasing at lower scattering angles. The largest sample, on the left, elutes first and shows non-linear angular variation.
Utility of MALS measurements
Molar mass and size
Coupling MALS with an in-line concentration detector following a sample separation technique such as SEC permits the calculation of the molar mass of the eluting sample in addition to its root-mean-square radius. The figure below represents a chromatographic separation of BSA aggregates. The 90° light scattering signal from a MALS detector and the molar mass values for each elution slice are shown.
Molecular interactions
As MALS can provide molar mass and size of molecules, it permits study into protein-protein binding, oligomerization and the kinetics of self-assembly, association and dissociation. By comparing the molar mass of a sample to its concentration, one can determine the binding affinity and stoichiometry of interacting molecules.
Branching and molecular conformation
The branching ratio of a polymer relates to the number of branch units in a randomly branched polymer and the number of arms in star-branched polymers and was defined by Zimm and Stockmayer as
g = \frac{\langle r^2\rangle_\mathrm{branched}}{\langle r^2\rangle_\mathrm{linear}}, where \langle r^2\rangle_\mathrm{branched} and \langle r^2\rangle_\mathrm{linear} are the mean square radii of branched and linear macromolecules with identical molar masses. By utilizing MALS in conjunction with a concentration detector as described above, one can create a log-log plot of the root-mean-square radius vs. molar mass. The slope of this plot yields the branching ratio, g.
In addition to branching, the log-log plot of size vs. molar mass indicates the shape or conformation of a macromolecule. An increase in the slope of the plot indicates a variation in conformation of a polymer from spherical to random coil to linear. Combining the mean-square radius from MALS with the hydrodynamic radius attained from DLS measurements yields the shape factor ρ = r_g/r_h for each macromolecular size fraction.
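As a small illustration of these conformation diagnostics, the sketch below fits the slope of a log-log conformation plot and computes the shape factor for one size fraction. It is illustrative only: the function names and the rule-of-thumb slope values in the comment are assumptions for this example, not values taken from the cited literature.

```python
import numpy as np

def conformation_slope(molar_mass, rms_radius):
    """Slope of the log-log plot of r.m.s. radius vs molar mass (conformation plot).

    Rough interpretation (assumed rule of thumb): a slope near 1/3 suggests compact
    spheres, roughly 0.5-0.6 random coils, and close to 1.0 rigid rods.
    """
    logM = np.log10(np.asarray(molar_mass, dtype=float))
    logR = np.log10(np.asarray(rms_radius, dtype=float))
    slope, _intercept = np.polyfit(logM, logR, 1)
    return slope

def shape_factor(rg, rh):
    """Shape factor rho = rg / rh from MALS (rg) and DLS (rh) for one size fraction."""
    return rg / rh

# Example with made-up data for three elution slices.
print(conformation_slope([1e5, 2e5, 4e5], [12.0, 17.5, 25.0]))  # slope of the fit
print(shape_factor(rg=20.0, rh=13.0))                            # rho for one fraction
```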
Other applications
Other MALS applications include nanoparticle sizing, protein aggregation studies, protein-protein interactions, electrophoretic mobility or zeta potential. MALS techniques have been adopted for the study of pharmaceutical drug stability, crystal nucleation and crystallization kinetics and use in nanomedicine.
References
Scattering, absorption and radiative transfer (optics)
Spectroscopy
Colloidal chemistry
Scientific techniques
Scattering | Multiangle light scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,084 | [
"Colloidal chemistry",
"Molecular physics",
"Spectrum (physical sciences)",
" absorption and radiative transfer (optics)",
"Instrumental analysis",
"Colloids",
"Surface science",
"Scattering",
"Particle physics",
"Condensed matter physics",
"Nuclear physics",
"Spectroscopy"
] |
23,960,205 | https://en.wikipedia.org/wiki/Strychnine%20total%20synthesis | Strychnine total synthesis in chemistry describes the total synthesis of the complex biomolecule strychnine. The first reported method by the group of Robert Burns Woodward in 1954 is considered a classic in this research field.
At the time it formed the natural conclusion to an elaborate process of molecular structure elucidation that started with the isolation of strychnine from the beans of Strychnos ignatii by Pierre Joseph Pelletier and Joseph Bienaimé Caventou in 1818. Major contributors to the entire effort were Sir Robert Robinson with over 250 publications and Hermann Leuchs with another 125 papers in a time span of 40 years. Robinson was awarded the Nobel Prize in Chemistry in 1947 for his work on alkaloids, strychnine included.
The process of chemical identification was completed with publications in 1946 by Robinson and later confirmed by Woodward in 1947. X-ray structures establishing the absolute configuration became available between 1947 and 1951 with publications from Johannes Martin Bijvoet and J. H. Robertson
Woodward published a very brief account on the strychnine synthesis in 1954 (just 3 pages) and a lengthy one (42 pages) in 1963.
Many more methods exist, reported by the research groups of Magnus, Overman, Kuehne, Rawal, Bosch, Vollhardt, Mori, Shibasaki, Li, Fukuyama, Vanderwal and MacMillan. Synthetic (+)-strychnine is also known. Racemic syntheses were published by Padwa in 2007 and in 2010 by Andrade and by Reissig.
In his 1963 publication Woodward quoted Sir Robert Robinson who said for its molecular size it is the most complex substance known.
The molecule
The C21H22N2O2 strychnine molecule contains 7 rings including an indoline system. It has a tertiary amine group, an amide, an alkene and an ether group. The naturally occurring compound is also chiral with 6 asymmetric carbon atoms including one quaternary one.
Woodward synthesis
Ring II, V synthesis
The synthesis of ring II was accomplished with a Fischer indole synthesis using phenylhydrazine 1 and acetophenone derivative acetoveratrone 2 (catalyst polyphosphoric acid) to give the 2-veratrylindole 3. The veratryl group not only blocks the 2-position for further electrophilic substitution but will also become part of the strychnine skeleton. A Mannich reaction with formaldehyde and dimethylamine produced gramine 4. Alkylation with iodomethane gave an intermediate quaternary ammonium salt which reacted with sodium cyanide in a nucleophilic substitution to nitrile 5 and then in a reduction with lithium aluminium hydride to tryptamine 6. Amine-carbonyl condensation with ethyl glyoxylate gave the imine 7. The reaction of this imine with TsCl in pyridine to the ring-closed N-tosyl compound 8 was described by Woodward as a concerted nucleophilic enamine attack and formally a Pictet–Spengler reaction. This compound should form as a diastereomeric pair but only one compound was found although which one was not investigated. Finally the newly formed double bond was reduced by sodium borohydride to indoline 9 with the C8 hydrogen atom approaching from the least hindered side (this proton is removed later on in the sequence and is of no importance).
Ring III, IV synthesis
Indoline 9 was acetylated to N-acetyl compound 10 (acetic anhydride, pyridine) and then the veratryl group was then ring-opened with ozone in aqueous acetic acid to muconic ester 11 (made possible by the two electron-donating methoxide groups). This is an example of bioinspired synthesis already proposed by Woodward in 1948. Cleavage of the acetyl group and ester hydrolysis with HCl in methanol resulted in formation of pyridone ester 12 with additional isomerization of the exocyclic double bond to an endocyclic double bond (destroying one asymmetric center). Subsequent treatment with hydrogen iodide and red phosphorus removed the tosyl group and hydrolysed both remaining ester groups to form diacid 13. Acetylation and esterification (diazomethane) produced acetyl diester 14 which was then subjected to a Dieckman condensation with sodium methoxide in methanol to enol 15.
Ring VII synthesis
In order to remove the C15 alcohol group, enol 15 was converted to tosylate 16 (TsCl, pyridine) and then to mercaptoester 17 (sodium benzylmercaptide) which was then reduced to unsaturated ester 18 by Raney nickel and hydrogen. Further reduction with hydrogen / palladium on carbon afforded the saturated ester 19. Alkaline ester hydrolysis to carboxylic acid 20 was accompanied by epimerization at C14.
This particular compound was already known from strychnine degradation studies. Until now all intermediates were racemic but chirality was introduced at this particular stage via chiral resolution using quinidine.
The C20 carbon atom was then introduced by acetic anhydride to form enol acetate 21 and the free aminoketone 22 was obtained by hydrolysis with hydrochloric acid. Ring VII in intermediate 23 was closed by selenium dioxide oxidation, a process accompanied by epimerization again at C14.
The formation of 21 can be envisioned as a sequence of acylation, deprotonation, rearrangement with loss of carbon dioxide and again acylation:
Ring VI synthesis
To diketone 23, sodium acetylide (alkynylation) was added (bringing in carbon atoms 22 and 23) to give alkyne 24. This compound was reduced to the allyl alcohol 25 using the Lindlar catalyst and lithium aluminium hydride removed the remaining amide group in 26. An allylic rearrangement to alcohol 27 (isostrychnine) was brought about by hydrogen bromide in acetic acid followed by hydrolysis with sulfuric acid. In the final step to (−)-strychnine 28 treatment of 27 with ethanolic potassium hydroxide caused rearrangement of the C12-13 double bond and ring closure in a conjugate addition by the hydroxyl anion.
Magnus synthesis
In this effort one of strychnine's many degradation products was synthesised first (the relay compound), a compound also available in several steps from another degradation product called the Wieland-Gumlich aldehyde. In the final leg strychnine itself was synthesised from the relay compound.
Overman synthesis
The Overman synthesis (1993) took a chiral cyclopentene compound as starting material obtained by enzymatic hydrolysis of cis-1,4-diacetoxycyclopent-2-ene. This starting material was converted in several steps to trialkylstannane 2 which was then coupled with an aryl iodide 1 in a Stille reaction in the presence of carbon monoxide (tris(dibenzylideneacetone)dipalladium(0), triphenylarsine). The internal double bond in 3 was converted to an epoxide using tert-butyl hydroperoxide, the carbonyl group was then converted to an alkene in a Wittig reaction using Ph3P=CH2 and the TIPS group was hydrolyzed (TBAF) and replaced by a trifluoroacetamide group (NH2COCF3, NaH) in 4. Cyclization (NaH) took place next, opening the epoxide ring, and the trifluoroacetyl group was removed using KOH affording azabicyclooctane 5.
The key step was an aza-Cope-Mannich reaction initiated by an amine-carbonyl condensation using formaldehyde and forming 6 in a quantitative yield:
In the final sequence strychnine was obtained through the Wieland-Gumlich aldehyde (10):
Intermediate 6 was acylated using methyl cyanoformate and two protective groups (tert-butyl and ) were removed using HCl / MeOH in 7. The C8C13 double bond was reduced with zinc (MeOH/H+) to saturated ester 8 (mixture). Epimerization at C13 with sodium methoxide in MeOH produced beta-ester 9 which was reduced with diisobutylaluminium hydride to Wieland-Gumlich aldehyde 10. Conversion of this compound with malonic acid to (−)-strychnine 11 was already known as a procedure.
Kuehne synthesis
The 1993 Kuehne synthesis concerns racemic strychnine. Starting compounds tryptamine 1 and 4,4-dimethoxy acrolein 2 were reacted together with boron trifluoride to acetal 3 as a single diastereomer in an amine-carbonyl condensation / sigmatropic rearrangement sequence.
Hydrolysis with perchloric acid afforded aldehyde 4. A Johnson–Corey–Chaykovsky reaction (trimethylsulfonium iodide / n-butyllithium) converted the aldehyde into an epoxide which reacted in situ with the tertiary amine to ammonium salt 5 (contaminated with other cyclization products). Reduction (palladium on carbon/hydrogen) removed the benzyl group to alcohol 6, more reduction (sodium cyanoborohydride) and acylation (acetic anhydride / pyridine) produced 7 as a mixture of epimers (at C17). Ring closure of ring III to 8 was then accomplished with an aldol reaction using lithium bis(trimethylsilyl)amide (using only the epimer with correct configuration). Even more reduction (sodium borohydride) and acylation resulted in epimeric di-acetate 9.
A DBU mediated elimination reaction formed olefinic alcohol 10 and a subsequent Swern oxidation gave an unstable amino ketone 11. In the final steps a Horner–Wadsworth–Emmons reaction (methyl 2-(diethylphosphono)acetate) gave acrylate ester 12 as a mixture of cis and trans isomers which could be coaxed into the right (trans) direction by application of light in a photochemical rearrangement, the ester group was reduced (DIBAL / boron trifluoride) to isostrychnine 13 and racemic strychnine 14 was formed by base-catalyzed ring closure as in the Woodward synthesis.
In the 1998 Kuehne synthesis of chiral (−)-strychnine the starting material was derived from chiral tryptophan.
Rawal synthesis
In the Rawal synthesis (1994, racemic) amine 1 and enone 2 were combined in an amine-carbonyl condensation followed by methyl chloroformate quench to triene 3 which was then reacted in a Diels–Alder reaction (benzene 185 °C) to hexene 4. The three ester groups were hydrolyzed using iodotrimethylsilane forming pentacyclic lactam 5 after a methanol quench in a combination of 7 reaction steps (one of them a Dieckmann condensation). The C4 segment 6 was added in an amine alkylation and Heck reaction of 7 formed isostrychnine 8 after TBS deprotection.
The overall yield (10%) is to date the largest of any of the published methods.
Bosch synthesis
In the Bosch synthesis (1999, chiral), the olefin group in dione 1 was converted to an aldehyde by ozonolysis and chiral amine 2 was formed in a double reductive amination with (S)-1-phenethylamine. The phenylethyl substituent was removed using ClCO2CHClCH3 and the enone group was introduced in a Grieco elimination using TMSI, HMDS then PhSeCl then ozone and then diisopropylamine forming carbamate 3. The amino group was deprotected by refluxing in methanol and then alkylated using (Z)-BrCH2CICH=CH2OTBDMS to tertiary amine 4. A reductive Heck reaction took place next followed by methoxycarbonylation (LiHMDS, NCCO2Me) to tricycle 5. Reaction with zinc dust in 10% sulfuric acid removed the TBDMS protective group, reduced the nitro group and brought about a reductive amino-carbonyl cyclization in a single step to tetracyclic 6 (epimeric mixture). In the final step to the Wieland-Gumlich aldehyde 7, reaction with NaH in MeOH afforded the correct epimer and was followed by DIBAH reduction of the methyl ester.
Vollhardt synthesis
The key reaction in the Vollhardt synthesis (2000, racemic) was an alkyne trimerisation of tryptamine derivative 1 with acetylene and organocobalt compound CpCo(C2H4)2 (THF, 0 °C) to tricycle 2 after deprotection of the amine group (KOH, MeOH/H2O reflux). Subsequent reaction with iron nitrate brought about a [1,8]-conjugate addition to tetracycle 3, amine alkylation with (Z)-1-bromo-4-[(tert-butyldimethylsilyl)oxy]-2-iodobut-2-ene (see Rawal synthesis) and lithium carbonate, and isomerization of the diene system (NaOiPr, iPrOH) formed enone 4. A Heck reaction as in the Rawal synthesis (palladium acetate / triphenylphosphine), accompanied by aromatization formed pyridone 5 and lithium aluminium hydride reduction and TBS group deprotection formed isostrychnine 6.
Mori synthesis
The Mori synthesis ((-) chiral, 2003) was the first one containing an asymmetric reaction step. It also features a large number of Pd catalyzed reactions. In it N-tosyl amine 1 reacted with allyl carbonate 2 in an allylic asymmetric substitution using Pd2(dba)3 and asymmetric ligand (S-BINAPO) to chiral secondary amine 3. Desilylation of the TBDMS group next took place with HCl to give the alcohol, which was then converted to the nitrile 4 (NaCN) through the bromide (PBr3). Heck reaction (Pd(OAc)2 / Me2PPh) and debromination (Ag2CO3) afforded tricycle 5. LiAlH4 nitrile reduction to the amine and its Boc2O protection to the Boc amine 6 were then followed by a second allylic oxidation (Pd(OAc)2 / AcOH / benzoquinone / MnO2) to tetracycle 7. Hydroboration-oxidation (9-BBN / H2O2) gave alcohol 8 and a subsequent Swern oxidation gave ketone 9. Reaction with LDA / PhNTf2 gave enol triflate 10 and the triflate group was removed in alkene 11 by reaction with Pd(OAc)2 and PPh3.
Detosylation of 11 (sodium naphthalenide) and amidation with acid chloride 3-bromoacryloyl chloride gave amide 12 and another Heck reaction gave pentacycle 13. Double bond isomerization (sodium / iPrOH), Boc group deprotection (triflic acid) and amine alkylation with (Z)-BrCH2CICH=CH2OTBDMS (see Rawal) gave compound 14 (identical to one of the Vollhardt intermediates). A final Heck reaction (15) and TBDMS deprotection formed (−)-isostrychnine 16.
Shibasaki synthesis
The Shibasaki synthesis ((-) chiral, 2002) was a second published method in strychnine total synthesis using an asymmetric reaction step. Cyclohexenone 1 was reacted with dimethyl malonate 2 in an asymmetric Michael reaction using AlLibis(binaphthoxide) to form chiral diester 3. Its ketone group was protected as an acetal (2-ethyl-2-methyl-1,3-dioxolane, TsOH) and a carboxyl group was removed (LiCl, DMSO 140 °C) in monoester 4. A C2 fragment was added as Weinreb amide 5 to form PMB ether 6 using LDA. The ketone was then reduced to the alcohol (NaBH3CN, TiCl4) and then water was eliminated (DCC, CuCl) to form alkene 7. After ester reduction (DIBAL) to the alcohol and its TIPS protection (TIPSOTf, triethylamine), the acetal group was removed (catalytic CSA) in ketone 8. Enone 9 was then formed by Saegusa oxidation. The conversion to alcohol 10 was accomplished via a Mukaiyama aldol addition using formaldehyde, iodonation to 11 (iodine, DMAP) was followed by a Stille coupling (Pd2dba3, Ph3As, CuI) incorporating nitrobenzene unit 12. Alcohol 13 was formed after SEM protection (SEMCl,i-Pr2NEt) and TIPS removal (HF).
In the second part of the sequence alcohol 13 was converted to a triflate (triflic anhydride, N,N-diisopropylethylamine), then 2,2-bis(ethylthio)ethylamine 14 was added immediately followed by zinc powder, setting off a tandem reaction with nitro group reduction to the amine, 1,4-addition of the thio-amine group and amine-keto condensation to indole 16. Reaction with DMTSF gave thionium attack at C7 forming 17, the imine group was then reduced (NaBH3CN, TiCl4), the new amino group acylated (acetic anhydride, pyridine), both alcohol protecting groups removed (NaOMe / MeOH) and the allyl alcohol group protected again (TIPS). This allowed removal of the ethylthio group (NiCl2, NaBH4, EtOH/MeOH) to 18. The alcohol was oxidized to the aldehyde using a Parikh-Doering oxidation and TIPS group removal gave hemiacetal 19, called (+)-diaboline, which is the acylated Wieland-Gumlich aldehyde.
Li synthesis
The synthesis reported by Bodwell/Li (racemic, 2002) was a formal synthesis as it produced a compound already prepared by Rawal (no. 5 in the Rawal synthesis). The key step was an inverse electron demand Diels–Alder reaction of cyclophane 1 by heating in N,N-diethylaniline (dinitrogen is expelled) followed by reduction of the double bond in 2 to 3 by sodium borohydride / triflic acid and removal of the carbamate protecting group (PDC / celite) to 4.
The method is disputed by Reissig (see Reissig synthesis).
Fukuyama synthesis
The Fukuyama synthesis (chiral (-), 2004) started from cyclic amine 1. Chirality was at some point introduced into this starting material by enzymatic resolution of one of the precursors. Acyloin 2 was formed by Rubottom oxidation and hydrolysis. Oxidative cleavage by lead acetate formed aldehyde 3, removal of the nosyl group (thiophenol / cesium carbonate) triggered an amine-carbonyl condensation with iminium ion 4 continuing to react in a transannular cyclization to diester 5 which could be converted to the Wieland-Gumlich aldehyde by known chemistry.
Reissig synthesis
The method reported by Beemelmanns & Reissig (racemic, 2010) is another formal synthesis leading to the Rawal pentacycle (see amine 5 in the Rawal method). In this method indole 1 was converted to tetracycle 2 (together with by-product) in a single cascade reaction using samarium diiodide and HMPA. Raney nickel/ H2 reduction gave amine 3 and a one-pot reaction using methyl chloroformate, DMAP and TEA then MsCl, DMAP and TEA and then DBU gave Rawal precursor 4 with key hydrogen atoms in the desired anti configuration.
In an aborted route intermediate 2 was first reduced to imine 5 then converted to carbamate 6, then dehydrated to diene 7 (Burgess reagent) and finally reduced to 8 (sodium cyanoborohydride). The hydrogen atoms in 8 are in an undesired cis-relationship which contradicts the results obtained in 2002 by Bodwell/Li for the same reaction.
Vanderwal synthesis
In 2011, the Vanderwal group reported a concise, longest linear sequence of 6 steps, total synthesis of strychnine. It featured a Zincke aldehyde followed by an anionic bicyclization reaction and a tandem Brook rearrangement / conjugate addition.
External links
Strychnine Total Syntheses @ SynArchive.com
References
Total synthesis | Strychnine total synthesis | [
"Chemistry"
] | 4,656 | [
"Total synthesis",
"Chemical synthesis"
] |
23,960,842 | https://en.wikipedia.org/wiki/Applied%20element%20method | The applied element method (AEM) is a numerical analysis used in predicting the continuum and discrete behavior of structures. The modeling method in AEM adopts the concept of discrete cracking allowing it to automatically track structural collapse behavior passing through all stages of loading: elastic, crack initiation and propagation in tension-weak materials, reinforcement yield, element separation, element contact and collision, as well as collision with the ground and adjacent structures.
History
Exploration of the approach employed in the applied element method began in 1995 at the University of Tokyo as part of Dr. Hatem Tagel-Din's research studies. The term "applied element method" itself, however, was first coined in 2000 in a paper called "Applied element method for structural analysis: Theory and application for linear materials". Since then AEM has been the subject of research by a number of academic institutions and the driving factor in real-world applications. Research has verified its accuracy for: elastic analysis; crack initiation and propagation; estimation of failure loads at reinforced concrete structures; reinforced concrete structures under cyclic loading; buckling and post-buckling behavior; nonlinear dynamic analysis of structures subjected to severe earthquakes; fault-rupture propagation; nonlinear behavior of brick structures; and the analysis of glass reinforced polymers (GFRP) walls under blast loads.
Technical discussion
In AEM, the structure is divided virtually and modeled as an assemblage of relatively small elements. The elements are then connected through a set of normal and shear springs located at contact points distributed along the element faces. Normal and shear springs are responsible for the transfer of normal and shear stresses from one element to the next.
Element generation and formulation
The modeling of objects in AEM is very similar to modeling objects in FEM. Each object is divided into a series of elements connected and forming a mesh. The main difference between AEM and FEM, however, is how the elements are joined together. In AEM the elements are connected by a series of non-linear springs representing the material behavior.
There are three types of springs used in AEM:
Matrix Springs: Matrix springs connect two elements together representing the main material properties of the object.
Reinforcing Bar Springs: Reinforcement springs are used to implicitly represent additional reinforcement bars running through the object without adding additional elements to the analysis.
Contact Springs: Contact Springs are generated when two elements collide with each other or the ground. When this occurs three springs are generated (Shear Y, Shear X and Normal).
Automatic element separation
When the average strain value at the element face reaches the separation strain, all springs at this face are removed and elements are no longer connected until a collision occurs, at which point they collide together as rigid bodies.
Separation strain represents the strain at which adjacent elements are totally separated at the connecting face. This parameter is not available in the elastic material model. For concrete, all springs between the adjacent faces including reinforcement bar springs are cut. If the elements meet again, they will behave as two different rigid bodies that have now contacted each other. For steel, the bars are cut if the stress point reaches ultimate stress or if the concrete reaches the separation strain.
Automatic element contact/collision
Contact or collision is detected without any user intervention. Elements are able to separate, contract and/or make contact with other elements. In AEM, three contact methods are used: Corner-to-Face, Edge-to-Edge, and Corner-to-Ground.
Stiffness matrix
The spring stiffness in a 2D model can be calculated from the following equations:
K_n = \frac{E\, T\, d}{a}, \qquad K_s = \frac{G\, T\, d}{a}
Where K_n and K_s are the normal and shear spring stiffnesses, d is the distance between springs, T is the thickness of the element, a is the length of the representative area, E is the Young's modulus, and G is the shear modulus of the material. The above equations indicate that each spring represents the stiffness of an area (T·d) within the length of the studied material.
To model reinforcement bars embedded in concrete, a spring is placed inside the element at the location of the bar; the area (T·d) is replaced by the actual cross section area of the reinforcement bar. Similar to modeling embedded steel sections, the area (T·d) may be replaced by the area of the steel section represented by the spring.
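To illustrate these stiffness expressions, here is a small Python sketch. It is illustrative only: the stiffness formulas follow the reconstructed expressions above, and the function names and material values are assumptions made for this example, not taken from any AEM implementation.

```python
def matrix_spring_stiffness(E, G, T, d, a):
    """Normal and shear stiffness of a matrix spring representing an area T*d over length a."""
    k_normal = E * T * d / a
    k_shear = G * T * d / a
    return k_normal, k_shear

def rebar_spring_stiffness(E_steel, G_steel, A_bar, a):
    """Reinforcement spring: the represented area T*d is replaced by the bar cross-section A_bar."""
    return E_steel * A_bar / a, G_steel * A_bar / a

# Example: a concrete element 0.1 m thick, springs spaced 0.02 m apart, 0.1 m representative length.
E_c, G_c = 25e9, 10e9          # Pa (illustrative concrete values)
kn, ks = matrix_spring_stiffness(E_c, G_c, T=0.1, d=0.02, a=0.1)
print(f"matrix spring: kn = {kn:.3e} N/m, ks = {ks:.3e} N/m")

# A 16 mm bar (cross-section about 2.01e-4 m^2) of steel crossing the same interface.
kn_s, ks_s = rebar_spring_stiffness(200e9, 77e9, A_bar=2.01e-4, a=0.1)
print(f"rebar spring:  kn = {kn_s:.3e} N/m, ks = {ks_s:.3e} N/m")
```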
Although each element moves as a rigid body, its internal deformations are represented by the deformation of the springs around it. This means the element shape does not change during the analysis, but the assembly of elements as a whole is deformable.
The two elements are assumed to be connected by only one pair of normal and shear springs. To have a general stiffness matrix, the locations of element and contact springs are assumed in a general position. The stiffness matrix components corresponding to each degree of freedom are determined by assuming a unit displacement in the studied direction and by determining forces at the centroid of each element. The 2D element stiffness matrix is 6 × 6; the components of its upper left quarter follow from this procedure.
The stiffness matrix depends on the contact spring stiffness and the spring location. The stiffness matrix is for only one pair of contact springs. However, the global stiffness matrix is determined by summing up the stiffness matrices of individual pairs of springs around each element. Consequently, the developed stiffness matrix has total effects from all pairs of springs, according to the stress situation around the element. This technique can be used in both load and displacement control cases. The 3D stiffness matrix may be deduced similarly.
Applications
The applied element method is currently being used in the following applications:
Structural vulnerability assessment
Progressive collapse
Blast analysis
Impact analysis
Seismic analysis
Forensic engineering
Performance based design
Demolition analysis
Glass performance analysis
Visual effects
See also
Building implosion
Earthquake engineering
Extreme Loading for Structures
Failure analysis
Multidisciplinary design optimization
Physics engine
Progressive collapse
Shear modulus
Structural engineering
Young's modulus
References
Further reading
Applied Element Method
Extreme Loading for Structures - Applied Element Method
Structural analysis
Structural engineering
Construction
Demolition
Building engineering
Glass engineering and science
Numerical analysis
Scientific simulation software | Applied element method | [
"Materials_science",
"Mathematics",
"Engineering"
] | 1,214 | [
"Structural engineering",
"Glass engineering and science",
"Demolition",
"Building engineering",
"Aerospace engineering",
"Structural analysis",
"Computational mathematics",
"Materials science",
"Construction",
"Civil engineering",
"Mathematical relations",
"Mechanical engineering",
"Numeric... |
20,933,302 | https://en.wikipedia.org/wiki/Star%20coloring | In the mathematical field of graph theory, a star coloring of a graph is a (proper) vertex coloring in which every path on four vertices uses at least three distinct colors. Equivalently, in a star coloring, the induced subgraphs formed by the vertices of any two colors has connected components that are star graphs. Star coloring has been introduced by .
The star chromatic number χ_s(G) of a graph G is the fewest colors needed to star color G.
One generalization of star coloring is the closely related concept of acyclic coloring, where it is required that every cycle uses at least three colors, so the two-color induced subgraphs are forests. If we denote the acyclic chromatic number of a graph G by χ_a(G), we have that χ_a(G) ≤ χ_s(G), and in fact every star coloring of G is an acyclic coloring.
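To make the definition concrete, the following Python sketch (illustrative only; the data structures and names are chosen for this example) checks whether a given proper coloring is a star coloring by searching for a path on four vertices that uses only two colors.

```python
def is_star_coloring(adj, color):
    """Check that a coloring is proper and that no path on 4 vertices is 2-colored.

    adj   : dict mapping each vertex to the set of its neighbours (undirected graph).
    color : dict mapping each vertex to its color.
    A proper coloring is a star coloring iff every path v1-v2-v3-v4 uses >= 3 colors,
    i.e. no bicolored P4 exists.
    """
    # First check that the coloring is proper.
    for v, nbrs in adj.items():
        if any(color[v] == color[u] for u in nbrs):
            return False
    # Enumerate all paths v1-v2-v3-v4 on distinct vertices and count their colors.
    for v2 in adj:
        for v3 in adj[v2]:
            for v1 in adj[v2] - {v3}:
                for v4 in adj[v3] - {v1, v2}:
                    if len({color[v1], color[v2], color[v3], color[v4]}) < 3:
                        return False
    return True

# Example: the 4-cycle properly 2-colored is NOT a star coloring (the whole cycle is a
# bicolored path on 4 vertices), but a proper 3-coloring of it is.
c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_star_coloring(c4, {1: 0, 2: 1, 3: 0, 4: 1}))  # False
print(is_star_coloring(c4, {1: 0, 2: 1, 3: 0, 4: 2}))  # True
```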
The star chromatic number has been proved to be bounded on every proper minor-closed class of graphs. This result was later generalized to all low-tree-depth colorings (standard coloring and star coloring being low-tree-depth colorings with respective parameters 1 and 2).
Complexity
It has been demonstrated that it is NP-complete to determine whether χ_s(G) ≤ 3, even when G is a graph that is both planar and bipartite.
It has also been shown that finding an optimal star coloring is NP-hard even when G is a bipartite graph.
References
External links
Star colorings and acyclic colorings (1973), presented at the Research Experiences for Graduate Students (REGS) at the University of Illinois, 2008.
Graph coloring
NP-complete problems | Star coloring | [
"Mathematics"
] | 322 | [
"Graph coloring",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
20,935,181 | https://en.wikipedia.org/wiki/MacVector | MacVector is a commercial sequence analysis application for Apple Macintosh computers running Mac OS X. It is intended to be used by molecular biologists to help analyze, design, research and document their experiments in the laboratory. MacVector 18.1 is a Universal Binary capable of running on Intel and Apple Silicon Macs.
Features
MacVector is a collection of sequence analysis algorithms linked to various sequence editors, including a single sequence editor, a multiple sequence alignment editor and a contig editor. MacVector tries to use a minimum of windows and steps to access all the functionality. Functions include:
Sequence alignment (ClustalW, Muscle and T-Coffee) and editing.
Subsequence search and open reading frames (ORFs) analysis.
Phylogenetic tree construction: UPGMA, neighbour joining with bootstrapping, and consensus trees
Online Database searching - Search public databases at the NCBI such as Genbank, PubMed, and UniProt.
Perform online BLAST searches.
Protein analysis.
Contig assembly and chromatogram editing
Aligning cDNA against genomic templates
Creating dot plots of DNA to DNA, Protein to Protein and DNA to protein.
Restriction analysis - find and view restriction cut sites. Uses digested fragments to clone genes into vectors. Stores a history of digested fragments allowing multi fragment ligations.
PCR Primer design - easy primer design and testing. Also uses primer3
Agarose Gel simulation.
CRISPR INDEL analysis.
MacVector has a contig assembly plugin called Assembler that uses phred, phrap, Bowtie, SPAdes, Velvet and cross match.
As of version 13.0.1 MacVector uses Sparkle for updating between releases.
History
MacVector was originally developed by IBI in 1994. It was acquired by Kodak, and subsequently Oxford Molecular in 1996. Oxford Molecular was merged into Accelrys in 2001. It was acquired by MacVector, Inc on 1 January 2007.
References
External links
MacVector homepage
Bioinformatics software
Computational science | MacVector | [
"Mathematics",
"Biology"
] | 420 | [
"Computational science",
"Applied mathematics",
"Bioinformatics",
"Bioinformatics software"
] |
20,935,448 | https://en.wikipedia.org/wiki/Pho4 | Pho4 is a protein with a basic helix-loop-helix (bHLH) transcription factor. It is found in S. cerevisiae and other yeasts. It functions as a transcription factor to regulate phosphate responsive genes located in yeast cells. The Pho4 protein homodimer is able to do this by binding to DNA sequences containing the bHLH binding site 5'-CACGTG-3'. This sequence is found in the promoters of genes up-regulated in response to phosphate availability such as the PHO5 gene.
Structure
The PHO4 protein consists of 312 amino acid residues (Yoshida et al., 1989) and has four functional domains. PHO4 is one of the regulatory proteins indispensable for transcription of the PHO5, PHO81 and PHO84 genes. The DNA-binding domain of PHO4 consists of two helices, designated H1 and H2, separated by a long loop that contains a novel α-helical region. PHO4 binds to DNA as a homodimer and the two monomers fold into a parallel, left-handed four-helix bundle. PHO4 protein lacks an inner hydrogen network.
Mechanism
Pho4 is a transcription factor that assists in regulating cell growth. When activated, Pho4 is translocated to the nucleus. Pho4 has a Nuclear Exchange factor that an importin protein is able to bind to. The importin protein will bind to its "signal" or nuclear exchange factor and aid in translocating the nuclear exchange factor tagged protein into the nucleus. Additionally, another transcription factor known as Pho2, binds to Pho4 and assists in Pho4's ability to bind tightly to its binding site on its specific target genes. This completes the addition of the binding partners that Pho4 needs in order to be capable of acting as a transcription factor by up-regulating the transcription of phosphate-responsive genes.
Regulation
Suppression
Down regulation of the transcription factor Pho4 is seen when the yeast cell has a phosphate-rich environment. Under high phosphate concentrations, it is seen that a cyclin-dependent kinase, known as PHO80-PHO85, phosphorylates PHO4 on its serine residues (O’Neil et al. 209–212). This blocks the binding sites for the importin and Pho2 transcription factor and allows for the receptor Msn5p to assist in the removal of the Pho4 protein from the nucleus and back into the cytoplasmic space. Additionally, because the binding site of importin on the Pho4 protein is blocked by the phosphorylation that PHO4 undergoes, PHO4 is no longer able to be translocated into the nucleus.
Up-regulation
Up-regulation of Pho4 is seen in phosphate-deficient yeast cells. This occurs due to the cyclin-dependent kinase PHO80-PHO85 being inhibited by the cyclin-dependent kinase inhibitor PHO81. In low concentrations of phosphate, the CDK inhibitor PHO81 is able to prevent PHO80-PHO85 from phosphorylating PHO4 at its serine residues. When this occurs, importin and PHO2 are able to bind to PHO4 and assist in the translocation and tight binding to its binding site on the gene PHO5, which will then become up-regulated.
References
External links
Transcription factors
Saccharomyces cerevisiae genes | Pho4 | [
"Chemistry",
"Biology"
] | 773 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
20,937,099 | https://en.wikipedia.org/wiki/Ornithine%20oxoglutarate | Ornithine oxoglutarate (OGO) or ornithine α-ketoglutarate (OKG) is a drug used in liver therapy. It is the salt formed from ornithine and alpha-ketoglutaric acid. It is also used to improve nutritional health in elderly patients.
References
Amino acids
Combination drugs | Ornithine oxoglutarate | [
"Chemistry"
] | 76 | [
"Amino acids",
"Biomolecules by chemical classification"
] |
20,937,523 | https://en.wikipedia.org/wiki/Hepatitis%20C%20virus%20envelope%20glycoprotein%20E2 | E2 is a viral structural protein found in the hepatitis C virus. It is present on the viral envelope and functions as a host receptor binding protein, mediating entry into host cells. It is a key target for the design of entry inhibitors and vaccine immunogens.
References
Viral structural proteins
Hepatitis C virus | Hepatitis C virus envelope glycoprotein E2 | [
"Biology"
] | 64 | [
"Virus stubs",
"Viruses"
] |
20,937,534 | https://en.wikipedia.org/wiki/Hepatitis%20C%20virus%20nonstructural%20protein%203 | Nonstructural protein 3 (NS3), also known as p-70, is a viral nonstructural protein that is 70 kDa cleavage product of the hepatitis C virus polyprotein. It acts as a serine protease. C-terminal two-thirds of the protein also acts as helicase and nucleoside triphosphatase. First (N-terminal) 180 aminoacids of NS3 has additional role as cofactor domains for NS2 protein.
See also
Boceprevir, sovaprevir, paritaprevir and telaprevir - drugs targeting this protein
References
External links
http://www.uniprot.org/uniprot/Q91RS4
Viral nonstructural proteins
Hepatitis C virus | Hepatitis C virus nonstructural protein 3 | [
"Biology"
] | 167 | [
"Virus stubs",
"Viruses"
] |
20,937,590 | https://en.wikipedia.org/wiki/Hepatitis%20C%20virus%20nonstructural%20protein%204A | Nonstructural protein 4A (NS4A) is a viral protein found in the hepatitis C virus. It acts as a cofactor for the enzyme NS3.
References
Viral nonstructural proteins
Hepatitis C virus | Hepatitis C virus nonstructural protein 4A | [
"Biology"
] | 49 | [
"Virus stubs",
"Viruses"
] |
20,938,051 | https://en.wikipedia.org/wiki/Ji%C5%99%C3%AD%20Matou%C5%A1ek%20%28mathematician%29 | Jiří (Jirka) Matoušek (10 March 1963 – 9 March 2015) was a Czech mathematician working in computational geometry and algebraic topology. He was a professor at Charles University in Prague and the author of several textbooks and research monographs.
Biography
Matoušek was born in Prague. In 1986, he received his Master's degree at Charles University under Miroslav Katětov. From 1986 until his death he was employed at the Department of Applied Mathematics of Charles University, holding a professor position since 2000. He was also a visiting and later full professor at ETH Zurich.
In 1996, he won the European Mathematical Society prize and in 2000 he won the Scientist award of the Learned Society of the Czech Republic. In 1998 he was an Invited Speaker of the International Congress of Mathematicians in Berlin. He became a fellow of the Learned Society of the Czech Republic in 2005.
Matoušek's paper on computational aspects of algebraic topology won the Best Paper award at the 2012 ACM Symposium on Discrete Algorithms.
Aside from his own academic writing, he has translated the popularization book Mathematics: A Very Short Introduction by Timothy Gowers into Czech. He was a supporter and signatory of the Cost of Knowledge protest.
Matoušek died in 2015, aged 51. In 2021, a lecture hall at the Faculty of Mathematics and Physics, Charles University, was named after him.
Books
Invitation to Discrete Mathematics (with Jaroslav Nešetřil). Oxford University Press, 1998. . Translated into French by Delphine Hachez as Introduction aux Mathématiques Discrètes, Springer-Verlag, 2004, .
Geometric Discrepancy: An Illustrated Guide. Springer-Verlag, Algorithms and Combinatorics 18, 1999, .
Lectures on Discrete Geometry. Springer-Verlag, Graduate Texts in Mathematics, 2002, .
Using the Borsuk-Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry. Springer-Verlag, 2003. .
Topics in Discrete Mathematics: Dedicated to Jarik Nešetřil on the Occasion of His 60th Birthday (with Martin Klazar, Jan Kratochvíl, Martin Loebl, Robin Thomas, and Pavel Valtr). Springer-Verlag, Algorithms and Combinatorics 26, 2006. .
Understanding and Using Linear Programming (with B. Gärtner). Springer-Verlag, Universitext, 2007, .
Thirty-three miniatures — Mathematical and algorithmic applications of linear algebra. American Mathematical Society, 2010, .
Approximation Algorithms and Semidefinite Programming (with B. Gärtner). Springer Berlin Heidelberg, 2012, .
Mathematics++: Selected Topics Beyond the Basic Courses (with Ida Kantor and Robert Šámal). American Mathematical Society, 2015, .
See also
Ham sandwich theorem
Discrepancy theory
Kneser graph
References
External links
Jiri Matousek home page
1963 births
2015 deaths
Mathematicians from Prague
Charles University alumni
Czech mathematicians
Researchers in geometric algorithms
Academic staff of Charles University
Academic staff of ETH Zurich
Combinatorialists
Topologists | Jiří Matoušek (mathematician) | [
"Mathematics"
] | 608 | [
"Topologists",
"Topology",
"Combinatorialists",
"Combinatorics"
] |
20,939,923 | https://en.wikipedia.org/wiki/Marine%20cloud%20brightening | Marine cloud brightening also known as marine cloud seeding and marine cloud engineering is a proposed solar radiation management technique that would make clouds brighter, reflecting a small fraction of incoming sunlight back into space in order to offset global warming. Along with stratospheric aerosol injection, it is one of the two solar radiation management methods that may most feasibly have a substantial climate impact. The intention is that increasing the Earth's albedo, in combination with greenhouse gas emissions reduction, would reduce climate change and its risks to people and the environment. If implemented, the cooling effect is expected to be felt rapidly and to be reversible on fairly short time scales. However, technical barriers remain to large-scale marine cloud brightening. There are also risks with such modification of complex climate systems.
Basic principles
Marine cloud brightening is based on phenomena that are currently observed in the climate system. Today, particles from emissions mix with clouds in the atmosphere and increase the amount of sunlight they reflect, reducing warming. This 'cooling' effect is estimated at between 0.5 and 1.5 °C, and is one of the most important uncertainties in climate science. Marine cloud brightening proposes to generate a similar effect using benign material (e.g. sea salt) delivered to clouds that are most susceptible to these effects (marine stratocumulus).
Most clouds are quite reflective, redirecting incoming solar radiation back into space. Increasing clouds' albedo would increase the portion of incoming solar radiation that is reflected, in turn cooling the planet. Clouds consist of water droplets, and clouds with smaller droplets are more reflective (because of the Twomey effect). Cloud condensation nuclei are necessary for water droplet formation. The central idea underlying marine cloud brightening is to add aerosols to atmospheric locations where clouds form. These would then act as cloud condensation nuclei, increasing the cloud albedo.
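The magnitude of this effect can be illustrated with a commonly cited first-order approximation (a sketch only, not part of the source article): at fixed liquid water content, the albedo susceptibility is roughly dA/d(ln N) = A(1 - A)/3, so a fractional increase in droplet number N translates into an albedo change as in the following Python sketch. The numerical values are illustrative assumptions.

```python
import math

def albedo_change(albedo: float, droplet_increase_fraction: float) -> float:
    """Estimate the change in cloud albedo from a fractional increase in
    droplet number concentration, using the first-order Twomey
    susceptibility dA/d(ln N) ~= A(1 - A)/3 at fixed liquid water content."""
    susceptibility = albedo * (1.0 - albedo) / 3.0
    return susceptibility * math.log(1.0 + droplet_increase_fraction)

# Illustrative numbers (assumptions, not measurements): a marine
# stratocumulus deck with albedo 0.5 whose droplet number is raised by 30%.
delta_a = albedo_change(albedo=0.5, droplet_increase_fraction=0.30)
print(f"Approximate albedo increase: {delta_a:.3f}")   # roughly 0.02
```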
Marine cloud brightening on a small scale already occurs unintentionally due to the aerosols in ships' exhaust, leaving ship tracks. Changes to shipping regulations enacted by the United Nations' International Maritime Organization (IMO) to reduce certain aerosols are hypothesized to be leading to reduced cloud cover and increased oceanic warming, providing additional support to the potential effectiveness of marine cloud brightening at modifying ocean temperature. Different cloud regimes are likely to have differing susceptibility to brightening strategies, with marine stratocumulus clouds (low, layered clouds over ocean regions) most sensitive to aerosol changes. These marine stratocumulus clouds are thus typically proposed as the most suitable target. They are common over the cooler regions of subtropical and midlatitude oceans, where their coverage can exceed 50% in the annual mean.
The leading possible source of additional cloud condensation nuclei is salt from seawater, although there are others.
Even though the importance of aerosols for the formation of clouds is, in general, well understood, many uncertainties remain. In fact, the latest IPCC report considers aerosol-cloud interactions as one of the current major challenges in climate modeling in general. In particular, the number of droplets does not increase proportionally when more aerosols are present and can even decrease. Extrapolating the effects of particles on clouds observed on the microphysical scale to the regional, climatically relevant scale, is not straightforward.
Climatic impacts
Reduction in global warming
The modeling evidence of the global climatic effects of marine cloud brightening remains limited. Current modeling research indicates that marine cloud brightening could substantially cool the planet. One study estimated that it could produce 3.7 W/m2 of globally averaged negative forcing. This would counteract the warming caused by a doubling of the preindustrial atmospheric carbon dioxide concentration, or an estimated 3 degrees Celsius, although models have indicated less capacity. A 2020 study found a substantial increase in cloud reflectivity from shipping in the southeast Atlantic basin, suggesting that a regional-scale test of MCB in stratocumulus-dominated regions could be successful.
The climatic impacts of marine cloud brightening would be rapidly responsive and reversible. If the brightening activity were to change in intensity, or stop altogether, then the clouds' brightness would respond within a few days to weeks, as the cloud condensation nuclei particles precipitate naturally.
Again unlike stratospheric aerosol injection, marine cloud brightening might be able to be used regionally, albeit in a limited manner. Marine stratocumulus clouds are common in particular regions, specifically the eastern Pacific Ocean and the eastern South Atlantic Ocean. A typical finding among simulation studies was a persistent cooling of the Pacific, similar to the “La Niña” phenomenon, and, despite the localized nature of the albedo change, an increase in polar sea ice. Recent studies aim at making simulation findings derived from different models comparable.
Side effects
There is some potential for changes to precipitation patterns and amplitude, although modeling suggests that the changes are likely less than those for stratospheric aerosol injection and considerably smaller than for unabated anthropogenic global warming.
Regional implementations of MCB would need care to avoid causing adverse consequences in areas far away from the region they are aiming to help. For example, a marine cloud brightening deployment aimed at cooling the Western United States could risk increasing heat in Europe, due to climate teleconnections such as unintended perturbation of the Atlantic meridional overturning circulation.
Research
Marine cloud brightening was originally suggested by John Latham in 1990.
Because clouds remain a major source of uncertainty in climate change, some research projects into cloud reflectivity in the general climate change context have provided insight into marine cloud brightening specifically. For example, one project released smoke behind ships in the Pacific Ocean and monitored the particulates' impact on clouds. Although this was done in order to better understand clouds and climate change, the research has implications for marine cloud brightening.
A research coalition called the Marine Cloud Brightening Project was formed in order to coordinate research activities. Its proposed program includes modeling, field experiments, technology development and policy research to study cloud-aerosol effects and marine cloud brightening. The proposed program currently serves as a model for process-level (environmentally benign) experimental programs in the atmosphere. Formed in 2009 by Kelly Wanser with support from Ken Caldeira, the project is now housed at the University of Washington. Its co-principals are Robert Wood, Thomas Ackerman, Philip Rasch, Sean Garner (PARC), and Kelly Wanser (Silver Lining). The project is managed by Sarah Doherty.
The shipping industry may have been carrying out an unintentional experiment in marine cloud brightening: ship emissions may have kept global temperatures as much as 0.25 °C lower than they would otherwise have been. A 2020 study found a substantial increase in cloud reflectivity from shipping in the southeast Atlantic basin, suggesting that a regional-scale test of MCB in stratocumulus-dominated regions could be successful.
Marine cloud brightening is being examined as a way to shade and cool coral reefs such as the Great Barrier Reef.
Proposed methods
The leading proposed method for marine cloud brightening is to generate a fine mist of salt from seawater, and to deliver it into targeted banks of marine stratocumulus clouds from ships traversing the ocean. This requires technology that can generate optimally-sized (~100 nm) sea-salt particles and deliver them at sufficient force and scale to penetrate low-lying marine clouds. The resulting spray mist must then be delivered continuously into target clouds over the ocean.
In the earliest published studies, John Latham and Stephen Salter proposed a fleet of around 1500 unmanned Rotor ships, or Flettner ships, that would spray mist created from seawater into the air. The vessels would spray sea water droplets at a rate of approximately 50 cubic meters per second over a large portion of Earth's ocean surface. The power for the rotors and the ship could be generated from underwater turbines. Salter and colleagues proposed using active hydrofoils with controlled pitch for power.
Subsequent researchers determined that transport efficiency was only relevant for use at scale, and that for research requirements, standard ships could be used for transport. (Some researchers considered aircraft as an option, but concluded that it would be too costly.) Droplet generation and delivery technology is critical to progress, and technology research has been focused on solving this challenging problem.
Other methods were proposed and discounted, including:
Generating small droplets of seawater through ocean foams; when bubbles in the foam burst, they loft small droplets of seawater into the air.
Using a piezoelectric transducer to create Faraday waves at a free surface. If the waves are steep enough, droplets of sea water will be thrown from the crests and the resulting salt particles can enter the clouds. However, a significant amount of energy is required.
Electrostatic atomization of seawater drops. This technique would utilize mobile spray platforms that move to adjust to changing weather conditions. These too could be on unmanned ships.
Using engine or smoke emissions as a source for CCN. Paraffin oil particles have also been proposed, though their viability has been discounted.
Costs
The costs of marine cloud brightening remain largely unknown. One academic paper implied annual costs of approximately 50 to 100 million UK pounds (roughly 75 to 150 million US dollars). A report of the US National Academies suggested roughly five billion US dollars annually for a large deployment program (reducing radiative forcing by 5 W/m2).
Governance
Marine cloud brightening would be governed primarily by international law because it would likely take place outside of countries' territorial waters, and because it would affect the environment of other countries and of the oceans. For the most part, the international law governing solar radiation management in general would apply. For example, according to customary international law, if a country were to conduct or approve a marine cloud brightening activity that would pose significant risk of harm to the environments of other countries or of the oceans, then that country would be obligated to minimize this risk pursuant to a due diligence standard. In this, the country would need to require authorization for the activity (if it were to be conducted by a private actor), perform a prior environmental impact assessment, notify and cooperate with potentially affected countries, inform the public, and develop plans for a possible emergency.
Marine cloud brightening activities would be further governed by the international law of the sea, and particularly by the United Nations Convention on the Law of the Sea (UNCLOS). Parties to the UNCLOS are obligated to "protect and preserve the marine environment," including by preventing, reducing, and controlling pollution of the marine environment from any source. The "marine environment" is not defined but is widely interpreted as including the ocean's water, lifeforms, and the air above. "Pollution of the marine environment" is defined in a way that includes global warming and greenhouse gases. The UNCLOS could thus be interpreted as obligating the involved Parties to use methods such as marine cloud brightening if these were found to be effective and environmentally benign. Whether marine cloud brightening itself could be such pollution of the marine environment is unclear. At the same time, in combating pollution, Parties are "not to transfer, directly or indirectly, damage or hazards from one area to another or transform one type of pollution into another." If marine cloud brightening were found to cause damage or hazards, the UNCLOS could prohibit it.
If marine cloud brightening activities were to be "marine scientific research" (also an undefined term), then UNCLOS Parties have a right to conduct the research, subject to some qualifications. Like all other ships, those that would conduct marine cloud brightening must bear the flag of the country that has given them permission to do so and to which the ship has a genuine link, even if the ship is unmanned or automated. The flag state must exercise its jurisdiction over those ships.
The legal implications would depend on, among other things, whether the activity were to occur in territorial waters, an exclusive economic zone (EEZ), or the high seas; and whether the activity was scientific research or not. Coastal states would need to approve any marine cloud brightening activities in their territorial waters. In the EEZ, the ship must comply with the coastal state's laws and regulations. It appears that the state conducting marine cloud brightening activities in another state's EEZ would not need the latter's permission, unless the activity were marine scientific research; in that case, the coastal state should grant permission in normal circumstances. States would be generally free to conduct marine cloud brightening activities on the high seas, provided that this is done with "due regard" for other states' interests. There is some legal uncertainty regarding unmanned or automated ships.
Advantages and disadvantages
Marine cloud brightening appears to have most of the advantages and disadvantages of solar radiation management in general. For example, it presently appears to be inexpensive relative to suffering climate change damages and greenhouse gas emissions abatement, fast acting, and reversible in its direct climatic effects. Some advantages and disadvantages are specific to it, relative to other proposed solar radiation management techniques.
Compared with other proposed solar radiation management methods, such as stratospheric aerosol injection, marine cloud brightening may be able to be partially localized in its effects. This could, for example, be used to stabilize the West Antarctic Ice Sheet. Furthermore, marine cloud brightening, as it is currently envisioned, would use only natural substances, namely sea water and wind, instead of introducing human-made substances into the environment.
Potential disadvantages include that specific MCB implementations could have a varying effect across time; the same intervention might even become a net contributor to global warming some years after being first launched, though this could be avoided with careful planning.
See also
Climate engineering
Solar radiation management
Stratospheric sulfate aerosols (geoengineering)
Cirrus cloud thinning
References
Climate change policy
Planetary engineering
Climate engineering | Marine cloud brightening | [
"Engineering"
] | 2,886 | [
"Planetary engineering",
"Geoengineering"
] |
3,595,285 | https://en.wikipedia.org/wiki/Maximum%20power%20principle | The maximum power principle or Lotka's principle has been proposed as the fourth principle of energetics in open system thermodynamics. According to American ecologist Howard T. Odum, "The maximum power principle can be stated: During self-organization, system designs develop and prevail that maximize power intake, energy transformation, and those uses that reinforce production and efficiency."
History
Chen (2006) has located the origin of the statement of maximum power as a formal principle in a tentative proposal by Alfred J. Lotka (1922a, b). Lotka's statement sought to explain the Darwinian notion of evolution with reference to a physical principle. Lotka's work was subsequently developed by the systems ecologist Howard T. Odum in collaboration with the chemical engineer Richard C. Pinkerton, and later advanced by the engineer Myron Tribus.
While Lotka's work may have been a first attempt to formalise evolutionary thought in mathematical terms, it followed similar observations made by Leibniz and Volterra and Ludwig Boltzmann, for example, throughout the sometimes controversial history of natural philosophy. In contemporary literature it is most commonly associated with the work of Howard T. Odum.
The significance of Odum's approach was given greater support during the 1970s, amid times of oil crisis, where, as Gilliland (1978, pp. 100) observed, there was an emerging need for a new method of analysing the importance and value of energy resources to economic and environmental production. A field known as energy analysis, itself associated with net energy and EROEI, arose to fulfill this analytic need. However, in energy analysis intractable theoretical and practical difficulties arose when using the energy unit to understand, a) the conversion among concentrated fuel types (or energy types), b) the contribution of labour, and c) the contribution of the environment.
Philosophy and theory
Lotka said (1922b: 151):
Gilliland noted that these difficulties in analysis in turn required some new theory to adequately explain the interactions and transactions of these different energies (different concentrations of fuels, labour and environmental forces). Gilliland (Gilliland 1978, p. 101) suggested that Odum's statement of the maximum power principle (H.T.Odum 1978, pp. 54–87) was, perhaps, an adequate expression of the requisite theory:
This theory Odum called maximum power theory. In order to formulate maximum power theory Gilliland observed that Odum had added another law (the maximum power principle) to the already well established laws of thermodynamics. In 1978 Gilliland wrote that Odum's new law had not yet been validated (Gilliland 1978, p. 101). Gilliland stated that in maximum power theory the second law efficiency of thermodynamics required an additional physical concept: "the concept of second law efficiency under maximum power" (Gilliland 1978, p. 101):
In this way the concept of maximum power was being used as a principle to quantitatively describe the selective law of biological evolution. Perhaps H.T.Odum's most concise statement of this view was (1970, p. 62):
The Odum–Pinkerton approach to Lotka's proposal was to apply Ohm's law – and the associated maximum power theorem (a result in electrical power systems) – to ecological systems. Odum and Pinkerton defined "power" in electronic terms as the rate of work, where work is understood as a "useful energy transformation". The concept of maximum power can therefore be defined as the maximum rate of useful energy transformation. Hence the underlying philosophy aims to unify the theories and associated laws of electronic and thermodynamic systems with biological systems. This approach presupposed an analogical view which sees the world as an ecological-electronic-economic engine.
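The electrical result being borrowed can be made concrete with a small sketch (standard circuit theory, not Odum's own computation): for a source of EMF V and internal resistance r driving a load R, the delivered power V^2 R/(R + r)^2 is maximized when R = r, at which point only half of the dissipated power reaches the load. The Python below sweeps a few illustrative load values.

```python
def load_power(v_source: float, r_internal: float, r_load: float) -> float:
    """Power delivered to a resistive load by a source with internal resistance."""
    current = v_source / (r_internal + r_load)
    return current ** 2 * r_load

def efficiency(r_internal: float, r_load: float) -> float:
    """Fraction of the total dissipated power that reaches the load."""
    return r_load / (r_internal + r_load)

# Illustrative 10 V source with 5 ohm internal resistance: delivered power
# peaks where r_load equals r_internal, and there the efficiency is only 0.5.
v, r_int = 10.0, 5.0
for r in (1.0, 2.0, 5.0, 10.0, 20.0):
    print(f"R={r:5.1f} ohm  P={load_power(v, r_int, r):4.2f} W  "
          f"eff={efficiency(r_int, r):.2f}")
```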
Proposals for maximum power principle as 4th thermodynamic law
Lotka underscored the centrality of available energy in the struggle for survival and evolution. By asserting that organisms with more efficient energy-capturing mechanisms gain an advantage, Lotka essentially aligned with the MPP. The MPP posits that biological systems, like any other complex systems, tend to evolve in ways that maximize their power intake or energy flux. In this context, organisms that effectively capture and utilize energy resources are more likely to thrive and propagate, driving evolutionary processes. Lotka's observation provides an early theoretical foundation for understanding the role of energy dynamics in biological evolution.
Odum's proposition of linking Darwinian natural selection with the fourth law of thermodynamics is significant. It suggests that the principles governing biological evolution are intimately connected with those of thermodynamics, particularly the concepts of energy flow and entropy production. By considering natural selection as a thermodynamic process, Odum implies that organisms evolve traits and behaviors that enhance their energy capture and utilization, consistent with the MPP. Furthermore, by emphasizing the regulation of heat generation and efficiency in biological processes, Odum highlights the importance of optimizing energy utilization for survival and reproduction, echoing the principles of the MPP.
Odum's advocacy for recognizing the MPP as the fourth thermodynamic law represents a culmination of earlier insights into the relationship between thermodynamics and biology. By elevating the MPP to the status of a fundamental thermodynamic law, Odum underscores its universal applicability across various complex systems, including biological ones. This perspective emphasizes that biological systems, driven by the imperatives of survival and reproduction, tend to evolve in ways that maximize their power output, thereby enhancing their capacity to exploit available energy resources. By acknowledging the MPP as a governing principle, Odum highlights its explanatory power in understanding ecosystem dynamics, population interactions, and evolutionary trajectories.
Definition in words
Odum et al. viewed the maximum power theorem as a principle of power-efficiency reciprocity selection with wider application than just electronics. For example, Odum saw it in open systems operating on solar energy, like both photovoltaics and photosynthesis (1963, p. 438). Like the maximum power theorem, Odum's statement of the maximum power principle relies on the notion of 'matching', such that high-quality energy maximizes power by matching and amplifying energy (1994, pp. 262, 541): "in surviving designs a matching of high-quality energy with larger amounts of low-quality energy is likely to occur" (1994, p. 260). As with electronic circuits, the resultant rate of energy transformation will be at a maximum at an intermediate power efficiency. In 2006, T.T. Cai, C.L. Montague and J.S. Davis said that, "The maximum power principle is a potential guide to understanding the patterns and processes of ecosystem development and sustainability. The principle predicts the selective persistence of ecosystem designs that capture a previously untapped energy source." (2006, p. 317). In several texts H.T. Odum gave the Atwood machine as a practical example of the 'principle' of maximum power.
Mathematical definition
The mathematical definition given by H.T. Odum is formally analogous to the definition provided on the maximum power theorem article. (For a brief explanation of Odum's approach to the relationship between ecology and electronics see Ecological Analog of Ohm's Law)
Contemporary ideas
Whether or not the principle of maximum power efficiency can be considered the fourth law of thermodynamics and the fourth principle of energetics is moot. Nevertheless, H.T. Odum also proposed a corollary of maximum power as the organisational principle of evolution, describing the evolution of microbiological systems, economic systems, planetary systems, and astrophysical systems. He called this corollary the maximum empower principle. This was suggested because, as S.E. Jorgensen, M.T. Brown, H.T. Odum (2004) note,
C. Giannantoni may have confused matters when he wrote "The "Maximum Em-Power Principle" (Lotka–Odum) is generally considered the "Fourth Thermodynamic Principle" (mainly) because of its practical validity for a very wide class of physical and biological systems" (C. Giannantoni 2002, § 13, p. 155). Nevertheless, Giannantoni has proposed the Maximum Em-Power Principle as the fourth principle of thermodynamics (Giannantoni 2006).
The preceding discussion is incomplete, however. The "maximum power" result was discovered several times independently in physics and engineering; see Novikov (1957), El-Wakil (1962), and Curzon and Ahlborn (1975). Gyftopoulos (2002) argued that this analysis, and the design and evolution conclusions drawn from it, are incorrect.
See also
Maximum power theorem
Maximum entropy thermodynamics
Entropy production
Exergy efficiency
Energy conversion efficiency
Energy rate density
Exergy
Jeremy England
Free energy
Emergy
Systems ecology
Ecological economics
References
T.T. Cai, C.L. Montague and J.S. Davis (2006) 'The maximum power principle: An empirical investigation', Ecological Modelling, Volume 190, Issues 3–4, Pages 317–335
G.Q. Chen (2006) 'Scarcity of exergy and ecological evaluation based on embodied exergy', Communications in Nonlinear Science and Numerical Simulation, Volume 11, Issue 4, July, Pages 531–552.
R.Costanza, J.H.Cumberland, H.E.Daly, R.Goodland and R.B.Norgaard (1997) An Introduction to Ecological Economics, CRC Press – St. Lucie Press, First Edition.
F.L.Curzon and B.Ahlborn (1975) 'Efficiency of a Carnot engine at maximum power output', Am J Phys, 43, pp. 22–24.
C.Giannantoni (2002) The Maximum Em-Power Principle as the basis for Thermodynamics of Quality, Servizi Grafici Editoriali, Padova.
C.Giannantoni (2006) Mathematics for generative processes: Living and non-living systems, Journal of Computational and Applied Mathematics, Volume 189, Issue 1–2, Pages 324–340.
M.W.Gilliland ed. (1978) Energy Analysis: A New Public Policy Tool, AAA Selected Symposia Series, Westview Press, Boulder, Colorado.
C.A.S.Hall (1995) Maximum Power: The ideas and applications of H.T.Odum, Colorado University Press.
C.A.S.Hall (2004) 'The continuing importance of maximum power', Ecological Modelling, Volume 178, Issue 1–2, 15, Pages 107–113
H.W. Jackson (1959) Introduction to Electronic Circuits, Prentice–Hall.
S.E.Jorgensen, M.T.Brown, H.T.Odum (2004) 'Energy hierarchy and transformity in the universe', Ecological Modelling, 178, pp. 17–28
A.L.Lehninger (1973) Bioenergetics, W.A. Benjamin inc.
A.J.Lotka (1922a) 'Contribution to the energetics of evolution' [PDF]. Proc Natl Acad Sci, 8: pp. 147–51.
A.J.Lotka (1922b) 'Natural selection as a physical principle' [PDF]. Proc Natl Acad Sci, 8, pp 151–4.
H.T.Odum (1963) 'Limits of remote ecosystems containing man', The American Biology Teacher, Volume 25, No. 6, pp. 429–443.
H.T.Odum (1970) Energy Values of Water Sources. in 19th Southern Water Resources and Pollution Control Conference.
H.T.Odum (1978) 'Energy Quality and the Environment', in M.W.Gilliland ed. (1978) Energy Analysis: A New Public Policy Tool, AAA Selected Symposia Series, Westview Press, Boulder, Colorado.
H.T.Odum (1994) Ecological and General Systems: An Introduction to Systems Ecology, Colorado University Press.
H.T.Odum (1995) 'Self-Organization and Maximum Empower', in C.A.S.Hall (ed.) Maximum Power: The Ideas and Applications of H.T.Odum, Colorado University Press, Colorado.
H.T.Odum and R.C.Pinkerton (1955) 'Time's speed regulator: The optimum efficiency for maximum output in physical and biological systems ', Am. Sci., 43 pp. 331–343.
H.T.Odum and M.T.Brown (2007) Environment, Power and Society for the Twenty-First Century: The Hierarchy of Energy, Columbia University Press.
M.Tribus (1961) § 16.11 'Generalized Treatment of Linear Systems Used for Power Production', Thermostatics and Thermodynamics, Van Nostrand, University Series in Basic Engineering, p. 619.
Novikov I. I., (1958). The efficiency of atomic power stations. J. Nuclear Energy II, Vol. 7, pp. 125–128; translated from Atomnaya Energia, Vol. 3, (1957), No. 11, p. 409
El-Wakil, M. M. (1962) Nuclear Power Engineering, McGraw-Hill, New York, pp. 162–165.
Curzon F. L., Ahlborn B., (1975) Efficiency of a Carnot engine at maximum power, American Journal of Physics, Vol. 43, pp. 22–24.
Gyftopoulos E. P., (2002). On the Curzon-Ahlborn efficiency and its lack of connection to power producing processes, Energy Conversion and Management, Vol. 43, pp. 609–615.
Energy
Thermodynamics
Principles | Maximum power principle | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,957 | [
"Physical quantities",
"Energy (physics)",
"Energy",
"Thermodynamics",
"Dynamical systems"
] |
3,596,006 | https://en.wikipedia.org/wiki/Schoof%27s%20algorithm | Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields. The algorithm has applications in elliptic curve cryptography where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve.
The algorithm was published by René Schoof in 1985 and it was a theoretical breakthrough, as it was the first deterministic polynomial time algorithm for counting points on elliptic curves. Before Schoof's algorithm, approaches to counting points on elliptic curves such as the naive and baby-step giant-step algorithms were, for the most part, tedious and had an exponential running time.
This article explains Schoof's approach, laying emphasis on the mathematical ideas underlying the structure of the algorithm.
Introduction
Let E be an elliptic curve defined over the finite field F_q, where q = p^n for a prime p and an integer n ≥ 1. Over a field of characteristic different from 2 and 3 an elliptic curve can be given by a (short) Weierstrass equation
y^2 = x^3 + Ax + B
with A, B ∈ F_q. The set of points defined over F_q consists of the solutions (a, b) in F_q × F_q satisfying the curve equation and a point at infinity O. Using the group law on elliptic curves restricted to this set one can see that this set forms an abelian group, with O acting as the zero element.
In order to count points on an elliptic curve, we compute the cardinality of E(F_q).
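For very small fields this cardinality can be found by brute force, which makes the goal of the algorithm concrete. The following Python sketch (an illustration only; its running time is exponential in log p, which is exactly what Schoof's algorithm avoids) counts the points of y^2 = x^3 + Ax + B over a prime field F_p.

```python
def count_points_naive(a: int, b: int, p: int) -> int:
    """Count points on y^2 = x^3 + a*x + b over F_p (p an odd prime),
    including the point at infinity, by direct enumeration."""
    # Tabulate how many y in F_p square to each residue.
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += squares.get(rhs, 0)
    return count

# Example: the curve y^2 = x^3 + 2x + 1 over F_13 (an arbitrary small example).
print(count_points_naive(2, 1, 13))
```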
Schoof's approach to computing the cardinality makes use of Hasse's theorem on elliptic curves along with the Chinese remainder theorem and division polynomials.
Hasse's theorem
Hasse's theorem states that if E is an elliptic curve over the finite field F_q, then #E(F_q) satisfies
|q + 1 − #E(F_q)| ≤ 2√q.
This powerful result, given by Hasse in 1934, simplifies our problem by narrowing down #E(F_q) to a finite (albeit large) set of possibilities. Defining t to be q + 1 − #E(F_q), and making use of this result, we now have that computing the value of t modulo N where N > 4√q, is sufficient for determining t, and thus #E(F_q). While there is no efficient way to compute t modulo N directly for general N, it is possible to compute t modulo small primes rather efficiently. We choose S = {l_1, l_2, ..., l_r} to be a set of distinct primes such that their product N = l_1 l_2 ... l_r > 4√q. Given t mod l for all l in S, the Chinese remainder theorem allows us to compute t modulo N.
In order to compute t mod l for a prime l ≠ p, we make use of the theory of the Frobenius endomorphism and division polynomials. Note that considering primes l ≠ p is no loss since we can always pick a bigger prime to take its place to ensure the product is big enough. In any case Schoof's algorithm is most frequently used in addressing the case q = p since there are more efficient, so called p-adic algorithms for small-characteristic fields.
The Frobenius endomorphism
Given the elliptic curve E defined over F_q we consider points on E over the algebraic closure of F_q; i.e. we allow points with coordinates in the algebraic closure. The Frobenius endomorphism of the algebraic closure over F_q extends to the elliptic curve by φ(x, y) = (x^q, y^q).
This map is the identity on E(F_q) and one can extend it to the point at infinity O, making it a group morphism from E (over the algebraic closure) to itself.
The Frobenius endomorphism satisfies a quadratic polynomial which is linked to the cardinality of E(F_q) by the following theorem:
Theorem: The Frobenius endomorphism given by φ(x, y) = (x^q, y^q) satisfies the characteristic equation
φ^2 − tφ + q = 0,
where t = q + 1 − #E(F_q).
Thus we have for all P = (x, y) on E that (x^(q^2), y^(q^2)) + q(x, y) = t(x^q, y^q), where + denotes addition on the elliptic curve and q(x, y) and t(x^q, y^q)
denote scalar multiplication of (x, y) by q and of (x^q, y^q) by t.
One could try to symbolically compute these points , and as functions in the coordinate ring of
and then search for a value of which satisfies the equation. However, the degrees get very large and this approach is impractical.
Schoof's idea was to carry out this computation restricted to points of order for various small primes .
Fixing an odd prime , we now move on to solving the problem of determining , defined as , for a given prime .
If a point is in the -torsion subgroup , then where is the unique integer such that and .
Note that and that for any integer we have . Thus will have the same order as . Thus for belonging to , we also have if . Hence we have reduced our problem to solving the equation
where and have integer values in .
Computation modulo primes
The th division polynomial is such that its roots are precisely the coordinates of points of order . Thus, to restrict the computation of to the -torsion points means computing these expressions as functions in the coordinate ring of and modulo the th division polynomial. I.e. we are working in . This means in particular that the degree of and defined via is at most 1 in and at most
in .
The scalar multiplication can be done either by double-and-add methods or by using the th division polynomial. The latter approach gives:
where is the th division polynomial. Note that
is a function in only and denote it by .
We must split the problem into two cases: the case in which , and the case in which . Note that these equalities are checked modulo .
Case 1:
By using the addition formula for the group we obtain:
Note that this computation fails in case the assumption of inequality was wrong.
We are now able to use the -coordinate to narrow down the choice of to two possibilities, namely the positive and negative case. Using the -coordinate one later determines which of the two cases holds.
We first show that is a function in alone. Consider .
Since is even, by replacing by , we rewrite the expression as
and have that
Now if for one then satisfies
for all -torsion points .
As mentioned earlier, using and we are now able to determine which of the two values of ( or ) works. This gives the value of . Schoof's algorithm stores the values of in a variable for each prime considered.
Case 2:
We begin with the assumption that . Since is an odd prime it cannot be that and thus . The characteristic equation yields that . And consequently that .
This implies that is a square modulo . Let . Compute in and check whether . If so, is depending on the y-coordinate.
If turns out not to be a square modulo or if the equation does not hold for any of and , our assumption that is false, thus . The characteristic equation gives .
Additional case
If you recall, our initial considerations omit the case of l = 2.
Since we assume q to be odd, t ≡ #E(F_q) (mod 2), and in particular, t ≡ 0 (mod 2) if and only if E(F_q) has an element of order 2. By definition of addition in the group, any element of order 2 must be of the form (x_0, 0). Thus t ≡ 0 (mod 2) if and only if the polynomial x^3 + Ax + B has a root in F_q, if and only if gcd(x^q − x, x^3 + Ax + B) ≠ 1.
The algorithm
Input:
1. An elliptic curve .
2. An integer for a finite field with .
Output:
The number of points of over .
Choose a set of odd primes not containing
.
All computations in the loop below are performed
else
else if is a square modulo then
else
else
Use the Chinese Remainder Theorem to compute modulo
from the equations , where .
Output .
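The overall structure of the algorithm can be summarized in the following Python sketch. It is an outline only: the per-prime computation of t mod l, which contains all of the real work with division polynomials described above, is left as an assumed callback trace_mod_l, and the names are illustrative rather than taken from any particular implementation.

```python
import math

def schoof_skeleton(a, b, q, trace_mod_l):
    """Outline of Schoof's algorithm: combine t mod l over enough small odd
    primes l (product > 4*sqrt(q)) and return #E(F_q) = q + 1 - t.
    `trace_mod_l(a, b, q, l)` is assumed to implement the per-prime
    computation using division polynomials (not shown here)."""
    primes, product = [], 1
    seen_primes = []                    # all odd primes found so far (trial division)
    candidate = 3
    while product <= 4 * math.isqrt(q) + 4:
        if all(candidate % p for p in seen_primes):   # candidate is prime
            seen_primes.append(candidate)
            if q % candidate:                         # exclude the field characteristic
                primes.append(candidate)
                product *= candidate
        candidate += 2
    residues = [trace_mod_l(a, b, q, l) for l in primes]
    # Chinese remainder theorem, then lift t into the Hasse interval |t| <= 2*sqrt(q).
    t = 0
    for r, l in zip(residues, primes):
        nl = product // l
        t = (t + r * nl * pow(nl, -1, l)) % product   # pow(..., -1, l): modular inverse
    if t > product // 2:
        t -= product
    return q + 1 - t
```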
Complexity
Most of the computation is taken by the evaluation of φ(P) and φ^2(P), for each prime l, that is computing x^q, y^q, x^(q^2), y^(q^2) for each prime l. This involves exponentiation in the ring R = F_q[x, y]/(y^2 − x^3 − Ax − B, ψ_l) and requires O(log q) multiplications. Since the degree of the division polynomial ψ_l is (l^2 − 1)/2, each element in the ring is a polynomial of degree O(l^2). By the prime number theorem, there are around O(log q) primes of size O(log q), giving that l is O(log q) and we obtain that O(l^2) = O(log^2 q). Thus each multiplication in the ring requires O(log^4 q) multiplications in F_q, each of which in turn requires O(log^2 q) bit operations. In total, the number of bit operations for each prime l is O(log^7 q). Given that this computation needs to be carried out for each of the O(log q) primes, the total complexity of Schoof's algorithm turns out to be O(log^8 q). Using fast polynomial and integer arithmetic reduces this to roughly O(log^5 q) (ignoring lower-order logarithmic factors).
Improvements to Schoof's algorithm
In the 1990s, Noam Elkies, followed by A. O. L. Atkin, devised improvements to Schoof's basic algorithm by restricting the set of primes considered before to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime l is called an Elkies prime if the characteristic equation x^2 − tx + q ≡ 0 splits over F_l, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm. The first problem to address is to determine whether a given prime is Elkies or Atkin. In order to do so, we make use of modular polynomials, which come from the study of modular forms and an interpretation of elliptic curves over the complex numbers as lattices. Once we have determined which case we are in, instead of using division polynomials, we are able to work with a polynomial that has lower degree than the corresponding division polynomial: degree (l − 1)/2 rather than (l^2 − 1)/2. For efficient implementation, probabilistic root-finding algorithms are used, which makes this a Las Vegas algorithm rather than a deterministic algorithm.
Under the heuristic assumption that approximately half of the primes up to an O(log q) bound are Elkies primes, this yields an algorithm that is more efficient than Schoof's, with an expected running time of O(log^6 q) using naive arithmetic, and roughly O(log^4 q) (ignoring lower-order logarithmic factors) using fast arithmetic. Although this heuristic assumption is known to hold for most elliptic curves, it is not known to hold in every case, even under the GRH.
Implementations
Several algorithms were implemented in C++ by Mike Scott and are available with source code. The implementations are free (no terms, no conditions), and make use of the MIRACL library which is distributed under the AGPLv3.
Schoof's algorithm implementation for with prime .
Schoof's algorithm implementation for .
See also
Elliptic curve cryptography
Counting points on elliptic curves
Division Polynomials
Frobenius endomorphism
References
R. Schoof: Elliptic Curves over Finite Fields and the Computation of Square Roots mod p. Math. Comp., 44(170):483–494, 1985. Available at http://www.mat.uniroma2.it/~schoof/ctpts.pdf
R. Schoof: Counting Points on Elliptic Curves over Finite Fields. J. Theor. Nombres Bordeaux 7:219–254, 1995. Available at http://www.mat.uniroma2.it/~schoof/ctg.pdf
G. Musiker: Schoof's Algorithm for Counting Points on . Available at http://www.math.umn.edu/~musiker/schoof.pdf
V. Müller : Die Berechnung der Punktanzahl von elliptischen kurven über endlichen Primkörpern. Master's Thesis. Universität des Saarlandes, Saarbrücken, 1991. Available at http://lecturer.ukdw.ac.id/vmueller/publications.php
A. Enge: Elliptic Curves and their Applications to Cryptography: An Introduction. Kluwer Academic Publishers, Dordrecht, 1999.
L. C. Washington: Elliptic Curves: Number Theory and Cryptography. Chapman & Hall/CRC, New York, 2003.
N. Koblitz: A Course in Number Theory and Cryptography, Graduate Texts in Math. No. 114, Springer-Verlag, 1987. Second edition, 1994
Asymmetric-key algorithms
Elliptic curve cryptography
Elliptic curves
Group theory
Finite fields
Number theory | Schoof's algorithm | [
"Mathematics"
] | 2,371 | [
"Group theory",
"Fields of abstract algebra",
"Discrete mathematics",
"Number theory"
] |
3,596,468 | https://en.wikipedia.org/wiki/Einzel%20lens | An einzel lens (from – single lens), or unipotential lens, is a charged particle electrostatic lens that focuses without changing the energy of the beam. It consists of three or more sets of cylindrical or rectangular apertures or tubes in series along an axis. It is used in ion optics to focus ions in flight, which is accomplished through manipulation of the electric field in the path of the ions.
The electrostatic potential in the lens is symmetric, so the ions will regain their initial energy on exiting the lens, although the velocity of the outer particles will be altered such that they converge on to the axis. This causes the outer particles to arrive at the focus intersection slightly later than the ones that travel along a straight path, as they have to travel an extra distance.
Theory
The equation for the change in radial velocity for a particle as it passes between any pair of cylinders in the lens is:
with z axis passing through the middle of the lens, and r being the direction normal to z. If the lens is constructed with cylindrical electrodes, the field is symmetrical around z. is the magnitude of the electric field in the radial direction for a particle at a particular radial distance and distance across the gap, is the mass of the particle passing through the field, is the velocity of the particle and q is the charge of the particle. The integral occurs over the gap between the plates. This is also the interval where the lensing occurs.
The pair of plates is also called an electrostatic immersion lens, thus an einzel lens can be described as two or more electrostatic immersion lenses. Solving the equation above twice to find the change in radial velocity for each pair of plates can be used to calculate the focal length of the lens.
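A rough numerical illustration of this kind of calculation is sketched below. It assumes a constant axial velocity through the gap and an invented bell-shaped radial field profile, so it is not the article's exact formula or a real lens solution; it only shows how integrating q E_r/(m v_z) over the gap yields the radial velocity change.

```python
# Minimal sketch: estimate the radial velocity kick on an ion crossing one
# gap of an einzel lens, assuming constant axial velocity and an assumed
# (purely illustrative) bell-shaped radial field profile E_r(z).
import math

Q = 1.602e-19      # ion charge, C (singly charged)
M = 6.64e-26       # ion mass, kg (roughly an argon ion; illustrative)
V_Z = 1.0e5        # axial velocity, m/s (assumed constant through the gap)

def e_radial(z: float) -> float:
    """Assumed radial field profile across the gap (V/m); not a real lens solution."""
    return 2.0e4 * math.exp(-(z / 2.0e-3) ** 2)

def delta_v_r(z_start: float, z_end: float, steps: int = 1000) -> float:
    """Trapezoidal integration of q*E_r/(m*v_z) dz over the gap."""
    dz = (z_end - z_start) / steps
    total = 0.0
    for i in range(steps + 1):
        z = z_start + i * dz
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * e_radial(z)
    return Q * total * dz / (M * V_Z)

print(f"radial velocity change ~ {delta_v_r(-5e-3, 5e-3):.1f} m/s")
```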
Application to television tubes
The einzel lens principle in a simplified form was also used as a focusing mechanism in display and television cathode ray tubes, and has the advantage of providing a good sharply focused spot throughout the useful life of the tube's electron gun, with minimal or no readjustment needed (many monochrome TVs did not have or need focus controls), although in high-resolution monochrome displays and all colour CRT displays a (technician-adjustable) focus potentiometer control is provided.
See also
Time of flight
Mass spectrometry
Cathode ray tube
Wehnelt cylinder
References
Electrostatics
Mass spectrometry | Einzel lens | [
"Physics",
"Chemistry"
] | 485 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
3,597,775 | https://en.wikipedia.org/wiki/Energy%20principles%20in%20structural%20mechanics | Energy principles in structural mechanics express the relationships between stresses, strains or deformations, displacements, material properties, and external effects in the form of energy or work done by internal and external forces. Since energy is a scalar quantity, these relationships provide convenient and alternative means for formulating the governing equations of deformable bodies in solid mechanics. They can also be used for obtaining approximate solutions of fairly complex systems, bypassing the difficult task of solving the set of governing partial differential equations.
General principles
Virtual work principle
Principle of virtual displacements
Principle of virtual forces
Unit dummy force method
Modified variational principles
Elastic systems
Minimum total potential energy principle
Principle of stationary total complementary potential energy
Castigliano's first theorem (for forces)
Linear elastic systems
Castigliano's second theorem (for displacements)
Betti's reciprocal theorem
Müller-Breslau's principle
Applications
Governing equations by variational principles
Approximate solution methods
Finite element method in structural mechanics
Bibliography
Charlton, T.M.; Energy Principles in Theory of Structures, Oxford University Press, 1973.
Dym, C. L. and I. H. Shames; Solid Mechanics: A Variational Approach, McGraw-Hill, 1973.
Hu, H. Variational Principles of Theory of Elasticity With Applications; Taylor & Francis, 1984.
Langhaar, H. L.; Energy Methods in Applied Mechanics, Krieger, 1989.
Moiseiwitsch, B. L.; Variational Principles, John Wiley and Sons, 1966.
Mura, T.; Variational Methods in Mechanics, Oxford University Press, 1992.
Reddy, J.N.; Energy Principles and Variational Methods in Applied Mechanics, John Wiley, 2002.
Shames, I. H. and Dym, C. L.; Energy and Finite Element Methods in Structural Mechanics, Taylor & Francis, 1995,
Tauchert, T.R.; Energy Principles in Structural Mechanics, McGraw-Hill, 1974.
Washizu, K.; Variational Methods in Elasticity and Plasticity, Pergamon Pr, 1982.
Wunderlich, W.; Mechanics of Structures: Variational and Computational Methods, CRC, 2002.
Structural analysis
Calculus of variations | Energy principles in structural mechanics | [
"Engineering"
] | 451 | [
"Structural engineering",
"Structural analysis",
"Mechanical engineering",
"Aerospace engineering"
] |
3,597,879 | https://en.wikipedia.org/wiki/Inherent%20safety | In the chemical and process industries, a process has inherent safety if it has a low level of danger even if things go wrong. Inherent safety contrasts with other processes where a high degree of hazard is controlled by protective systems. As perfect safety cannot be achieved, common practice is to talk about inherently safer design.
“An inherently safer design is one that avoids hazards instead of controlling them, particularly by reducing the amount of hazardous material and the number of hazardous operations in the plant.”
Origins
The concept of reducing rather than controlling hazards stems from British chemical engineer Trevor Kletz's 1978 paper "What You Don’t Have, Can’t Leak" on lessons from the Flixborough disaster, and the expression "inherent safety" from a book that was an expanded version of the article. A greatly revised and retitled 1991 version mentioned the techniques which are generally quoted. (Kletz originally used the term intrinsically safe in 1978, but as this had already been used for the special case of electronic equipment in potentially flammable atmospheres, only the term inherent was adopted. Intrinsic safety may be considered a special subset of inherent safety). In 2010 the American Institute of Chemical Engineers published its own definition of inherently safer technology (IST).
Principles
The terminology of inherent safety has developed since 1991, with some slightly different words but the same intentions as Kletz. The four main methods for achieving inherently safer design are:
Minimize: Reducing the amount of hazardous material present at any one time, e.g. by using smaller batches.
Substitute: Replacing one material with another of less hazard, e.g. cleaning with water and detergent rather than a flammable solvent
Moderate: Reducing the strength of an effect, e.g. having a cold liquid instead of a gas at high pressure, or using material in a dilute rather than concentrated form
Simplify: Eliminating problems by design rather than adding additional equipment or features to deal with them. Only fitting options and using complex procedures if they are really necessary.
Two further principles are used by some:
Error tolerance: Equipment and processes can be designed to be capable of withstanding possible faults or deviations from design. A very simple example is making piping and joints capable of withstanding the maximum possible pressure, if outlets are closed.
Limit effects by design, location or transportation of equipment so that the worst possible condition produces less danger, e.g. gravity will take a leak to a safe place, the use of bunds.
In terms of making plants more user-friendly Kletz added the following:
Avoiding knock-on effects;
Making incorrect assembly impossible;
Making status clear;
Ease of control;
Software and management procedures.
The opportunity to adopt an inherently safer design is ideal at the research and conceptual design stages; such opportunity decreases and the project cost increases if changes are made during the subsequent design stages. Once a conceptual design is completed, the other safety strategies should be applied along with the inherently safer design concept. However, in this case, the project cost would significantly increase to have the same risk level at the same reliability relative to if ISD (inherently safer design) was adopted during the conceptual design stage.
Official status
Inherent safety has been recognised as a desirable principle by a number of national authorities, including the US Nuclear Regulatory Commission and the UK Health and Safety Executive (HSE). In assessing COMAH (Control of Major Accident Hazards Regulations) sites the HSE states “Major accident hazards should be avoided or reduced at source through the application of principles of inherent safety”. The European Commission in its Guidance Document on the Seveso II Directive states “Hazards should be possibly avoided or reduced at source through the application of inherently safe practices.”
In California, Contra Costa County requires chemical plants and petroleum refineries to implement inherent safety reviews and make changes based on these reviews. After a 2008 methyl isocyanate explosion at the Bayer CropScience chemical production plant in Institute, West Virginia, the US Chemical Safety Board commissioned a study by the National Academy of Sciences (NAS) how the concept of “Inherent Safety” could be applied, published in a report and video in 2012.
After the Bhopal disaster in 1984, the US state of New Jersey adopted the Toxic Catastrophe Prevention Act (TCPA) in 1985. In 2003 its rules were revised to include inherently safer technologies (IST). In 2005, the New Jersey Domestic Security Preparedness Task Force established a new "Best Practices Standards" program, in which it required chemical facilities to conduct inherently safer technologies (IST) reviews. In 2008, the TCPA program was expanded to require all TCPA facilities to conduct IST reviews on both new and existing processes. The State of New Jersey created its own definition of IST for regulatory purposes and stretched the definition of IST to include passive, active, and procedural controls.
Under Executive Order 13650 the U.S. Environmental Protection Agency (EPA) has been considering a proposal to "nationalize" the New Jersey inherently safer technologies program, inviting comments until the end of October 2014. The American Chemistry Council lists disadvantages.
Quantification
The Dow Fire and Explosion Index is essentially a measure of inherent danger and is the most widely used quantification of inherent safety. A more specific index of inherently safe design has been proposed by Heikkilä, and variations of this have been published. However, all of these are much more complex than the Dow F & E Index.
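The way such an index is typically assembled can be sketched as follows (illustrative only; the factor values are invented, and actual Dow F&EI assessments take the material factor and hazard penalties from the published guide). The index is commonly described as the product of a material factor with the process unit hazards factor, itself the product of the general and special process hazards factors.

```python
def dow_fire_explosion_index(material_factor: float,
                             general_hazards: float,
                             special_hazards: float) -> float:
    """Illustrative combination used in the Dow Fire & Explosion Index:
    F&EI = MF * F3, where the process unit hazards factor F3 = F1 * F2."""
    f3 = general_hazards * special_hazards
    return material_factor * f3

# Invented numbers for illustration only (real assessments take MF, F1 and F2
# from the Dow guide's tables and penalty checklists).
fei = dow_fire_explosion_index(material_factor=21, general_hazards=1.8,
                               special_hazards=2.5)
print(f"F&EI = {fei:.0f}")
```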
See also
Fail-safe
Generation IV reactor
Intrinsic safety
Passive nuclear safety
Prevention through design
Notes and references
Further reading
Kletz, Trevor (1998) Process Plants: A Handbook for Inherently Safer Design CRC
Dow's Fire & Explosion Index Hazard Classification Guide, 7th Edition (1994) American Institute of Chemical Engineers (AIChE)
Center for Chemical Process Safety (2009) Inherently Safer Chemical Processes: A Life Cycle Approach 2nd edn Wiley
Howat, C. S. (2002) Introduction to Inherently Safer Chemical Processes
Mansfield, D., Poulter, L., & Kletz, T., (1996) Improving Inherent Safety HMSO
Mary Kay O’Connor Process Safety Center (2002) Challenges in Implementing Inherent Safety Principles in New and Existing Chemical Processes
M. Gentile (2004) Development of a Hierarchical Fuzzy Model for the Evaluation of Inherent Safety
Safer Design Front Loading Safety in Design
Process safety
"Chemistry",
"Engineering"
] | 1,284 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
18,685,301 | https://en.wikipedia.org/wiki/Sediment%20basin | A sediment basin is a temporary pond built on a construction site to capture eroded or disturbed soil that is washed off during rain storms, and protect the water quality of a nearby stream, river, lake, or bay. The sediment-laden soil settles in the pond before the runoff is discharged. Sediment basins are typically used on construction sites of or more, where there is sufficient room. They are often used in conjunction with erosion controls and other sediment control practices. On smaller construction sites, where a basin is not practical, sediment traps may be used.
On some construction projects, the sediment basin is cleaned out after the soil disturbance (earth-moving) phase of the project, and modified to function as a permanent stormwater management system for the completed site, either as a detention basin or a retention basin.
Sediment trap
A sediment trap is a temporary settling basin installed on a construction site to capture eroded or disturbed soil that is washed off during rain storms, and protect the water quality of a nearby stream, river, lake, or bay. The trap is basically an embankment built along a waterway or low-lying area on the site. They are typically installed at the perimeter of a site and above storm drain inlets, to keep sediment from entering the drainage system. Sediment traps are commonly used on small construction sites, where a sediment basin is not practical. Sediment basins are typically used on construction sites of or more, where there is sufficient room.
Sediment traps are installed before land disturbance (earth moving, grading) begins on a construction site. The traps are often used in conjunction with erosion controls and other sediment control practices.
See also
Erosion control
Sediment control
Sediment transport
Silt fence
References
External links
Erosion Control - a trade magazine for the erosion control and construction industries
International Erosion Control Association - Professional Association, Publications, Training
"Developing Your Stormwater Pollution Prevention Plan: A Guide for Construction Sites." - U.S. EPA
Environmental soil science
Excavations
Ponds
Stormwater management
Water treatment | Sediment basin | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 410 | [
"Water treatment",
"Stormwater management",
"Water pollution",
"Environmental engineering",
"Water technology",
"Environmental soil science"
] |
18,688,564 | https://en.wikipedia.org/wiki/Rs1801133 | C677T or rs1801133 is a genetic variation—a single nucleotide polymorphism (SNP)—in the MTHFR gene.
Among Americans the frequency of T-homozygosity ranges from 1% or less among people of sub-Saharan African descent to 20% or more among Italians and Hispanics.
It has been related to
schizophrenia
Alzheimer's disease
depression
autism
spina bifida.
In 2000 association studies on oral clefts, Down syndrome, and fetal anticonvulsant syndrome were either unreplicated or had yielded conflicting results.
Related genetic variants
A1298C is a SNP in the same gene.
Studies have investigated the combined effect of C677T and A1298C.
References
SNPs on chromosome 1 | Rs1801133 | [
"Biology"
] | 165 | [
"SNPs on chromosome 1",
"Single-nucleotide polymorphisms"
] |
18,689,198 | https://en.wikipedia.org/wiki/The%20Gene%20Revolution | The Gene Revolution: GM Crops and Unequal Development is a 2006 book by Professor Sakiko Fukuda-Parr.
While some people do not support genetic manipulation (GM), others view it as an important technological solution to limited agricultural output, increasing populations, and climate change. The book provides a detailed analysis of the debate about GM adoption in developing countries, which are dealing with poverty and trying to better compete in the global economy. Per the introduction, the book focuses on five countries' use of GM technology: Argentina, Brazil, China, India, and South Africa.
The Gene Revolution refers to a phase following the Green Revolution during which agricultural biotechnology was heavily implemented.
See also
Genetically modified food controversies
Gene Revolution
References
Environmental non-fiction books
2006 non-fiction books
2006 in the environment
Genetic engineering and agriculture
Technology development
Books about globalization
Routledge books | The Gene Revolution | [
"Engineering",
"Biology"
] | 171 | [
"Genetic engineering and agriculture",
"Genetic engineering"
] |
18,690,388 | https://en.wikipedia.org/wiki/Cutoff%20voltage | In electronics,the cut-off voltage is the voltage at which a battery is considered fully discharged, beyond which further discharge could cause harm. Some electronic devices, such as cell phones, will automatically shut down when the cut-off voltage has been reached.
Batteries
In batteries, the cut-off (final) voltage is the prescribed lower-limit voltage at which battery discharge is considered complete. The cut-off voltage is usually chosen so that the maximum useful capacity of the battery is achieved. The cut-off voltage differs from one battery to another and is highly dependent on the type of battery and the kind of service in which the battery is used. When testing the capacity of a NiMH or NiCd battery a cut-off voltage of 1.0 V per cell is normally used, whereas 0.9 V is normally used as the cut-off voltage of an alkaline cell. Devices that have too high a cut-off voltage may stop operating while the battery still has significant capacity remaining.
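A minimal sketch of how a device applies a cut-off voltage is shown below. It is purely illustrative: the per-cell thresholds are the test values quoted above, while the pack voltage reading and cell count are invented, and no real device API is implied.

```python
CUTOFF_PER_CELL = {"NiMH": 1.0, "NiCd": 1.0, "alkaline": 0.9}  # volts, as in the text

def should_shut_down(pack_voltage: float, chemistry: str, cells_in_series: int) -> bool:
    """True when the measured pack voltage is at or below the configured cut-off."""
    return pack_voltage <= CUTOFF_PER_CELL[chemistry] * cells_in_series

# Illustrative reading for a 4-cell NiMH pack (the 4.1 V figure is assumed).
if should_shut_down(pack_voltage=4.1, chemistry="NiMH", cells_in_series=4):
    print("At or below cut-off voltage: shut down to avoid over-discharging the cells.")
else:
    print("Voltage above cut-off: continue operating.")
```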
Voltage cut-off in portable electronics
Some portable equipment does not fully utilise the low-end voltage spectrum of a battery. The power to the equipment cuts off before a relatively large portion of the battery life has been used.
A high cut-off voltage is more widespread than perhaps assumed. For example, a certain brand of mobile phone that is powered with a single-cell lithium-ion battery cuts off at 3.3 V. The Li‑ion can be discharged to 3 V and lower; however, with a discharge to 3.3 V (at room temperature), about 92–98% of the capacity is used. Importantly, particularly in the case of lithium-ion batteries, which are used in the vast majority of portable electronics today, a voltage cut-off below 3.2 V can lead to chemical instability in the cell, with the result being a reduced battery lifetime. For this reason, electronics manufacturers tend to use higher cut-off voltages, removing the need for consumers to buy battery replacements before other failure mechanisms in a device take effect.
See also
Cut-off (electronics)
- includes cutoff voltages for various battery chemistries
References
External links
Samples of Low Voltage Cut-Off Relay Circuits
Effect of discharge cut off voltage on cycle life of MgNi-based electrode for rechargeable Ni-MH batteries
Electrical parameters | Cutoff voltage | [
"Engineering"
] | 480 | [
"Electrical engineering",
"Electrical parameters"
] |
18,690,547 | https://en.wikipedia.org/wiki/Megohmmeter | A megohmmeter, or insulation resistance tester, is a special type of ohmmeter used to measure the electrical resistance of insulators. Insulating components, for example cable jackets, must be tested for their insulation strength at the time of commissioning and as part of maintenance of high-voltage electrical equipment and installations.
For this purpose, megohmmeters, which can provide high DC voltages (typically in ranges from 500 V to 5 kV, some are up to 15 kV) at specified current capacity, are used. Acceptable insulator resistance values are typically 1 to 10 megohms, depending on the standards referenced.
Operation
The resistance to be measured is connected across the terminals, i.e. in series with the deflecting coil and across the generator. When current is supplied to the coils, they produce torques in opposite directions.
If the resistance to be measured is high, no current flows through the deflecting coil; the controlling coil therefore sets itself perpendicular to the magnetic axis and moves the pointer to infinity.
If the resistance to be measured is small, a high current flows through the deflecting coil and the resulting torque sets the pointer to zero.
For intermediate values of resistance, the pointer settles at a point between zero and infinity, depending on the torque produced.
The hand-driven generator is of the permanent-magnet type and is designed to generate from 500 to 2500 volts. The testing voltage is produced by rotation of the crank in a hand-operated megger, or by a battery in an electronic-type megger. For testing equipment rated up to 440 V, a test voltage of 550 V DC is sufficient. The current coil, or deflecting coil, is series-connected and allows the electric current to flow through the circuit being tested.
The control and deflecting coils each have a current-limiting resistor connected in series to protect them against damage in case of very low external resistance. The testing voltage is produced by electromagnetic induction in a hand-operated megger and by a battery in an electronic-type megger. The deflection of the pointer increases with increasing voltage across the external circuit and decreases with increasing current; that is, the resultant torque is directly related to voltage and inversely related to current. While the electrical circuit being tested is open, the torque due to the voltage coil is at its maximum and the pointer shows ‘infinity’, which means that no short is present and the resistance of the circuit being tested is at its maximum. In the case of a short circuit, the pointer shows ‘zero’, which indicates no resistance in the circuit being tested.
A Megger consists of an EMF source and voltmeter. The scale of the voltmeter is calibrated in ohms (kilo-ohms or megohms, as the case may be). In measurements, the EMF of the self-contained source must be equal to that of the source used in calibration.
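As a simple numerical sketch (illustrative values only, not from the sources above), the insulation resistance indicated by such an instrument is just the ratio of the applied test voltage to the measured leakage current:

```python
# Insulation resistance from test voltage and leakage current.
# The 500 V / 50 microampere figures below are assumed example values.

def insulation_resistance_megohm(test_voltage_v, leakage_current_a):
    """Return R = V / I expressed in megohms."""
    return test_voltage_v / leakage_current_a / 1e6

if __name__ == "__main__":
    r = insulation_resistance_megohm(500.0, 50e-6)
    print(f"{r:.0f} megohm")  # 10 megohm
```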
See also
Hipot
References
Measuring instruments | Megohmmeter | [
"Technology",
"Engineering"
] | 634 | [
"Measuring instruments"
] |
25,290,996 | https://en.wikipedia.org/wiki/Elapsed%20real%20time | In computing, elapsed real time, real time, wall-clock time, wall time, or walltime is the actual time taken from the start of a computer program to the end. In other words, it is the difference between the time at which a task finishes and the time at which the task started.
Wall time is thus different from CPU time, which measures only the time during which the processor is actively working on a certain task or process. The difference between the two can arise from architecture and run-time dependent factors, e.g. programmed delays or waiting for system resources to become available. Consider the example of a mathematical program that reports that it has used "CPU time 0m0.04s, Wall time 6m6.01s". This means that while the program was active for six minutes and six seconds, during that time the computer's processor spent only 4/100 of a second performing calculations for the program.
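A minimal sketch of the distinction (Python is used here purely for illustration): time.perf_counter() tracks elapsed wall-clock time, while time.process_time() counts only the CPU time consumed by the process, so a sleep adds to the former but not to the latter.

```python
import time

wall_start = time.perf_counter()  # wall-clock (elapsed real) time
cpu_start = time.process_time()   # CPU time of this process only

total = sum(i * i for i in range(1_000_000))  # does real CPU work
time.sleep(2)                                 # waits without using CPU

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

# Wall time includes the 2-second sleep; CPU time does not.
print(f"wall time: {wall_elapsed:.2f} s, CPU time: {cpu_elapsed:.2f} s")
```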
Conversely, programs running in parallel on more than one processing unit can spend CPU time many times beyond their elapsed time. Since in concurrent computing the definition of elapsed time is non-trivial, the conceptualization of the elapsed time as measured on a separate, independent wall clock is convenient.
Another definition of "wall time" is the measurement of time via a separate, independent clock as opposed to the local system time (internal), i.e. with regard to the difference between the two.
In simulation
The term wall-clock time has also found widespread adoption in computer simulation, to distinguish between (1) the (often compressed or expanded) simulation time, and (2) the time as it passes for the user of the simulation tool.
References
Computing terminology
Durations | Elapsed real time | [
"Physics",
"Technology"
] | 354 | [
"Temporal quantities",
"Computing terminology",
"Physical quantities",
"Time",
"Time stubs",
"Computing stubs",
"Spacetime",
"Durations"
] |
25,292,015 | https://en.wikipedia.org/wiki/Medical%20technology%20assessment | Medical technology assessment (MTA) is the objective evaluation of a medical technology regarding its safety and performance, its (future) impact on clinical and non-clinical patient outcomes as well as its interactive effects on economical, organizational, social, juridical and ethical aspects of healthcare. Medical technologies are assessed both in absolute terms and in comparison to other (combinations of) medical technologies, procedures, treatments or ‘doing-nothing’.
The aim of MTA is to provide objective, high-quality information that relevant stakeholders use for decision-making about for example development, pricing, market access and reimbursement of new medical technologies. As such, MTA is similar to health technology assessment (HTA), except that HTA has a wider scope and may include assessments of for example organizational or financial interventions.
The classical approach of MTA is to evaluate technologies after they enter the marketplace. Yet, a growing number of researchers and policy-makers argue that new technologies should be evaluated before they diffuse into routine clinical practice. MTA of biomedical innovations in a very early stage of development could improve health outcomes, minimise wrong investment and prevent social and ethical conflicts.
One particular method within the area of early MTA is constructive technology assessment (CTA). CTA is particularly appropriate for the early assessment of dynamic technologies that are implemented under uncertain circumstances. CTA is based on the idea that during the course of technology development, choices are constantly being made about the form, the function, and the use of that technology. Especially in early stages, technologies are not always stable, nor are its specifications and neither is its use, as both technology and environment will mutually influence each other. In recent years, CTA has developed from assessing the (clinical) impact of a new technology to a much broader approach, including the analysis of design, development, and implementation of that new technology.
In the Netherlands, the department Health Technology and Services Research (HTSR) of the University of Twente and the institute for Medical Technology Assessment (iMTA) of the Erasmus University Rotterdam perform early MTA and CTA in collaboration with technology users (patients, healthcare professionals), technology developers (academic and industrial), technology investors (venture capitalists, government, etc.), technology procurers (hospitals, patients, etc.) and decision-makers in healthcare (patients, policy-makers, etc.). By performing excellent scientific research that is valuable and relevant for society, HTSR and iMTA aim to support decisions about early development and implementation of health care technology in order to achieve high quality healthcare for individual patients. Examples of the research of HTSR include the early economic evaluation of neuromuscular electrical stimulation in the treatment of shoulder pain and early phase technology assessment of nanotechnology in oncology. Examples of the work of iMTA include the development of the widely used cost-effectiveness acceptability curves (CEACs), the introduction of the friction cost method, the valuation of informal care with the CarerQoL instrument and the estimation of indirect medical costs.
References
External links
Health Technology and Services Research – University of Twente (NL)
Eucomed
NICE Evaluation Pathway Programme for Medical Technologies
Multidisciplinary Assessment of Technology Centre for Healthcare
Canadian Agency for Drugs and Technologies in Health – Medical Devices and Health Systems projects
Health Technology Assessment International
International Society for Pharmacoeconomics and Outcomes Research
Medical technology | Medical technology assessment | [
"Biology"
] | 691 | [
"Medical technology"
] |
25,292,663 | https://en.wikipedia.org/wiki/Integer%20points%20in%20convex%20polyhedra | The study of integer points in convex polyhedra is motivated by questions such as "how many nonnegative integer-valued solutions does a system of linear equations with nonnegative coefficients have" or "how many solutions does an integer linear program have". Counting integer points in polyhedra or other questions about them arise in representation theory, commutative algebra, algebraic geometry, statistics, and computer science.
The set of integer points, or, more generally, the set of points of an affine lattice, in a polyhedron is called a Z-polyhedron, from the mathematical notation ℤ or Z for the set of integers.
Properties
For a lattice Λ, Minkowski's theorem relates the number d(Λ) (the volume of a fundamental parallelepiped of the lattice) and the volume of a given symmetric convex set S to the number of lattice points contained in S.
The number of lattice points contained in a polytope all of whose vertices are elements of the lattice is described by the polytope's Ehrhart polynomial. Formulas for some of the coefficients of this polynomial involve d(Λ) as well.
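As a brief worked example (a standard case, included here only for illustration): for the unit square with vertices (0,0), (1,0), (0,1) and (1,1) in the lattice ℤ², the Ehrhart polynomial is

L(t) = (t + 1)^2,

so the square dilated by t = 1, 2, 3 contains 4, 9 and 16 lattice points respectively; the leading coefficient equals the square's area and the constant term equals 1.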
Applications
Loop optimization
In certain approaches to loop optimization, the set of the executions of the loop body is viewed as the set of integer points in a polyhedron defined by loop constraints.
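A minimal sketch of this point of view (the loop nest below is hypothetical and chosen only for illustration):

```python
# Iteration domain of the loop nest
#   for i in 0..N-1:
#       for j in 0..i:
#           body(i, j)
# viewed as the set of integer points (i, j) satisfying the linear
# constraints 0 <= i <= N-1 and 0 <= j <= i.

def iteration_domain(N):
    """Enumerate the integer points of {(i, j) : 0 <= i <= N-1, 0 <= j <= i}."""
    return [(i, j) for i in range(N) for j in range(i + 1)]

if __name__ == "__main__":
    points = iteration_domain(4)
    print(points)       # [(0, 0), (1, 0), (1, 1), (2, 0), ...]
    print(len(points))  # 10 = number of loop-body executions
```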
See also
Convex lattice polytope
Pick's theorem
References and notes
Further reading
Lattice points
Linear algebra
Linear programming
Polytopes | Integer points in convex polyhedra | [
"Mathematics"
] | 294 | [
"Algebra",
"Linear algebra",
"Lattice points",
"Number theory"
] |
25,293,652 | https://en.wikipedia.org/wiki/Tufting%20%28composites%29 | In the field of composite materials, tufting is an experimental technology to locally reinforce continuous fibre-reinforced plastics along the z-direction, with the objective of enhancing the shear and delamination resistance of the structure.
It consists of inserting a thread through a layered dry fabric, using a needle that, after insertion, moves back along the same trajectory leaving a loop of the thread on the bottom of the structure. It is a technology developed for and used within the thermoset resin injection manufacturing route, however it is currently being debated whether pre-pregs can also be successfully tufted. Tufting is considered a more economical and flexible method compared to 3D weaving or 3D braiding to include z-fibres in laminated composites. It resembles stitching, but it is different in that tufting only requires access from one side of the preform. Depending on the equipment used, all shapes and forms may potentially be reinforced by tufting. The density of z-fibres inserted can vary according to the expected loading pattern. On the other hand, the increase of z-properties in the dry preform is comparably low because tufting comprises no force-fit. Consequently, before consolidation, tufted preforms are not easier to handle than unreinforced ones. In fact the loops can represent an added complexity for the resin infusion process as they can complicate the consolidation of the structure.
See also
Z-pinning
Plastics
References
External links
Cranfield University page with tufting unit description
Composite material fabrication techniques | Tufting (composites) | [
"Physics"
] | 313 | [
"Materials stubs",
"Materials",
"Matter"
] |
25,303,792 | https://en.wikipedia.org/wiki/Curvaton | The curvaton is a hypothetical elementary particle which mediates a scalar field in early universe cosmology. It can generate fluctuations during inflation, but does not itself drive inflation; instead, it generates curvature perturbations at late times after the inflaton field has decayed and the decay products have redshifted away, when the curvaton is the dominant component of the energy density. It is used to generate a flat spectrum of CMB perturbations in models of inflation where the potential is otherwise too steep or in alternatives to inflation like the pre-Big Bang scenario.
The model was proposed by three groups shortly after one another in 2001: Kari Enqvist and Martin S. Sloth (Sep, 2001), David Wands and David H. Lyth (Oct, 2001), Takeo Moroi and Tomo Takahashi (Oct, 2001).
See also
Expansion of the universe
Hubble's law
Big Bang
Cosmological constant
Inflaton
Cosmological perturbation theory
Structure formation
Kari Enqvist
David Wands
David H. Lyth
Notes
Physical cosmology
Inflation (cosmology)
Hypothetical elementary particles | Curvaton | [
"Physics",
"Astronomy"
] | 236 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Relativity stubs",
"Theory of relativity",
"Hypothetical elementary particles",
"Physics beyond the Standard Model",
"Physical cosmology"
] |
1,903,345 | https://en.wikipedia.org/wiki/Chladni%27s%20law | Chladni's law, named after Ernst Chladni, relates the frequency of modes of vibration for flat circular surfaces with fixed center as a function of the numbers m of diametric (linear) nodes and n of radial (circular) nodes. It is stated as the equation
f = C (m + 2n)^p,
where C and p are coefficients which depend on the properties of the plate.
For flat circular plates, p is roughly 2, but Chladni's law can also be used to describe the vibrations of cymbals, handbells, and church bells in which case p can vary from 1.4 to 2.4. In fact, p can even vary for a single object, depending on which family of modes is being examined.
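As a small illustrative calculation (the values C = 100 and p = 2 below are arbitrary assumptions, not fitted to any real plate or bell):

```python
# Chladni's law: f = C * (m + 2n)**p, with C and p fitted per plate or bell.
def chladni_frequency(m, n, C=100.0, p=2.0):
    """Frequency of the (m, n) mode for assumed coefficients C and p."""
    return C * (m + 2 * n) ** p

if __name__ == "__main__":
    for m, n in [(2, 0), (3, 0), (2, 1), (0, 2)]:
        print(f"mode (m={m}, n={n}): f = {chladni_frequency(m, n):.0f}")
```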
References
External links
A Study of Vibrating Plates by Derek Kverno and Jim Nolen (Archived 27 July 2011)
Waves
Quantum mechanics | Chladni's law | [
"Physics",
"Mathematics"
] | 174 | [
"Physical phenomena",
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Waves",
"Motion (physics)",
"Applied mathematics stubs"
] |
1,903,855 | https://en.wikipedia.org/wiki/Sensory%20substitution | Sensory substitution is a change of the characteristics of one sensory modality into stimuli of another sensory modality.
A sensory substitution system consists of three parts: a sensor, a coupling system, and a stimulator. The sensor records stimuli and gives them to a coupling system, which interprets these signals and transmits them to a stimulator. If the sensor obtains signals of a kind not originally available to the bearer, this is a case of sensory augmentation. Sensory substitution concerns human perception and the plasticity of the human brain, and therefore allows these aspects of neuroscience to be studied through neuroimaging.
Sensory substitution systems may help people by restoring their ability to perceive certain defective sensory modality by using sensory information from a functioning sensory modality.
History
The idea of sensory substitution was introduced in the 1980s by Paul Bach-y-Rita as a means of using one sensory modality, mainly tactition, to gain environmental information to be used by another sensory modality, mainly vision. Thereafter, the entire field was discussed by Chaim-Meyer Scheff in "Experimental model for the study of changes in the organization of human sensory information processing through the design and testing of non-invasive prosthetic devices for sensory impaired people". The first sensory substitution system was developed by Bach-y-Rita et al. as a means of brain plasticity in congenitally blind individuals. After this historic invention, sensory substitution has been the basis of many studies investigating perceptive and cognitive neuroscience. Sensory substitution is often employed to investigate predictions of the embodied cognition framework. Within the theoretical framework specifically the concept of sensorimotor contingencies is investigated utilizing sensory substitution. Furthermore, sensory substitution has contributed to the study of brain function, human cognition and rehabilitation.
Physiology
When a person becomes blind or deaf they generally do not lose the ability to hear or see; they simply lose their ability to transmit the sensory signals from the periphery (the retina for vision and the cochlea for hearing) to the brain. Since the vision processing pathways are still intact, a person who has lost the ability to retrieve data from the retina can still see subjective images by using data gathered from other sensory modalities such as touch or audition.
In a regular visual system, the data collected by the retina is converted into an electrical stimulus in the optic nerve and relayed to the brain, which re-creates the image and perceives it. Because it is the brain that is responsible for the final perception, sensory substitution is possible. During sensory substitution an intact sensory modality relays information to the visual perception areas of the brain so that the person can perceive sight. With sensory substitution, information gained from one sensory modality can reach brain structures physiologically related to other sensory modalities. Touch-to-visual sensory substitution transfers information from touch receptors to the visual cortex for interpretation and perception. For example, through fMRI, one can determine which parts of the brain are activated during sensory perception. In blind persons, it is seen that while they are only receiving tactile information, their visual cortex is also activated as they perceive sight objects. Touch-to-touch sensory substitution is also possible, wherein information from touch receptors of one region of the body can be used to perceive touch in another region. For example, in one experiment by Bach-y-Rita, touch perception was able to be restored in a patient who lost peripheral sensation due to leprosy.
Technological support
In order to achieve sensory substitution and stimulate the brain without intact sensory organs to relay the information, machines can be used to do the signal transduction, rather than the sensory organs. This brain–machine interface collects external signals and transforms them into electrical signals for the brain to interpret. Generally, a camera or a microphone is used to collect visual or auditory stimuli that are used to replace lost sight and hearing, respectively. The visual or auditory data collected from the sensors is transformed into tactile stimuli that are then relayed to the brain for visual and auditory perception. Crucially, this transformation sustains the sensorimotor contingency inherent to the respective sensory modality. This and all types of sensory substitution are only possible due to neuroplasticity.
Brain plasticity
Brain plasticity refers to the brain's ability to adapt to a changing environment, for instance to the absence or deterioration of a sense. It is conceivable that cortical remapping or reorganization in response to the loss of one sense may be an evolutionary mechanism that allows people to adapt and compensate by using other senses better. Brain imaging studies have shown that upon visual impairments and blindness (especially in the first 12–16 years of life) the visual cortices undergo a huge functional reorganization such that they are activated by other sensory modalities.
Such cross-modal plasticity was also found through functional imaging of congenitally blind patients which showed a cross-modal recruitment of the occipital cortex during perceptual tasks such as Braille reading, tactile perception, tactual object recognition, sound localization, and sound discrimination. This may suggest that blind people can use their occipital lobe, generally used for vision, to perceive objects through the use of other sensory modalities. This cross modal plasticity may explain the often described tendency of blind people to show enhanced ability in the other senses.
Perception versus sensing
While considering the physiological aspects of sensory substitution, it is essential to distinguish between sensing and perceiving. The general question posed by this differentiation is: Are blind people seeing or perceiving to see by putting together different sensory data? While sensation comes in one modality – visual, auditory, tactile etc. – perception due to sensory substitution is not one modality but a result of cross-modal interactions. It is therefore concluded that while sensory substitution for vision induces visual-like perception in sighted individuals, it induces auditory or tactile perception in blind individuals. In short, blind people perceive to see through touch and audition with sensory substitution.
Through experiments with a tactile-visual sensory substitution (TVSS) device developed by Bach-y-Rita, subjects described the perceptual experience of the TVSS as particularly visual, such that objects were perceived as if located in the external space and not on the back or skin. Further studies using the TVSS showed that such perceptual changes were only possible when the participants could actively explore their environment with the TVSS. These results have been underpinned by many other studies testing different substitution systems with blind subjects, such as vision-to-tactile substitution, vision-to-auditory substitution and vision-to-vestibular substitution. Such results have also been reported in blindfolded sighted subjects and deliver further support for the sensorimotor contingency theory.
Different applications
Applications are not restricted to disabled persons, but also include artistic presentations, games, and augmented reality. Some examples are substitution of visual stimuli to audio or tactile, and of audio stimuli to tactile. Some of the most popular are probably Paul Bach-y-Rita's Tactile Vision Sensory Substitution (TVSS), developed with Carter Collins at Smith-Kettlewell Institute and Peter Meijer's Seeing with Sound approach (The vOICe). Technical developments, such as miniaturization and electrical stimulation help the advance of sensory substitution devices.
In sensory substitution systems, we generally have sensors that collect the data from the external environment. This data is then relayed to a coupling system that interprets and transduces the information and then replays it to a stimulator. This stimulator ultimately stimulates a functioning sensory modality. After training, people learn to use the information gained from this stimulation to experience a perception of the sensation they lack instead of the actually stimulated sensation. For example, a leprosy patient, whose perception of peripheral touch was restored, was equipped with a glove containing artificial contact sensors coupled to skin sensory receptors on the forehead (which was stimulated). After training and acclimation, the patient was able to experience data from the glove as if it was originating in the fingertips while ignoring the sensations in the forehead.
Tactile systems
To understand tactile sensory substitution it is essential to understand some basic physiology of the tactile receptors of the skin. There are five basic types of tactile receptors: Pacinian corpuscle, Meissner's corpuscle, Ruffini endings, Merkel nerve endings, and free nerve endings. These receptors are mainly characterized by which type of stimuli best activates them, and by their rate of adaptation to sustained stimuli. Because of the rapid adaptation of some of these receptors to sustained stimuli, those receptors require rapidly changing tactile stimulation systems in order to be optimally activated. Among all these mechanoreceptors Pacinian corpuscle offers the highest sensitivity to high frequency vibration starting from a few tens of Hz to a few kHz with the help of its specialized mechanotransduction mechanism.
There have been two different types of stimulators: electrotactile or vibrotactile. Electrotactile stimulators use direct electrical stimulation of the nerve ending in the skin to initiate the action potentials; the sensation triggered, burn, itch, pain, pressure etc. depends on the stimulating voltage. Vibrotactile stimulators use pressure and the properties of the mechanoreceptors of the skin to initiate action potentials. There are advantages and disadvantages for both these stimulation systems. With the electrotactile stimulating systems a lot of factors affect the sensation triggered: stimulating voltage, current, waveform, electrode size, material, contact force, skin location, thickness and hydration. Electrotactile stimulation may involve the direct stimulation of the nerves (percutaneous), or through the skin (transcutaneous). Percutaneous application causes additional distress to the patient, and is a major disadvantage of this approach. Furthermore, stimulation of the skin without insertion leads to the need for high voltage stimulation because of the high impedance of the dry skin, unless the tongue is used as a receptor, which requires only about 3% as much voltage. This latter technique is undergoing clinical trials for various applications, and been approved for assistance to the blind in the UK. Alternatively, the roof of the mouth has been proposed as another area where low currents can be felt.
Electrostatic arrays are explored as human–computer interaction devices for touch screens. These are based on a phenomenon called electrovibration, which allows microampere-level currents to be felt as roughness on a surface.
Vibrotactile systems use the properties of mechanoreceptors in the skin so they have fewer parameters that need to be monitored as compared to electrotactile stimulation. However, vibrotactile stimulation systems need to account for the rapid adaptation of the tactile sense.
Another important aspect of tactile sensory substitution systems is the location of the tactile stimulation. Tactile receptors are abundant on the fingertips, face, and tongue while sparse on the back, legs and arms. It is essential to take into account the spatial resolution of the receptor as it has a major effect on the resolution of the sensory substitution. A high resolution pin-arrayed display is able to present spatial information via tactile symbols, such as city maps and obstacle maps.
Below you can find some descriptions of current tactile substitution systems.
Tactile–visual
One of the earliest and best-known forms of sensory substitution device was Paul Bach-y-Rita's TVSS, which converted the image from a video camera into a tactile image and coupled it to the tactile receptors on the back of his blind subject. Recently, several new systems have been developed that interface the tactile image to tactile receptors on different areas of the body, such as on the chest, brow, fingertip, abdomen, and forehead. The tactile image is produced by hundreds of activators placed on the person. The activators are solenoids of one millimeter diameter. In experiments, blind (or blindfolded) subjects equipped with the TVSS can learn to detect shapes and to orient themselves. In the case of simple geometric shapes, it took around 50 trials to achieve 100 percent correct recognition. To identify objects in different orientations requires several hours of learning.
A system using the tongue as the human–machine interface is most practical. The tongue–machine interface is protected by the closed mouth, and the saliva in the mouth provides a good electrolytic environment that ensures good electrode contact. Results from a study by Bach-y-Rita et al. show that electrotactile stimulation of the tongue required 3% of the voltage required to stimulate the finger. Also, since it is more practical to wear an orthodontic retainer holding the stimulation system than an apparatus strapped to other parts of the body, the tongue–machine interface is more popular among TVSS systems.
This tongue TVSS system works by delivering electrotactile stimuli to the dorsum of the tongue via a flexible electrode array placed in the mouth. This electrode array is connected to a Tongue Display Unit [TDU] via a ribbon cable passing out of the mouth. A video camera records a picture, transfers it to the TDU for conversion into a tactile image. The tactile image is then projected onto the tongue via the ribbon cable where the tongue's receptors pick up the signal. After training, subjects are able to associate certain types of stimuli to certain types of visual images. In this way, tactile sensation can be used for visual perception.
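A toy sketch of the kind of image-to-stimulus mapping such a system performs (the 12×12 grid size and the simple block averaging are assumptions made here for illustration; the actual TDU hardware and encoding are more involved):

```python
import numpy as np

def frame_to_electrode_pattern(frame, rows=12, cols=12):
    """Downsample a grayscale camera frame (2-D array, values 0-255)
    to a small grid of stimulation intensities in the range 0-1."""
    h, w = frame.shape
    pattern = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = frame[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            pattern[r, c] = block.mean() / 255.0  # brighter -> stronger stimulus
    return pattern

if __name__ == "__main__":
    fake_frame = np.random.randint(0, 256, size=(120, 160))
    print(frame_to_electrode_pattern(fake_frame).shape)  # (12, 12)
```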
Sensory substitutions have also been successful with the emergence of wearable haptic actuators like vibrotactile motors, solenoids, peltier diodes, etc. At the Center for Cognitive Ubiquitous Computing at Arizona State University, researchers have developed technologies that enable people who are blind to perceive social situational information using wearable vibrotactile belts (Haptic Belt) and gloves (VibroGlove). Both technologies use miniature cameras that are mounted on a pair of glasses worn by the user who is blind. The Haptic Belt provides vibrations that convey the direction and distance at which a person is standing in front of a user, while the VibroGlove uses spatio-temporal mapping of vibration patterns to convey facial expressions of the interaction partner. Alternatively, it has been shown that even very simple cues indicating the presence or absence of obstacles (through small vibration modules located at strategic places in the body) can be useful for navigation, gait stabilization and reduced anxiety when evolving in an unknown space. This approach, called the "Haptic Radar" has been studied since 2005 by researchers at the University of Tokyo in collaboration with the University of Rio de Janeiro. Similar products include the Eyeronman vest and belt, and the forehead retina system.
Tactile–auditory
Neuroscientist David Eagleman presented a new device for sound-to-touch hearing at TED in 2015; his laboratory research then expanded into a company based in Palo Alto, California, called Neosensory. Neosensory devices capture sound and turn them into high-dimensional patterns of touch on the skin.
Experiments by Schurmann et al. show that tactile senses can activate the human auditory cortex. Currently vibrotactile stimuli can be used to facilitate hearing in normal and hearing-impaired people. To test for the auditory areas activated by touch, Schurmann et al. tested subjects while stimulating their fingers and palms with vibration bursts and their fingertips with tactile pressure. They found that tactile stimulation of the fingers lead to activation of the auditory belt area, which suggests that there is a relationship between audition and tactition. Therefore, future research can be done to investigate the likelihood of a tactile–auditory sensory substitution system. One promising invention is the 'Sense organs synthesizer' which aims at delivering a normal hearing range of nine octaves via 216 electrodes to sequential touch nerve zones, next to the spine.
Tactile–vestibular
Some people with balance disorders or adverse reactions to antibiotics develop bilateral vestibular damage (BVD). They experience difficulty maintaining posture, unstable gait, and oscillopsia. Tyler et al. studied the restitution of postural control through a tactile for vestibular sensory substitution. Because BVD patients cannot integrate visual and tactile cues, they have a lot of difficulty standing. Using a head-mounted accelerometer and a brain–computer interface that employs electrotactile stimulation on the tongue, information about head-body orientation was relayed to the patient so that a new source of data is available to orient themselves and maintain good posture.
Tactile–tactile to restore peripheral sensation
Touch-to-touch sensory substitution is where information from touch receptors of one region can be used to perceive touch in another. For example, in one experiment by Bach-y-Rita, touch perception was restored in a patient who had lost peripheral sensation from leprosy. This patient was equipped with a glove containing artificial contact sensors coupled to skin sensory receptors on the forehead (which was stimulated). After training and acclimation, the patient was able to experience data from the glove as if it originated in the fingertips while ignoring the sensations in the forehead. After two days of training one of the leprosy subjects reported "the wonderful sensation of touching his wife, which he had been unable to experience for 20 years."
Tactile feedback system for prosthetic limbs
The development of new technologies has now made it plausible to provide patients with prosthetic arms with tactile and kinesthetic sensibilities. While this is not purely a sensory substitution system, it uses the same principles to restore perception of senses. Some tactile feedback methods of restoring a perception of touch to amputees would be direct or micro stimulation of the tactile nerve afferents.
Other applications of sensory substitution systems can be seen in functional robotic prostheses for patients with high-level quadriplegia. These robotic arms have several mechanisms of slip detection, vibration and texture detection that they relay to the patient through feedback. After more research and development, the information from these arms can be used by patients to perceive that they are holding and manipulating objects while their robotic arm actually accomplishes the task.
Auditory systems
Auditory sensory substitution systems like the tactile sensory substitution systems aim to use one sensory modality to compensate for the lack of another in order to gain a perception of one that is lacking. With auditory sensory substitution, visual or tactile sensors detect and store information about the external environment. This information is then transformed by interfaces into sound. Most systems are auditory-vision substitutions aimed at using the sense of hearing to convey visual information to the blind.
The vOICe Auditory Display
"The vOICe" converts live camera views from a video camera into soundscapes, patterns of scores of different tones at different volumes and pitches emitted simultaneously. The technology of the vOICe was invented in the 1990s by Peter Meijer and uses general video to audio mapping by associating height to pitch and brightness with loudness in a left-to-right scan of any video frame.
EyeMusic
The EyeMusic user wears a miniature camera connected to a small computer (or smartphone) and stereo headphones. The images are converted into "soundscapes". The high locations on the image are projected as high-pitched musical notes on a pentatonic scale, and low vertical locations as low-pitched musical notes.
The EyeMusic conveys color information by using different musical instruments for each of the following five colors: white, blue, red, green, yellow. The EyeMusic employs an intermediate resolution of 30×50 pixels.
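A toy sketch of the general idea behind such vision-to-audition mappings as described above for the vOICe and EyeMusic (height to pitch, brightness to loudness, scanning left to right). The frequency range, tone duration and array sizes are assumptions for illustration; neither system is implemented this way in detail.

```python
import numpy as np

def column_to_tone(column, f_min=500.0, f_max=5000.0,
                   duration=0.01, sample_rate=44100):
    """Map one image column to a short audio snippet: pixel height sets
    pitch, pixel brightness (0-1) sets loudness; tones are summed."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    snippet = np.zeros_like(t)
    n = len(column)
    for row, brightness in enumerate(column):
        # Higher rows (smaller index) get higher frequencies.
        freq = f_max - (f_max - f_min) * row / max(n - 1, 1)
        snippet += brightness * np.sin(2 * np.pi * freq * t)
    return snippet / max(n, 1)

def image_to_soundscape(image):
    """Scan the image left to right, one column at a time."""
    return np.concatenate([column_to_tone(image[:, c] / 255.0)
                           for c in range(image.shape[1])])

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(32, 64)).astype(float)
    print(image_to_soundscape(img).shape)
```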
LibreAudioView
This project, presented in 2015, proposes a new versatile mobile device and a sonification method specifically designed to the pedestrian locomotion of the visually impaired. It sonifies in real-time spatial information from a video stream acquired at a standard frame rate. The device is composed of a miniature camera integrated into a glasses frame which is connected to a battery-powered minicomputer worn around the neck with a strap. The audio signal is transmitted to the user via running headphones. This system has two operating modes. With the first mode, when the user is static, only the edges of the moving objects are sonified. With the second mode, when the user is moving, the edges of both static and moving objects are sonified. Thus, the video stream is simplified by extracting only the edges of objects that can become dangerous obstacles. The system enables the localization of moving objects, the estimation of trajectories, and the detection of approaching objects.
PSVA
Another successful visual-to-auditory sensory substitution device is the Prosthesis Substituting Vision for Audition (PSVA). This system utilizes a head-mounted TV camera that allows real-time, online translation of visual patterns into sound. While the patient moves around, the device captures visual frames at a high frequency and generates the corresponding complex sounds that allow recognition. Visual stimuli are transduced into auditory stimuli with the use of a system that uses pixel to frequency relationship and couples a rough model of the human retina with an inverse model of the cochlea.
The Vibe
The sound produced by this software is a mixture of sinusoidal sounds produced by virtual "sources", corresponding each to a "receptive field" in the image. Each receptive field is a set of localized pixels. The sound's amplitude is determined by the mean luminosity of the pixels of the corresponding receptive field. The frequency and the inter-aural disparity are determined by the center of gravity of the co-ordinates of the receptive field's pixels in the image (see "There is something out there: distal attribution in sensory substitution, twenty years later"; Auvray M., Hanneton S., Lenay C., O'Regan K. Journal of Integrative Neuroscience 4 (2005) 505–21). The Vibe is an Open Source project hosted by SourceForge.
Other systems
Other approaches to the substitution of hearing for vision use binaural directional cues, much as natural human echolocation does. An example of the latter approach is the "SeeHear" chip from Caltech.
Other visual-auditory substitution devices deviate from the vOICe's greyscale mapping of images. Zach Capalbo's Kromophone uses a basic color spectrum correlating to different sounds and timbres to give users perceptual information beyond the vOICe's capabilities.
Nervous system implants
By means of stimulating electrodes implanted into the human nervous system, it is possible to apply current pulses to be learned and reliably recognized by the recipient. It has been shown successfully in experimentation, by Kevin Warwick, that signals can be employed from force/touch indicators on a robot hand as a means of communication.
Terminology
It has been argued that the term "substitution" is misleading, as it is merely an "addition" or "supplementation" not a substitution of a sensory modality.
Sensory augmentation
Building upon the research conducted on sensory substitution, investigations into the possibility of augmenting the body's sensory apparatus are now beginning. The intention is to extend the body's ability to sense aspects of the environment that are not normally perceivable by the body in its natural state. Moreover, such new information about the environment could be used not to directly replace a sensory organ but to offer sensory information usually perceived via another, potentially impaired, sensory modality. Thus, sensory augmentation is also widely used for rehabilitation purposes as well as for investigating perceptive and cognitive neuroscience.
Active work in this direction is being conducted by, among others, the e-sense project of the Open University and Edinburgh University, the feelSpace project of the University of Osnabrück, and the hearSpace project at University of Paris.
The findings of research into sensory augmentation (as well as sensory substitution in general) that investigate the emergence of perceptual experience (qualia) from the activity of neurons have implications for the understanding of consciousness.
See also
Biological neural network
Brain implant
Human echolocation, blind people navigating by listening to the echo of sounds
References
External links
Tongue display for sensory substitution
The vOICe auditory display for sensory substitution.
Artificial Retinas
Sensory Substitution:limits and perspectives C. Lenay et al.
The Vibe
feelSpace - The Magnetic Perception Group of the University of Osnabrück
The Kromophone
Sensory Substitution For Blind (Nihat Erim İnceoğlu)
Sensory augmentation: integration of an auditory compass signal into human perception of space
Cognitive neuroscience
Biomedical engineering
Neural engineering
Neuroprosthetics | Sensory substitution | [
"Engineering",
"Biology"
] | 5,110 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
1,904,003 | https://en.wikipedia.org/wiki/Magnetosonic%20wave | In physics, magnetosonic waves, also known as magnetoacoustic waves, are low-frequency compressive waves driven by mutual interaction between an electrically conducting fluid and a magnetic field. They are associated with compression and rarefaction of both the fluid and the magnetic field, as well as with an effective tension that acts to straighten bent magnetic field lines. The properties of magnetosonic waves are highly dependent on the angle between the wavevector and the equilibrium magnetic field and on the relative importance of fluid and magnetic processes in the medium. They only propagate with frequencies much smaller than the ion cyclotron or ion plasma frequencies of the medium, and they are nondispersive at small amplitudes.
There are two types of magnetosonic waves, fast magnetosonic waves and slow magnetosonic waves, which—together with Alfvén waves—are the normal modes of ideal magnetohydrodynamics. The fast and slow modes are distinguished by magnetic and gas pressure oscillations that are either in-phase or anti-phase, respectively. This results in the phase velocity of any given fast mode always being greater than or equal to that of any slow mode in the same medium, among other differences.
Magnetosonic waves have been observed in the Sun's corona and provide an observational foundation for coronal seismology.
Characteristics
Magnetosonic waves are a type of low-frequency wave present in electrically conducting, magnetized fluids, such as plasmas and liquid metals. They exist at frequencies far below the cyclotron and plasma frequencies of both ions and electrons in the medium.
In an ideal, homogeneous, electrically conducting, magnetized fluid of infinite extent, there are two magnetosonic modes: the fast and slow modes. They form, together with the Alfvén wave, the three basic linear magnetohydrodynamic (MHD) waves. In this regime, magnetosonic waves are nondispersive at small amplitudes.
Dispersion relation
The fast and slow magnetosonic waves are defined by a bi-quadratic dispersion relation that can be derived from the linearized MHD equations.
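For reference, writing c_s for the sound speed, v_A for the Alfvén speed, and θ for the angle between the wavevector and the equilibrium magnetic field (this notation is assumed here for concreteness), the bi-quadratic relation takes the standard form

\omega^4 - k^2\left(c_s^2 + v_A^2\right)\omega^2 + k^4\, c_s^2 v_A^2 \cos^2\theta = 0 .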
Phase and group velocities
The phase velocities of the fast and slow magnetosonic waves depend on the angle θ between the wavevector and the equilibrium magnetic field as well as the equilibrium density, pressure, and magnetic field strength. From the roots of the magnetosonic dispersion relation, the associated phase velocities can be expressed as

v_{\pm}^2 = \frac{1}{2}\left[\left(c_s^2 + v_A^2\right) \pm \sqrt{\left(c_s^2 + v_A^2\right)^2 - 4\, c_s^2 v_A^2 \cos^2\theta}\,\right],

where c_s is the sound speed, v_A is the Alfvén speed, the upper sign gives the phase velocity v_+ of the fast mode, and the lower sign gives the phase velocity v_− of the slow mode.
The phase velocity of the fast mode is always greater than or equal to max(c_s, v_A), which is greater than or equal to that of the slow mode, v_−. This is due to the differences in the signs of the thermal and magnetic pressure perturbations associated with each mode. The magnetic pressure perturbation can be expressed in terms of the thermal pressure perturbation and phase velocity as

\delta p_B = \frac{v_A^2}{c_s^2}\left(1 - \frac{c_s^2\cos^2\theta}{v_\pm^2}\right)\delta p .

For the fast mode v_+^2 \geq c_s^2\cos^2\theta, so magnetic and thermal pressure perturbations have matching signs. Conversely, for the slow mode v_-^2 \leq c_s^2\cos^2\theta, so magnetic and thermal pressure perturbations have opposite signs. In other words, the two pressure perturbations reinforce one another in the fast mode, but oppose one another in the slow mode. As a result, the fast mode propagates at a faster speed than the slow mode.
The group velocity of fast and slow magnetosonic waves is defined by

\mathbf{v}_{g,\pm} = \frac{\partial\omega_\pm}{\partial\mathbf{k}} = v_\pm\,\hat{\mathbf{k}} + \frac{\partial v_\pm}{\partial\theta}\,\hat{\boldsymbol{\theta}},

where \hat{\mathbf{k}} and \hat{\boldsymbol{\theta}} are local orthogonal unit vectors in the direction of \mathbf{k} and in the direction of increasing \theta, respectively. In a spherical coordinate system with the z-axis along the unperturbed magnetic field, these unit vectors correspond to those in the direction of increasing radial distance and increasing polar angle.
Limiting cases
Incompressible fluid
In an incompressible fluid, the density and pressure perturbations vanish, resulting in the sound speed tending to infinity, c_s → ∞. In this case, the slow mode propagates with the Alfvén speed, and the fast mode disappears from the system, v_+ → ∞.
Cold limit
Under the assumption that the background temperature is zero, it follows from the ideal gas law that the thermal pressure is also zero, p = 0, and, as a result, that the sound speed vanishes, c_s = 0. In this case, the slow mode disappears from the system, v_− = 0, and the fast mode propagates isotropically with the Alfvén speed, v_+ = v_A. In this limit, the fast mode is sometimes referred to as a compressional Alfvén wave.
Parallel propagation
When the wavevector and the equilibrium magnetic field are parallel, θ = 0, the fast and slow modes propagate as either a pure sound wave or a pure Alfvén wave, with the fast mode identified with the larger of the two speeds and the slow mode identified with the smaller.
Perpendicular propagation
When the wavevector and the equilibrium magnetic field are perpendicular, θ = π/2, the fast mode propagates as a longitudinal wave with phase velocity equal to the magnetosonic speed, √(c_s² + v_A²), and the slow mode propagates as a transverse wave with phase velocity approaching zero.
Inhomogeneous fluid
In the case of an inhomogeneous fluid (that is, a fluid where at least one of the background quantities is not constant), the MHD waves lose their defining nature and acquire mixed properties. In some setups, such as the axisymmetric waves in a straight cylinder with a circular base (one of the simplest models for a coronal loop), the three MHD waves can still be clearly distinguished. But in general, pure Alfvén and fast and slow magnetosonic waves do not exist, and the waves in the fluid are coupled to each other in intricate ways.
Observations
Both fast and slow magnetosonic waves have been observed in the solar corona, providing an observational foundation for coronal seismology, a technique for coronal plasma diagnostics.
See also
Waves in plasmas
Alfvén wave
Ion acoustic wave
Coronal seismology
Magnetogravity wave
References
Waves in plasmas | Magnetosonic wave | [
"Physics"
] | 1,228 | [
"Waves in plasmas",
"Waves",
"Physical phenomena",
"Plasma phenomena"
] |
1,905,371 | https://en.wikipedia.org/wiki/Schr%C3%B6dinger%E2%80%93Newton%20equation | The Schrödinger–Newton equation, sometimes referred to as the Newton–Schrödinger or Schrödinger–Poisson equation, is a nonlinear modification of the Schrödinger equation with a Newtonian gravitational potential, where the gravitational potential emerges from the treatment of the wave function as a mass density, including a term that represents interaction of a particle with its own gravitational field. The inclusion of a self-interaction term represents a fundamental alteration of quantum mechanics. It can be written either as a single integro-differential equation or as a coupled system of a Schrödinger and a Poisson equation. In the latter case it is also referred to in the plural form.
The Schrödinger–Newton equation was first considered by Ruffini and Bonazzola in connection with self-gravitating boson stars. In this context of classical general relativity it appears as the non-relativistic limit of either the Klein–Gordon equation or the Dirac equation in a curved space-time together with the Einstein field equations.
The equation also describes fuzzy dark matter and approximates classical cold dark matter described by the Vlasov–Poisson equation in the limit that the particle mass is large.
Later on it was proposed as a model to explain the quantum wave function collapse by Lajos Diósi and Roger Penrose, from whom the name "Schrödinger–Newton equation" originates. In this context, matter has quantum properties, while gravity remains classical even at the fundamental level. The Schrödinger–Newton equation was therefore also suggested as a way to test the necessity of quantum gravity.
In a third context, the Schrödinger–Newton equation appears as a Hartree approximation for the mutual gravitational interaction in a system of a large number of particles. In this context, a corresponding equation for the electromagnetic Coulomb interaction was suggested by Philippe Choquard at the 1976 Symposium on Coulomb Systems in Lausanne to describe one-component plasmas. Elliott H. Lieb provided the proof for the existence and uniqueness of a stationary ground state and referred to the equation as the Choquard equation.
Overview
As a coupled system, the Schrödinger–Newton equations are the usual Schrödinger equation with a self-interaction gravitational potential,

i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi + m\Phi\psi,
where V is an ordinary potential, and the gravitational potential \Phi, representing the interaction of the particle with its own gravitational field, satisfies the Poisson equation

\nabla^2\Phi = 4\pi G m\,|\psi|^2 .
Because of the back coupling of the wave-function into the potential, it is a nonlinear system.
Replacing \Phi with the solution to the Poisson equation produces the integro-differential form of the Schrödinger–Newton equation:

i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi - G m^2\left(\int\frac{|\psi(\mathbf{r}',t)|^2}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3\mathbf{r}'\right)\psi .
It is obtained from the above system of equations by integration of the Poisson equation under the assumption that the potential must vanish at infinity.
Mathematically, the Schrödinger–Newton equation is a special case of the Hartree equation for . The equation retains most of the properties of the linear Schrödinger equation. In particular, it is invariant under constant phase shifts, leading to conservation of probability and exhibits full Galilei invariance. In addition to these symmetries, a simultaneous transformation
maps solutions of the Schrödinger–Newton equation to solutions.
The stationary equation, which can be obtained in the usual manner via a separation of variables, possesses an infinite family of normalisable solutions of which only the stationary ground state is stable.
Relation to semi-classical and quantum gravity
The Schrödinger–Newton equation can be derived under the assumption that gravity remains classical, even at the fundamental level, and that the right way to couple quantum matter to gravity is by means of the semiclassical Einstein equations. In this case, a Newtonian gravitational potential term is added to the Schrödinger equation, where the source of this gravitational potential is the expectation value of the mass density operator or mass flux-current.
In this regard, if gravity is fundamentally classical, the Schrödinger–Newton equation is a fundamental one-particle equation, which can be generalised to the case of many particles (see below).
If, on the other hand, the gravitational field is quantised, the fundamental Schrödinger equation remains linear. The Schrödinger–Newton equation is then only valid as an approximation for the gravitational interaction in systems of a large number of particles and has no effect on the centre of mass.
Many-body equation and centre-of-mass motion
If the Schrödinger–Newton equation is considered as a fundamental equation, there is a corresponding N-body equation that was already given by Diósi and can be derived from semiclassical gravity in the same way as the one-particle equation:
The potential contains all the mutual linear interactions, e.g. electrodynamical Coulomb interactions, while the gravitational-potential term is based on the assumption that all particles perceive the same gravitational potential generated by all the marginal distributions for all the particles together.
In a Born–Oppenheimer-like approximation, this N-particle equation can be separated into two equations, one describing the relative motion, the other providing the dynamics of the centre-of-mass wave-function. For the relative motion, the gravitational interaction does not play a role, since it is usually weak compared to the other interactions represented by . But it has a significant influence on the centre-of-mass motion. While only depends on relative coordinates and therefore does not contribute to the centre-of-mass dynamics at all, the nonlinear Schrödinger–Newton interaction does contribute. In the aforementioned approximation, the centre-of-mass wave-function satisfies the following nonlinear Schrödinger equation:
where is the total mass, is the relative coordinate, the centre-of-mass wave-function, and is the mass density of the many-body system (e.g. a molecule or a rock) relative to its centre of mass.
In the limiting case of a wide wave-function, i.e. where the width of the centre-of-mass distribution is large compared to the size of the considered object, the centre-of-mass motion is approximated well by the Schrödinger–Newton equation for a single particle. The opposite case of a narrow wave-function can be approximated by a harmonic-oscillator potential, where the Schrödinger–Newton dynamics leads to a rotation in phase space.
In the context where the Schrödinger–Newton equation appears as a Hartree approximation, the situation is different. In this case the full N-particle wave-function is considered a product state of N single-particle wave-functions, where each of those factors obeys the Schrödinger–Newton equation. The dynamics of the centre-of-mass, however, remain strictly linear in this picture. This is true in general: nonlinear Hartree equations never have an influence on the centre of mass.
Significance of effects
A rough order-of-magnitude estimate of the regime where effects of the Schrödinger–Newton equation become relevant can be obtained by a rather simple reasoning. For a spherically symmetric Gaussian,
the free linear Schrödinger equation has the solution
The peak of the radial probability density can be found at
Now we set the acceleration
of this peak probability equal to the acceleration due to Newtonian gravity:
using that at time . This yields the relation
which allows us to determine a critical width for a given mass value and conversely. We also recognise the scaling law mentioned above. Numerical simulations show that this equation gives a rather good estimate of the mass regime above which effects of the Schrödinger–Newton equation become significant.
For an atom the critical width is around 10²² metres, while it is already down to 10⁻³¹ metres for a mass of one microgram. The regime where the mass is around 10¹⁰ atomic mass units while the width is of the order of micrometers is expected to allow an experimental test of the Schrödinger–Newton equation in the future. Possible candidates are interferometry experiments with heavy molecules, which currently reach masses up to atomic mass units.
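As a rough back-of-the-envelope check (assuming, as the estimate above suggests, that the critical width scales as ħ²/(Gm³), with prefactors of order one ignored):

```python
# Order-of-magnitude check of a critical width a_c ~ hbar^2 / (G m^3).
HBAR = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
AMU = 1.66053906660e-27  # kg

def critical_width(mass_kg):
    """Critical width (metres) below which gravitational self-interaction matters."""
    return HBAR**2 / (G * mass_kg**3)

if __name__ == "__main__":
    print(f"hydrogen atom: {critical_width(AMU):.1e} m")   # ~1e22 m
    print(f"one microgram: {critical_width(1e-9):.1e} m")  # ~1e-31 m
```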
Quantum wave function collapse
The idea that gravity causes (or somehow influences) the wavefunction collapse dates back to the 1960s and was originally proposed by Károlyházy.
The Schrödinger–Newton equation was proposed in this context by Diósi. There the equation provides an estimation for the "line of demarcation" between microscopic (quantum) and macroscopic (classical) objects. The stationary ground state has a width of the order of ħ²/(Gm³).
For a well-localised homogeneous sphere, i.e. a sphere with a centre-of-mass wave-function that is narrow compared to the radius R of the sphere, Diósi finds as an estimate for the width of the ground-state centre-of-mass wave-function

a_0 \approx \left(\frac{\hbar^2}{G m^3}\right)^{1/4} R^{3/4},

where m is the mass of the sphere.
Assuming a usual density around 1000 kg/m³, a critical radius can be calculated for which a_0 ≈ R. This critical radius is around a tenth of a micrometer.
Roger Penrose proposed that the Schrödinger–Newton equation mathematically describes the basis states involved in a gravitationally induced wavefunction collapse scheme. Penrose suggests that a superposition of two or more quantum states having a significant amount of mass displacement ought to be unstable and reduce to one of the states within a finite time. He hypothesises that there exists a "preferred" set of states that could collapse no further, specifically, the stationary states of the Schrödinger–Newton equation. A macroscopic system can therefore never be in a spatial superposition, since the nonlinear gravitational self-interaction immediately leads to a collapse to a stationary state of the Schrödinger–Newton equation. According to Penrose's idea, when a quantum particle is measured, there is an interplay of this nonlinear collapse and environmental decoherence. The gravitational interaction leads to the reduction of the environment to one distinct state, and decoherence leads to the localisation of the particle, e.g. as a dot on a screen.
Problems and open matters
Three major problems occur with this interpretation of the Schrödinger–Newton equation as the cause of the wave-function collapse:
Excessive residual probability far from the collapse point
Lack of any apparent reason for the Born rule
Promotion of the previously strictly hypothetical wave function to an observable (real) quantity.
First, numerical studies consistently find that when a wave packet "collapses" to a stationary solution, a small portion of it seems to run away to infinity. This would mean that even a completely collapsed quantum system can still be found at a distant location. Since the solutions of the linear Schrödinger equation tend towards infinity even faster, this only indicates that the Schrödinger–Newton equation alone is not sufficient to explain the wave-function collapse. If the environment is taken into account, this effect might disappear and therefore not be present in the scenario described by Penrose.
A second problem, also arising in Penrose's proposal, is the origin of the Born rule: To solve the measurement problem, a mere explanation of why a wave-function collapses – e.g., to a dot on a screen – is not enough. A good model for the collapse process also has to explain why the dot appears on different positions of the screen, with probabilities that are determined by the squared absolute-value of the wave-function. It might be possible that a model based on Penrose's idea could provide such an explanation, but there is as yet no known reason that Born's rule would naturally arise from it.
Thirdly, since the gravitational potential is linked to the wave-function in the picture of the Schrödinger–Newton equation, the wave-function must be interpreted as a real object. Therefore, at least in principle, it becomes a measurable quantity. Making use of the nonlocal nature of entangled quantum systems, this could be used to send signals faster than light, which is generally thought to be in contradiction with causality. It is, however, not clear whether this problem can be resolved by applying the right collapse prescription, yet to be found, consistently to the full quantum system. Also, since gravity is such a weak interaction, it is not clear that such an experiment can be actually performed within the parameters given in our universe (see the referenced discussion
about a similar thought experiment proposed by Eppley & Hannah).
See also
Nonlinear Schrödinger equation
Semiclassical gravity
Penrose interpretation
Poisson's equation
References
Gravity
Quantum gravity
Equations
Nonlinear partial differential equations
Schrödinger equation | Schrödinger–Newton equation | [
"Physics",
"Mathematics"
] | 2,570 | [
"Equations of physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Eponymous equations of physics",
"Mathematical objects",
"Equations",
"Quantum gravity",
"Schrödinger equation",
"Physics beyond the Standard Model"
] |
1,906,133 | https://en.wikipedia.org/wiki/Open-loop%20gain | The open-loop gain of an electronic amplifier is the gain obtained when no overall feedback is used in the circuit.
The open-loop gain of many electronic amplifiers is exceedingly high (by design) – an ideal operational amplifier (op-amp) has infinite open-loop gain. Typically an op-amp may have a maximal open-loop gain of around 100,000, or 100 dB. An op-amp with a large open-loop gain offers high precision when used as an inverting amplifier.
Normally, negative feedback is applied around an amplifier with high open-loop gain, to reduce the gain of the complete circuit to a desired value.
Definition
The definition of open-loop gain (at a fixed frequency) is
where the gain is understood as the ratio of the output voltage to the input voltage difference that is being amplified. (The dependence on frequency is not displayed here.)
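A conventional way of writing this definition, using v_out for the output voltage and v₊ − v₋ for the differential input voltage (illustrative symbols, not necessarily the article's own notation), is:

```latex
A_{\mathrm{OL}} \;=\; \frac{v_{\mathrm{out}}}{\,v_{+}-v_{-}\,}.
```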
Role in non-ideal gain
The open-loop gain is a physical attribute of an operational amplifier that is often finite in comparison to the ideal gain. While open-loop gain is the gain when there is no feedback in a circuit, an operational amplifier will often be configured to use a feedback configuration such that its gain will be controlled by the feedback circuit components.
Take the case of an inverting operational amplifier configuration. If one resistor is connected between the output node and the inverting input node (the feedback resistor) and another between a source voltage and the inverting input node (the input resistor), then the gain of such a circuit at the output terminal, assuming infinite gain in the amplifier, is the negative of the ratio of the feedback resistance to the input resistance.
However, the finite open-loop gain reduces this gain slightly.
For example, for a feedback-to-input resistance ratio of 2 and a finite open-loop gain, the closed-loop gain comes out at −1.9994 instead of exactly −2.
(The finite-gain expression becomes effectively the same as the ideal one as the open-loop gain approaches infinity.)
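A minimal sketch of this effect, using the standard inverting-amplifier expression A_cl = −(R_f/R_in) / (1 + (1 + R_f/R_in)/A_OL); the resistor and open-loop-gain values below are illustrative assumptions chosen to reproduce the −1.9994 figure, not values taken from this article:

```python
def inverting_gain(r_f, r_in, a_ol=float("inf")):
    """Closed-loop gain of an inverting op-amp stage with open-loop gain a_ol."""
    ideal = -r_f / r_in
    if a_ol == float("inf"):
        return ideal
    return ideal / (1.0 + (1.0 + r_f / r_in) / a_ol)

# Illustrative values (assumptions): R_f = 200 kOhm, R_in = 100 kOhm, A_OL = 1e4.
print(inverting_gain(200e3, 100e3))        # -2.0, the ideal gain
print(inverting_gain(200e3, 100e3, 1e4))   # about -1.9994 with finite open-loop gain
```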
The open-loop gain can be important for computing the actual gain of an operational amplifier network, where the assumption of infinite open-loop gain is inaccurate.
Operational amplifiers
The open-loop gain of an operational amplifier falls very rapidly with increasing frequency. Along with slew rate, this is one of the reasons why operational amplifiers have limited bandwidth.
See also
Gain–bandwidth product
Loop gain (includes both the open-loop gain and the feedback attenuation)
Summary of negative feedback amplifier terms
References
Electrical parameters | Open-loop gain | [
"Engineering"
] | 461 | [
"Electrical engineering",
"Electrical parameters"
] |
2,641,081 | https://en.wikipedia.org/wiki/Sigma%20model | In physics, a sigma model is a field theory that describes the field as a point particle confined to move on a fixed manifold. This manifold can be taken to be any Riemannian manifold, although it is most commonly taken to be either a Lie group or a symmetric space. The model may or may not be quantized. An example of the non-quantized version is the Skyrme model; it cannot be quantized due to non-linearities of power greater than 4. In general, sigma models admit (classical) topological soliton solutions, for example, the skyrmion for the Skyrme model. When the sigma field is coupled to a gauge field, the resulting model is described by Ginzburg–Landau theory. This article is primarily devoted to the classical field theory of the sigma model; the corresponding quantized theory is presented in the article titled "non-linear sigma model".
Overview
The name has roots in particle physics, where a sigma model describes the interactions of pions. Unfortunately, the "sigma meson" is not described by the sigma-model, but only a component of it.
The sigma model was introduced by Gell-Mann and Lévy in 1960; the name σ-model comes from a field in their model corresponding to a spinless meson called σ, a scalar meson introduced earlier by Julian Schwinger. The model served as the dominant prototype of spontaneous symmetry breaking of O(4) down to O(3): the three axial generators broken are the simplest manifestation of chiral symmetry breaking, the surviving unbroken O(3) representing isospin.
In conventional particle physics settings, the field is generally taken to be SU(N), or the vector subspace of quotient of the product of left and right chiral fields. In condensed matter theories, the field is taken to be O(N). For the rotation group O(3), the sigma model describes the isotropic ferromagnet; more generally, the O(N) model shows up in the quantum Hall effect, superfluid Helium-3 and spin chains.
In supergravity models, the field is taken to be a symmetric space. Since symmetric spaces are defined in terms of their involution, their tangent space naturally splits into even and odd parity subspaces. This splitting helps propel the dimensional reduction of Kaluza–Klein theories.
In its most basic form, the sigma model can be taken as being purely the kinetic energy of a point particle; as a field, this is just the Dirichlet energy in Euclidean space.
In two spatial dimensions, the O(3) model is completely integrable.
Definition
The Lagrangian density of the sigma model can be written in a variety of different ways, each suitable to a particular type of application. The simplest, most generic definition writes the Lagrangian as the metric trace of the pullback of the metric tensor on a Riemannian manifold. For a field over a spacetime , this may be written as
where the is the metric tensor on the field space , and the are the derivatives on the underlying spacetime manifold.
This expression can be unpacked a bit. The field space can be chosen to be any Riemannian manifold. Historically, this is the "sigma" of the sigma model; the historically-appropriate symbol is avoided here to prevent clashes with many other common usages of in geometry. Riemannian manifolds always come with a metric tensor . Given an atlas of charts on , the field space can always be locally trivialized, in that given in the atlas, one may write a map giving explicit local coordinates on that patch. The metric tensor on that patch is a matrix having components
The base manifold must be a differentiable manifold; by convention, it is either Minkowski space in particle physics applications, flat two-dimensional Euclidean space for condensed matter applications, or a Riemann surface, the worldsheet in string theory. The is just the plain-old covariant derivative on the base spacetime manifold When is flat, is just the ordinary gradient of a scalar function (as is a scalar field, from the point of view of itself.) In more precise language, is a section of the jet bundle of .
Example: O(n) non-linear sigma model
Taking the target-space metric to be the Kronecker delta, i.e. the scalar dot product in Euclidean space, one gets the non-linear sigma model. That is, the field is written as a unit vector in Euclidean space, so that its dot product with itself equals one; the target space is then a sphere, whose isometries form the rotation group. The Lagrangian can then be written in terms of the gradients of this unit-vector field.
For n = 3, this is the continuum limit of the isotropic ferromagnet on a lattice, i.e. of the classical Heisenberg model. For n = 2, this is the continuum limit of the classical XY model. See also the n-vector model and the Potts model for reviews of the lattice model equivalents. The continuum limit is taken by writing the lattice gradient
as the finite difference of the field on neighboring lattice locations; then, in the limit of vanishing lattice spacing, and after dropping the constant terms (the "bulk magnetization"), one recovers the gradient form of the sigma-model Lagrangian; a standard way of writing it is sketched below.
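A standard unit-vector form of this model (the overall normalization varies between authors) constrains the field to the unit sphere and takes the Lagrangian density to be quadratic in its gradients:

```latex
\hat n \cdot \hat n = 1, \qquad
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_\mu \hat n \cdot \partial^\mu \hat n .
```

On the lattice, the nearest-neighbour Heisenberg coupling −J n̂_i · n̂_j equals (J/2)(n̂_i − n̂_j)² minus a constant, which is how the finite-difference term mentioned above turns into the gradient term in the continuum limit.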
In geometric notation
The sigma model can also be written in a more fully geometric notation, as a fiber bundle with fibers over a differentiable manifold . Given a section , fix a point The pushforward at is a map of tangent bundles
taking
where is taken to be a local orthonormal vector space basis on and the vector space basis on . The is a differential form. The sigma model action is then just the conventional inner product on vector-valued k-forms
where the is the wedge product, and the is the Hodge star. This is an inner product in two different ways. In the first way, given any two differentiable forms in , the Hodge dual defines an invariant inner product on the space of differential forms, commonly written as
The above is an inner product on the space of square-integrable forms, conventionally taken to be the Sobolev space In this way, one may write
This makes it explicit and plainly evident that the sigma model is just the kinetic energy of a point particle. From the point of view of the manifold , the field is a scalar, and so can be recognized just the ordinary gradient of a scalar function. The Hodge star is merely a fancy device for keeping track of the volume form when integrating on curved spacetime. In the case that is flat, it can be completely ignored, and so the action is
which is the Dirichlet energy of . Classical extrema of the action (the solutions to the Lagrange equations) are then those field configurations that minimize the Dirichlet energy of . Another way to convert this expression into a more easily-recognizable form is to observe that, for a scalar function one has and so one may also write
where is the Laplace–Beltrami operator, i.e. the ordinary Laplacian when is flat.
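In the flat case the statements above take a familiar explicit form; a minimal sketch, written for a single scalar component φ with boundary terms discarded in the integration by parts, is:

```latex
S[\phi] \;=\; \tfrac{1}{2}\int \|\nabla\phi\|^{2}\, d^{n}x
       \;=\; -\tfrac{1}{2}\int \phi\,\Delta\phi\; d^{n}x ,
```

so the classical extrema satisfy Δφ = 0, i.e. they are harmonic functions (harmonic maps in the general curved setting).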
That there is another, second inner product in play simply requires not forgetting that is a vector from the point of view of itself. That is, given any two vectors , the Riemannian metric defines an inner product
Since is vector-valued on local charts, one also takes the inner product there as well. More verbosely,
The tension between these two inner products can be made even more explicit by noting that
is a bilinear form; it is a pullback of the Riemann metric . The individual can be taken as vielbeins. The Lagrangian density of the sigma model is then
for the metric on Given this gluing-together, the can be interpreted as a solder form; this is articulated more fully, below.
Motivations and basic interpretations
Several interpretational and foundational remarks can be made about the classical (non-quantized) sigma model. The first of these is that the classical sigma model can be interpreted as a model of non-interacting quantum mechanics. The second concerns the interpretation of energy.
Interpretation as quantum mechanics
This follows directly from the expression
given above. Taking , the function can be interpreted as a wave function, and its Laplacian the kinetic energy of that wave function. The is just some geometric machinery reminding one to integrate over all space. The corresponding quantum mechanical notation is In flat space, the Laplacian is conventionally written as . Assembling all these pieces together, the sigma model action is equivalent to
which is just the grand-total kinetic energy of the wave-function , up to a factor of . To conclude, the classical sigma model on can be interpreted as the quantum mechanics of a free, non-interacting quantum particle. Obviously, adding a term of to the Lagrangian results in the quantum mechanics of a wave-function in a potential. Taking is not enough to describe the -particle system, in that particles require distinct coordinates, which are not provided by the base manifold. This can be solved by taking copies of the base manifold.
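A minimal sketch of this identification, with the standard quantum-mechanical normalization and boundary terms dropped, is:

```latex
\langle T\rangle \;=\; \frac{\hbar^{2}}{2m}\int |\nabla\psi|^{2}\, d^{3}x
              \;=\; -\frac{\hbar^{2}}{2m}\int \psi^{*}\,\nabla^{2}\psi\; d^{3}x ,
```

so the Dirichlet energy of ψ is, up to the factor ħ²/2m, the total kinetic energy of the wave function, as stated above.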
The solder form
It is very well-known that the geodesic structure of a Riemannian manifold is described by the Hamilton–Jacobi equations. In thumbnail form, the construction is as follows. Both and are Riemannian manifolds; the below is written for , the same can be done for . The cotangent bundle , supplied with coordinate charts, can always be locally trivialized, i.e.
The trivialization supplies canonical coordinates on the cotangent bundle. Given the metric tensor on , define the Hamiltonian function
where, as always, one is careful to note that the inverse of the metric is used in this definition: Famously, the geodesic flow on is given by the Hamilton–Jacobi equations
and
The geodesic flow is the Hamiltonian flow; the solutions to the above are the geodesics of the manifold. Note, incidentally, that along geodesics; the time parameter is the distance along the geodesic.
The sigma model takes the momenta in the two manifolds and and solders them together, in that is a solder form. In this sense, the interpretation of the sigma model as an energy functional is not surprising; it is in fact the gluing together of two energy functionals. Caution: the precise definition of a solder form requires it to be an isomorphism; this can only happen if and have the same real dimension. Furthermore, the conventional definition of a solder form takes to be a Lie group. Both conditions are satisfied in various applications.
Results on various spaces
The space is often taken to be a Lie group, usually SU(N), in the conventional particle physics models, O(N) in condensed matter theories, or as a symmetric space in supergravity models. Since symmetric spaces are defined in terms of their involution, their tangent space (i.e. the place where lives) naturally splits into even and odd parity subspaces. This splitting helps propel the dimensional reduction of Kaluza–Klein theories.
On Lie groups
For the special case of being a Lie group, the is the metric tensor on the Lie group, formally called the Cartan tensor or the Killing form. The Lagrangian can then be written as the pullback of the Killing form. Note that the Killing form can be written as a trace over two matrices from the corresponding Lie algebra; thus, the Lagrangian can also be written in a form involving the trace. With slight re-arrangements, it can also be written as the pullback of the Maurer–Cartan form.
On symmetric spaces
A common variation of the sigma model is to present it on a symmetric space. The prototypical example is the chiral model, which takes the product
of the "left" and "right" chiral fields, and then constructs the sigma model on the "diagonal"
Such a quotient space is a symmetric space, and so one can generically take where is the maximal subgroup of that is invariant under the Cartan involution. The Lagrangian is still written exactly as the above, either in terms of the pullback of the metric on to a metric on or as a pullback of the Maurer–Cartan form.
Trace notation
In physics, the most common and conventional statement of the sigma model begins with the definition
Here, the is the pullback of the Maurer–Cartan form, for , onto the spacetime manifold. The is a projection onto the odd-parity piece of the Cartan involution. That is, given the Lie algebra of , the involution decomposes the space into odd and even parity components corresponding to the two eigenstates of the involution. The sigma model Lagrangian can then be written as
This is instantly recognizable as the first term of the Skyrme model.
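For orientation, in the Lie-group (chiral model) case this trace form is conventionally written as follows, with U the group-valued field and f a constant of the type of the pion decay constant (normalizations vary between authors):

```latex
L_\mu = U^{-1}\partial_\mu U, \qquad
\mathcal{L} \;=\; \frac{f^{2}}{4}\,\operatorname{tr}\!\left(\partial_\mu U^{-1}\,\partial^\mu U\right),
```

which is the quadratic term of the Skyrme Lagrangian; the quartic Skyrme term is built from the commutator [L_μ, L_ν].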
Metric form
The equivalent metric form of this is to write a group element as the geodesic of an element of the Lie algebra . The are the basis elements for the Lie algebra; the are the structure constants of .
Plugging this directly into the above and applying the infinitesimal form of the Baker–Campbell–Hausdorff formula promptly leads to the equivalent expression
where is now obviously (proportional to) the Killing form, and the are the vielbeins that express the "curved" metric in terms of the "flat" metric . The article on the Baker–Campbell–Hausdorff formula provides an explicit expression for the vielbeins. They can be written as
where is a matrix whose matrix elements are .
For the sigma model on a symmetric space, as opposed to a Lie group, the are limited to span the subspace instead of all of . The Lie commutator on will not be within ; indeed, one has and so a projection is still needed.
Extensions
The model can be extended in a variety of ways. Besides the aforementioned Skyrme model, which introduces quartic terms, the model may be augmented by a torsion term to yield the Wess–Zumino–Witten model.
Another possibility is frequently seen in supergravity models. Here, one notes that the Maurer–Cartan form looks like "pure gauge". In the construction above for symmetric spaces, one can also consider the other projection
where, as before, the symmetric space corresponded to the split . This extra term can be interpreted as a connection on the fiber bundle (it transforms as a gauge field). It is what is "left over" from the connection on . It can be endowed with its own dynamics, by writing
with . Note that the differential here is just "d", and not a covariant derivative; this is not the Yang–Mills stress-energy tensor. This term is not gauge invariant by itself; it must be taken together with the part of the connection that embeds into , so that taken together, the , now with the connection as a part of it, together with this term, forms a complete gauge invariant Lagrangian (which does have the Yang–Mills terms in it, when expanded out).
References
Quantum field theory
Equations of physics | Sigma model | [
"Physics",
"Mathematics"
] | 3,111 | [
"Quantum field theory",
"Equations of physics",
"Mathematical objects",
"Quantum mechanics",
"Equations"
] |
2,641,435 | https://en.wikipedia.org/wiki/Quantum%20noise | Quantum noise is noise arising from the indeterminate state of matter in accordance with fundamental principles of quantum mechanics, specifically the uncertainty principle and via zero-point energy fluctuations. Quantum noise is due to the apparently discrete nature of the small quantum constituents such as electrons, as well as the discrete nature of quantum effects, such as photocurrents.
Quantified noise is similar to classical noise theory and will not always return an asymmetric spectral density.
Shot noise as coined by J. Verdeyen is a form of quantum noise related to the statistics of photon counting, the discrete nature of electrons, and intrinsic noise generation in electronics. In contrast to shot noise, the quantum mechanical uncertainty principle sets a lower limit to a measurement. The uncertainty principle requires any amplifier or detector to have noise.
Macroscopic manifestations of quantum phenomena are easily disturbed, so quantum noise is mainly observed in systems where conventional sources of noise are suppressed. In general, noise is uncontrolled random variation from an expected value and is typically unwanted. General causes are thermal fluctuations, mechanical vibrations, industrial noise, fluctuations of voltage from a power supply, thermal noise due to Brownian motion, instrumentation noise, a laser's output mode deviating from the desired mode of operation, etc. If present, and unless carefully controlled, these other noise sources typically dominate and mask quantum noise.
In astronomy, a device which pushes against the limits of quantum noise is the LIGO gravitational wave observatory.
A Heisenberg microscope
Quantum noise can be illustrated by considering a Heisenberg microscope where an atom's position is measured from the scattering of photons. The uncertainty principle is given as Δx Δp ≥ ħ/2,
where Δx is the uncertainty in the atom's position, and Δp is the uncertainty in its momentum, sometimes called the backaction (momentum transferred to the atom) when near the quantum limit. The precision of the position measurement can be increased at the expense of knowledge of the atom's momentum. When the position is known precisely enough, backaction begins to affect the measurement in two ways. First, it will impart momentum back onto the measuring devices in extreme cases. Secondly, we have decreasing knowledge of the atom's future position. Precise and sensitive instrumentation will approach the uncertainty principle in sufficiently controlled environments.
Basics of noise theory
Noise is of practical concern for precision engineering and engineered systems approaching the standard quantum limit. Typical engineered consideration of quantum noise is for quantum nondemolition measurement and quantum point contact. So quantifying noise is useful.
A signal's noise is quantified as the Fourier transform of its autocorrelation.
The autocorrelation of a signal is given as
which measures whether our signal is positively correlated, negatively correlated or uncorrelated at two different times.
The time average of the signal is zero, and the signal is a voltage. Its Fourier transform is
because we measure a voltage over a finite time window. The Wiener–Khinchin theorem generally states that a noise's power spectrum is given as the autocorrelation of a signal, i.e.,
The above relation is sometimes called the power spectrum or spectral density.
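A compact statement of these definitions, in the idealized infinite-window limit and with a common sign convention for the Fourier transform, is:

```latex
G_{VV}(\tau) = \big\langle V(t+\tau)\,V(t)\big\rangle, \qquad
S_{VV}(\omega) = \int_{-\infty}^{\infty} d\tau\; e^{i\omega\tau}\, G_{VV}(\tau),
```

i.e. the spectral density is the Fourier transform of the autocorrelation function of the (stationary) signal.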
In the above outline, we assumed that
Our noise is stationary or the probability does not change over time. Only the time difference matters.
Noise is due to a very large number of fluctuating charges, so that the central limit theorem applies, i.e., the noise is Gaussian or normally distributed.
The autocorrelation decays to zero rapidly over some characteristic time.
We sample over a sufficiently long time that the integral scales like a random walk, so the estimated spectrum becomes independent of the measurement time; said another way, the finite-window result approaches the ideal one as the window grows.
One can show that an ideal "top-hat" signal, which may correspond to a finite measurement of a voltage over some time, will produce noise across its entire spectrum as a sinc function. Even in the classical case, noise is produced.
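A quick numerical illustration of the top-hat statement (a sketch only; the discrete Fourier transform stands in for the continuous one and the units are arbitrary):

```python
import numpy as np

# A "top-hat" signal: a constant voltage measured over a finite time window.
n = 4096
t = np.linspace(-10.0, 10.0, n)
dt = t[1] - t[0]
width = 2.0
v = np.where(np.abs(t) < width / 2, 1.0, 0.0)

# Continuous Fourier transform approximated by the DFT (magnitudes only).
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=dt))
power = np.abs(np.fft.fftshift(np.fft.fft(v)) * dt) ** 2

# Analytic expectation: |width * sinc(f * width)|^2 (numpy's sinc(x) = sin(pi x)/(pi x)),
# so the measured power spreads across the whole spectrum as a squared sinc.
expected = (width * np.sinc(freqs * width)) ** 2
print(np.max(np.abs(power - expected)))  # small compared with the peak value width**2
```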
Classical to quantum noise
To study quantum noise, one replaces the corresponding classical measurements with quantum operators, e.g.,
where the angle brackets denote the quantum statistical average, using the density matrix, in the Heisenberg picture.
Quantum noise and the uncertainty principle
The Heisenberg uncertainty implies the existence of noise. An operator with a hermitian conjugate follows the relationship, . Define as where is real. The and are the quantum operators. We can show the following,
where the are the averages over the wavefunction and other statistical properties. The left terms are the uncertainty in and , the second term on the right is to covariance or which arises from coupling to an external source or quantum effects. The first term on the right corresponds to the Commutator relation and would cancel out if the x and y commuted. That is the origin of our quantum noise.
It is demonstrative to let and correspond to position and momentum that meets the well known commutator relation, . Then our new expression is,
Where the is the correlation. If the second term on the right vanishes, then we recover the Heisenberg uncertainty principle.
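For reference, the general inequality being described is the Robertson–Schrödinger uncertainty relation; for two Hermitian operators A and B (a standard result, quoted in a common convention) it reads:

```latex
\sigma_A^{2}\,\sigma_B^{2} \;\ge\;
\left|\frac{1}{2i}\big\langle[A,B]\big\rangle\right|^{2}
\;+\;
\left|\tfrac{1}{2}\big\langle\{A,B\}\big\rangle-\langle A\rangle\langle B\rangle\right|^{2}.
```

With position and momentum, [x, p] = iħ, and dropping the covariance term recovers σ_x σ_p ≥ ħ/2, as stated above.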
Harmonic motion and weakly coupled heat bath
Consider the motion of a simple harmonic oscillator with mass m and frequency ω, coupled to some heat bath which keeps the system in equilibrium. The equations of motion are given as,
The quantum autocorrelation is then,
Classically, there is no correlation between position and momentum. The uncertainty principle requires the second term to be nonzero; it is fixed by the commutator and is purely imaginary, proportional to ħ.
We can invoke the equipartition theorem, or the fact that in thermal equilibrium the energy is shared equally among a molecule's or atom's degrees of freedom, i.e.,
In the classical autocorrelation, we have
while in the quantum autocorrelation we have
Here the fractional term in parentheses is the zero-point energy contribution, and n̄ is the Bose–Einstein population distribution. Notice that the quantum spectral density is asymmetric in frequency, due to the imaginary part of the autocorrelation. Increasing the temperature corresponds to taking the high-temperature limit, in which the quantum autocorrelation approaches the classical one.
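For reference, the standard expressions being described take the following form for an oscillator of mass m and frequency ω, with n̄ = 1/(e^{ħω/k_BT} − 1) the Bose–Einstein occupation (a sketch with conventional normalizations):

```latex
G^{\mathrm{cl}}_{xx}(t) = \frac{k_{B}T}{m\omega^{2}}\,\cos\omega t, \qquad
G^{\mathrm{qm}}_{xx}(t) = \frac{\hbar}{m\omega}\left[\Big(\bar n+\tfrac{1}{2}\Big)\cos\omega t \;-\; \tfrac{i}{2}\,\sin\omega t\right].
```

In the high-temperature limit n̄ + 1/2 → k_BT/ħω, so the real part reduces to the classical autocorrelation while the imaginary part, fixed by the canonical commutator, becomes relatively negligible.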
Physical interpretation of spectral density
Typically, the positive frequency part of the spectral density corresponds to the flow of energy into the oscillator (for example, the photons' quantized field), while the negative frequency part corresponds to the emission of energy from the oscillator. Physically, an asymmetric spectral density corresponds to a net flow of energy either from or to the oscillator model.
Linear gain and quantum uncertainty
Most optical communications use amplitude modulation where the quantum noise is predominantly the shot noise. A laser's quantum noise, when not considering shot noise, is the uncertainty of its electric field's amplitude and phase. That uncertainty becomes observable when a quantum amplifier preserves phase. The phase noise becomes important when the energy of the frequency modulation or phase modulation is comparable to the energy of the signal (frequency modulation is more robust than amplitude modulation due to the additive noise intrinsic to amplitude modulation).
Linear amplification
An ideal noiseless gain cannot exist. Consider the amplification of a stream of photons, an ideal linear noiseless gain, and the energy–time uncertainty relation.
The photons, ignoring the uncertainty in frequency, will have an uncertainty in their overall phase and number, with the frequency assumed known, i.e., uncertainties Δφ and ΔN. We can substitute these relations into our energy–time uncertainty equation to find the number–phase uncertainty relation, or the uncertainty in the phase and photon number.
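A minimal sketch of this substitution, assuming the photon-stream energy E = Nħω and a phase advancing as φ = ωt (so that ΔE = ħω ΔN and Δt = Δφ/ω):

```latex
\Delta E\,\Delta t \;\ge\; \frac{\hbar}{2}
\quad\Longrightarrow\quad
(\hbar\omega\,\Delta N)\,\frac{\Delta\phi}{\omega} \;\ge\; \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta N\,\Delta\phi \;\ge\; \frac{1}{2}.
```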
Let an ideal linear noiseless gain G act on the photon stream. We also assume unity quantum efficiency, i.e. every photon is converted to a photocurrent. The output photon number is then simply scaled by the gain, with no noise added.
The phase will be modified too, shifted by the overall accumulated phase acquired as the photons travel through the gain medium.
Substituting the output gain and phase uncertainties into the number–phase uncertainty relation leads to a violation of that relation. So a linear noiseless amplifier cannot increase its signal without adding noise.
A deeper analysis done by H. Heffner
showed the minimum noise power output required to meet the Heisenberg uncertainty principle is given as
where the bandwidth factor is half of the full width at half maximum, ν is the frequency of the photons, and h is the Planck constant. The term involving the photon energy hν is sometimes called quantum noise.
Shot noise and instrumentation
In precision optics with highly stabilized lasers and efficient detectors, quantum noise refers to the fluctuations of signal.
The random error of interferometric measurements of position, due to the discrete character of photon detection, is another form of quantum noise. The uncertainty of the position of a probe in probe microscopy may also be attributable to quantum noise, but it is not the dominant mechanism governing resolution.
In an electric circuit, the random fluctuations of a signal due to the discrete character of electrons can be called quantum noise.
An experiment by S. Saraf et al.
demonstrated shot-noise-limited measurements as a demonstration of quantum noise measurements. Generally speaking, they amplified a free-space Nd:YAG laser with minimal added noise as it transitioned from linear to nonlinear amplification. The experiment required a Fabry–Pérot cavity for filtering laser mode noise and selecting frequencies, two separate but identical probe and saturating beams to ensure uncorrelated beams, a zigzag slab gain medium, and a balanced detector for measuring quantum noise or shot-noise-limited noise.
Shot Noise Power
The theory behind noise analysis of photon statistics (sometimes called the forward Kolmogorov equation) starts from the master equation of Shimoda et al.
where one rate coefficient corresponds to the product of the emission cross section and the upper-state population, and the other to the absorption cross section and the lower-state population. The above relation describes the probability of finding n photons in a given radiation mode. The dynamics only consider neighboring photon numbers as the photons travel through a medium of excited and ground-state atoms. This gives a total of four photon transitions associated with one photon energy level: two in which a photon is added to the field on leaving an atom, and two in which a photon leaves the field and is absorbed by an atom. The noise power is given as,
Where,
is the power at the detector,
is the power limited shot noise,
the unsaturated gain (the expression also holds for the saturated gain),
is the efficiency factor, i.e. the product of the transmission efficiency of the window in front of the photodetector and the quantum efficiency.
is the spontaneous emission factor, which corresponds to the relative strength of spontaneous emission compared to stimulated emission. A value of unity would mean all doped ions are in the excited state.
Saraf et al. demonstrated quantum-noise- or shot-noise-limited measurements over a wide range of power gain that agreed with theory.
Zero-point fluctuations
The existence of zero-point energy fluctuations is well-established in the theory of the quantised electromagnetic field.
Generally speaking, at the lowest energy excitation of a quantized field that permeates all space (i.e. the field mode being in the vacuum state), the root-mean-square fluctuation of field strength is non-zero. This accounts for vacuum fluctuations that permeate all space.
This vacuum fluctuation, or quantum noise, will affect classical systems. It manifests as quantum decoherence in an entangled system, normally attributed to thermal differences in the conditions surrounding each entangled particle. Because entanglement is studied intensely in simple pairs of entangled photons, for example, the decoherence observed in experiments could well be synonymous with "quantum noise" as the source of the decoherence. Vacuum fluctuation is a possible cause for a quantum of energy to spontaneously appear in a given field or spacetime; thermal differences must then be associated with this event. Hence, it would cause decoherence in an entangled system in the proximity of the event.
Coherent states and noise of a quantum amplifier
A laser is described by the coherent state of light, a superposition of harmonic oscillator eigenstates. Erwin Schrödinger first derived the coherent state for the Schrödinger equation in 1926 to satisfy the correspondence principle.
The laser is a quantum mechanical phenomenon (see Maxwell–Bloch equations, rotating wave approximation, and the semi-classical model of a two-level atom). The Einstein coefficients and the laser rate equations are adequate if one is interested in the population levels and does not need to account for quantum coherences between populations (the off-diagonal terms in a density matrix). A photon number of the order of 10⁸ corresponds to a moderate energy. The relative error of a measurement of the intensity due to quantum noise is on the order of 10⁻⁵. This is considered good precision for most applications.
Quantum amplifier
A quantum amplifier is an amplifier which operates close to the quantum limit. Quantum noise becomes important when a small signal is amplified. A small signal's quantum uncertainties in its quadratures are also amplified; this sets a lower limit on the amplifier's noise. A quantum amplifier's noise appears in its output amplitude and phase. Generally, a laser signal is amplified across a spread of wavelengths around a central wavelength, with some mode distribution and polarization spread, but one can consider single-mode amplification and generalize to many different modes. A phase-invariant amplifier preserves the phase of the input without drastic changes to the output phase mode.
Quantum amplification can be represented with a unitary operator, as stated in D. Kouznetsov's 1995 paper.
See also
Quantum error correction
Quantum optics
Quantum limit
Shot noise
Quantum harmonic oscillator
References
Further reading
Clerk, Aashish A. Quantum Noise and quantum measurement. Oxford University Press.
Clerk, Aashish A., et al. Introduction to Quantum Noise, measurement, and amplification,Reviews of Modern Physics 82, 1155-1208.
Gardiner, C. W. and Zoller, P. Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics, Springer, 2004, 978-3540223016
Sources
C. W. Gardiner and Peter Zoller, Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics, Springer-Verlag (1991, 2000, 2004).
Quantum optics
Laser science | Quantum noise | [
"Physics"
] | 2,887 | [
"Quantum optics",
"Quantum mechanics"
] |
2,641,938 | https://en.wikipedia.org/wiki/Applied%20physics | Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
References
Engineering disciplines | Applied physics | [
"Physics",
"Engineering"
] | 286 | [
"Applied and interdisciplinary physics",
"nan"
] |
2,642,301 | https://en.wikipedia.org/wiki/Dianin%27s%20compound | Dianin's compound (4-p-hydroxyphenyl-2,2,4-trimethylchroman) was first prepared by Aleksandr Dianin in 1914. This compound is a condensation isomer of bisphenol A and acetone and of special importance in host–guest chemistry because it can form a large variety of clathrates with suitable guest molecules. One example is the clathrate of Dianin's compound with morpholine. Slow evaporation of a solution containing both organic compounds yields crystals. Each asymmetric unit cell making up the crystal contains six chroman molecules of which two are deprotonated and two protonated morpholine molecules. The six chroman molecules are racemate pairs.
References
Clathrates
4-Hydroxyphenyl compounds
Russian inventions
Chromanes | Dianin's compound | [
"Chemistry"
] | 180 | [
"Clathrates"
] |
2,642,796 | https://en.wikipedia.org/wiki/DEEP2%20Redshift%20Survey | The DEEP2 Survey, or DEEP2, was a two-phased redshift survey of the z ≈ 1 universe (where z, the redshift, is a measure of recession speed and, by extension, of distance from Earth). It used the twin 10-metre Keck telescopes in Hawaii (among the world's largest optical telescopes) to measure the spectra, and hence the redshifts, of approximately 50,000 galaxies. It was the first project to study galaxies in the distant Universe with the resolution of local surveys like the Sloan Digital Sky Survey, and was completed in 2013.
References
Observational astronomy
Astronomical surveys
Physical cosmology | DEEP2 Redshift Survey | [
"Physics",
"Astronomy"
] | 125 | [
"Galaxy stubs",
"Astronomical surveys",
"Theoretical physics",
"Observational astronomy",
"Works about astronomy",
"Astrophysics",
"Astronomy stubs",
"Physical cosmology",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
2,643,179 | https://en.wikipedia.org/wiki/Synchronoptic%20view | A synchronoptic view is a graphic display of a number of entities as they proceed through time. A synchronoptic view can be used for many purposes, but is best suited as visual displays of history. A number of related timelines can be drawn on a single chart, showing which events and lives are contemporary and which are unconnected.
A synchronoptic view has important educational advantages. Information presented visually is much more easily learned than when it is presented only in pure text form. History is an ideal subject for a synchronoptic view. Multiple timelines are able to show how events interacted. Multiple lifelines can show which people were contemporaries.
A combination of maps is also synchronoptic when it displays successive moments in time.
Etymology
The concept in question is made visual — hence optic.
The elements are displayed synchronously: i.e., which events in one area happened at the same time as events in another seemingly unrelated area.
Thus synchron-optic.
Synchronoptic also means "visible at the same time", or "with parallel views"; i.e., the user gets a view of all the information in one go.
Carte chronographique
Jacques Barbeu-Dubourg (1709–1779) was the first to develop a synchronoptical visualisation with his Chronographie universelle & details qui en dependent pour la Chronologie & les Genealogies (1753), abbreviated to Carte chronographique. The chronograph chart consisted of 35 prints which were designed to be stuck together in a row, enabling 6,500 years to be represented in a single continuous strip. The horizontal axis representing the passage of time was consistent throughout, but the vertical axis was varied depending on the categories Barbeu-Dubourg considered relevant for that period of history.
References
External links
Hyper History Online. A good example of a synchronoptic view of history
Adams Synchronological Chart
See also
Adams Synchronological Chart
Chronology
Visualization (graphics) | Synchronoptic view | [
"Physics"
] | 427 | [
"Chronology",
"Physical quantities",
"Time",
"Time stubs",
"Spacetime"
] |
2,644,968 | https://en.wikipedia.org/wiki/Acidogenesis | Acidogenesis is the second stage in the four stages of anaerobic digestion:
Hydrolysis: A chemical reaction where particulates are solubilized and large polymers converted into simpler monomers;
Acidogenesis: A biological reaction where simple monomers are converted into volatile fatty acids;
Acetogenesis: A biological reaction where volatile fatty acids are converted into acetic acid, carbon dioxide, and hydrogen
Methanogenesis: A biological reaction where acetates are converted into methane and carbon dioxide, while hydrogen is consumed.
Anaerobic digestion is a complex biochemical process of biologically mediated reactions by a consortium of microorganisms to convert organic compounds into methane and carbon dioxide. It is a stabilization process that reduces odor, pathogens, and waste volume.
Hydrolytic bacteria form a variety of reduced end-products from the fermentation of a given substrate. One fundamental question that arises concerns the metabolic features that control carbon and electron flow to a given reduced end-product during pure culture and mixed methanogenic cultures of hydrolytic bacteria. Thermoanaerobium brockii is a representative thermophilic, hydrolytic bacterium which ferments glucose via the Embden–Meyerhof–Parnas pathway. T. brockii is an atypical hetero-lactic acid bacterium because it forms molecular hydrogen (H2), in addition to lactic acid and ethanol. The reduced end-products of glucose fermentation are enzymatically formed from pyruvate via the following mechanisms: lactate by fructose 1,6-diphosphate (FDP)-activated lactate dehydrogenase; H2 by pyruvate ferredoxin oxidoreductase and hydrogenase; and ethanol via NADH- and NADPH-linked alcohol dehydrogenase.
Acidogenic activity, for its part, was identified in the early 20th century, but it was not until the mid-1960s that the engineering of phase separation was adopted in order to improve the stability and treatment of waste digesters. In this phase, complex molecules (carbohydrates, lipids, and proteins) are depolymerized into soluble compounds by hydrolytic enzymes (cellulases, hemicellulases, amylases, lipases and proteases). The hydrolyzed compounds are fermented into volatile fatty acids (acetate, propionate, butyrate, and lactate), neutral compounds (ethanol, methanol), ammonia, hydrogen and carbon dioxide.
Acetogenesis is one of the main reactions of this stage; in it, the intermediary metabolites produced are metabolized to acetate, hydrogen and carbon dioxide by three main groups of bacteria:
homoacetogens;
syntrophes; and
sulphate reducers.
Three kinds of bacteria are considered for the production of acetic acid:
Clostridium aceticum;
Acetobacterium woodii; and
Clostridium thermoautotrophicum.
Winter and Wolfe, in 1979, demonstrated that A. woodii in syntrophic association with Methanosarcina produces methane and carbon dioxide from fructose, instead of three molecules of acetate. Moorella thermoacetica and Clostridium formicaceticum are able to reduce carbon dioxide to acetate, but they do not have hydrogenases, which prevents them from using hydrogen, so they can produce three molecules of acetate from fructose. Acetic acid is likewise a co-metabolite of the fermentation of organic substrates (sugars, glycerol, lactic acid, etc.) by diverse groups of microorganisms which produce different acids:
Propionic bacteria (propionate + acetate);
Clostridium (butyrate + acetate);
Enterobacteria (acetate + lactate); and
Hetero-fermentative bacteria (acetate, propionate, butyrate, valerate, etc.).
References
Anaerobic digestion
Bacteriology
Biochemical reactions | Acidogenesis | [
"Chemistry",
"Engineering",
"Biology"
] | 860 | [
"Biochemistry",
"Biochemical reactions",
"Anaerobic digestion",
"Environmental engineering",
"Water technology"
] |
2,645,022 | https://en.wikipedia.org/wiki/Monterey%20Canyon | Monterey Canyon, or Monterey Submarine Canyon, is a submarine canyon in Monterey Bay, California, with steep canyon walls whose height from bottom to top rivals the depth of the Grand Canyon itself. It is the largest such submarine canyon along the West coast of the North American continent, and was formed by the underwater erosion process known as turbidity current erosion. Many questions remain unresolved regarding the exact nature of its origins, and as such it is the subject of several ongoing geological and marine life studies being carried out by scientists stationed at the nearby Monterey Bay Aquarium Research Institute, the Moss Landing Marine Laboratories, and other oceanographic institutions.
Monterey Canyon begins at Moss Landing, California, which is situated along the middle of the coast of Monterey Bay, and extends horizontally under the Pacific Ocean where it terminates at the Monterey Canyon submarine fan, reaching depths of up to 3,600 m (11,800 ft) below surface level at its downstream mouth. It is a part of the greater Monterey Bay Canyon System, which consists of Monterey, Soquel and Carmel Canyons. The canyon's depth and nutrient availability (due to the regular influx of nutrient-rich sediment) provide a habitat suitable for many marine life forms.
The Soquel Canyon State Marine Conservation Area protects a side-branch of the Monterey Submarine Canyon. Like an underwater park, this marine protected area helps conserve ocean wildlife and marine ecosystems.
Geomorphology
While the erosion process of turbidity current erosion which once carved out the submarine Monterey Canyon is well known, the cause of the great depth and length of this canyon, obviously carved out millions of years ago, and the unusually large size of the sedimentary deposit (fan) at its underwater mouth 95 miles West of Monterey, have all been a cause for some speculation. Typically submarine canyons of this depth and length which cut so far across a continental shelf, and with such large sedimentary fans attached, are only formed when aligned to receive the outflows of very major rivers, such as the Mississippi or the Amazon, and such canyons are not typically found in alignment with relatively low flow rivers such as the Salinas River. The dominant theory is that it is the remnant outlet of a larger river that may have once drained the Central Valley, possibly even via the Los Angeles Area Catchment Basin (recalling that the canyon has steadily moved northwest due to fault action along the San Andreas fault). Recent research supports the latter due to the chemistry in iron-manganese deposits on seamounts near the Canyon indicating a sediment origin of southern Sierra Nevada or western Basin and Range. The Salinas River is thought to have been the outlet for prehistoric Lake Corcoran, which once occupied much of the Central Valley. The Upper Turbidite Unit of the Monterey submarine fan may have formed soon after Lake Corcoran found a new outlet and was catastrophically drained via what is now San Francisco Bay, when sediment from the former lake bed was carried out its new outlet and then down to Monterey Bay by longshore drift.
Reconstructions of ancient land configurations via plate tectonic theory indicate that the canyon has moved north to its current location via the horizontal slip-action of the San Andreas Fault and would have been approximately where Santa Barbara is located when both the San Andreas Fault and the Gulf of California came into being. Similar undersea canyons exist at the mouths of other large rivers around the world today, for instance, the Hudson River Canyon. As no major river lies at the head of Monterey Canyon today, it is surmised that it may have come into being when such a river did so in the past.
The clues to the ancient origins of this canyon lie somewhere at the 2 mile deep downstream mouth of the canyon in a huge sedimentary bed called the Monterey Fan. This fan appears to be far too massive to have accumulated from the modern coastal streams. Research including core sampling is ongoing. Thus far, only "recent" sedimentary cores have been obtained. The oldest cores lie deeply buried, and remain to be probed. Once these deeper core samples have been properly analyzed and traced back to their original sedimentary sources, the answers to such speculations as to which river might have provided the high level of turbidity current flows which are believed to have most probably been required to carve out such a deep and long canyon, with such a huge sedimentary deposit (fan) at its mouth will all hopefully be finally resolved.
References
Citations
Journal
Website
External links
Moss Landing Marine Laboratories website
Canyons and gorges of California
Monterey Bay
Submarine canyons of the Pacific Ocean
Landforms of Monterey County, California
Landforms of Santa Cruz County, California
Physical oceanography | Monterey Canyon | [
"Physics"
] | 943 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
2,646,222 | https://en.wikipedia.org/wiki/Drell%E2%80%93Yan%20process | The Drell–Yan process occurs in high energy hadron–hadron scattering. It takes place when a quark of one hadron and an antiquark of another hadron annihilate, creating a virtual photon or Z boson which then decays into a pair of oppositely-charged leptons. Importantly, the energy of the colliding quark–antiquark pair can be almost entirely transformed into the mass of new particles. This process was first suggested by Sidney Drell and Tung-Mow Yan in 1970 to describe the production of lepton–antilepton pairs in high-energy hadron collisions. Experimentally, this process was first observed by J. H. Christenson et al. in proton–uranium collisions at the Alternating Gradient Synchrotron.
Overview
The Drell–Yan process is studied both in fixed-target and collider experiments. It provides valuable information about the parton distribution functions (PDFs) which describe the way the momentum of an incoming high-energy nucleon is partitioned among its constituent partons. These PDFs are basic ingredients for calculating essentially all processes at hadron colliders. Although PDFs should be derivable in principle, current ignorance of some aspects of the strong force prevents this. Instead, the forms of the PDFs are deduced from experimental data.
Drell–Yan process and deep inelastic scattering
PDFs are determined using the world data from deep inelastic scattering, Drell–Yan process, etc. The Drell–Yan process is closely related to the deep inelastic scattering; the Feynman diagram of the Drell–Yan process is obtained if the Feynman diagram of deep inelastic scattering is rotated by 90°. A time-like virtual photon or Z boson is produced in s-channel in the Drell–Yan process while a space-like virtual photon or Z boson is produced in t-channel in the deep inelastic scattering.
Sensitivity to light sea quark flavor asymmetry in the proton
It had been naively believed that the quark sea in the proton was formed by quantum chromodynamics (QCD) processes that did not discriminate between up and down quarks.
However, results of deep inelastic scattering of high-energy muons on proton and deuteron targets by CERN-NMC showed that there are more anti-down (d̄) than anti-up (ū) quarks in the proton.
The Gottfried sum measured by NMC was 0.235±0.026, which is significantly smaller than the expected value of 1/3.
This means that d̄(x) − ū(x), integrated over Bjorken x from 0 to 1.0,
is 0.147 ± 0.039, indicating a flavor asymmetry in the proton sea.
Recent measurements using Drell–Yan scattering probed the flavor asymmetry of the proton.
To leading order in the strong interaction coupling constant, αs, the Drell-Yan cross section is given by
where α is the fine-structure constant, e_q is the charge of the quark with flavor q, and the quark distribution functions are those of that quark in hadron A and hadron B, evaluated at the momentum fractions x₁ and x₂, respectively. Similarly, a bar denotes the antiquark distributions.
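For reference, a standard leading-order parton-model form of this cross section (a textbook expression; conventions and the choice of differential variables vary between references) is:

```latex
\frac{d^{2}\sigma}{dx_{1}\,dx_{2}} \;=\; \frac{4\pi\alpha^{2}}{9\,x_{1}x_{2}\,s}\,
\sum_{q} e_{q}^{2}\,\Big[\,q_{A}(x_{1})\,\bar q_{B}(x_{2}) + \bar q_{A}(x_{1})\,q_{B}(x_{2})\,\Big],
```

where s is the hadronic centre-of-mass energy squared and the colour-averaging factor is contained in the 1/9.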
Using the isospin symmetry, the parton distribution functions for proton and neutron are related as follows:
Therefore, the proton on deuterium over proton on hydrogen Drell-Yan cross section can be written as
Using the fact that there are more u quarks than d quarks in the proton, this ratio can be approximated as
where d̄ and ū are the anti-down and anti-up quark distributions in the proton sea and x is the Bjorken scaling variable (the momentum fraction of the target quark in the parton model).
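For reference, the isospin relations and the resulting approximation are commonly written as follows (standard textbook forms, valid when the beam proton's u-quark contribution dominates; a sketch, not necessarily the exact expressions of the source):

```latex
u_{n}(x) = d_{p}(x), \quad d_{n}(x) = u_{p}(x), \quad
\bar u_{n}(x) = \bar d_{p}(x), \quad \bar d_{n}(x) = \bar u_{p}(x);
\qquad
\frac{\sigma^{pd}}{2\,\sigma^{pp}} \;\approx\; \frac{1}{2}\left[\,1 + \frac{\bar d(x)}{\bar u(x)}\,\right].
```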
Z boson production
The production of Z bosons through the Drell–Yan process affords the opportunity to study the couplings of the Z boson to quarks. The main observable is the forward–backward asymmetry in the angular distribution of the two leptons in their center-of-mass frame.
If heavier neutral gauge bosons exist (see W′ and Z′ bosons), they might be discovered as a peak in the dilepton invariant mass spectrum in much the same way that the standard Z boson appears by virtue of the Drell–Yan process.
Drell–Yan process and the underlying event
Even though high-energy QCD processes are accessible via perturbation theory, lower-energy effects like hadronization are still only understood from a phenomenological perspective. Since virtual photons and Z bosons are unable to transport color charges, the properties of the underlying event can be studied effectively in selections of Drell–Yan events, where the hadronic background is ignored. What is left is the pure underlying event, insensitive to the physics of the hard Drell–Yan process. Other processes may suffer from misidentification issues, since they might also produce hadronic jets in the hard process.
See also
Fermilab E-906/SeaQuest
References
Quantum mechanics | Drell–Yan process | [
"Physics"
] | 1,053 | [
"Theoretical physics",
"Quantum mechanics"
] |
2,646,364 | https://en.wikipedia.org/wiki/Synthetic%20alexandrite | Synthetic alexandrite is an artificially grown crystalline variety of chrysoberyl, composed of beryllium aluminum oxide (BeAl₂O₄).
The name is also often used erroneously to describe synthetically-grown corundum that simulates the appearance of alexandrite, but with a different mineral composition.
Manufacture
Most true synthetic alexandrite is grown by the Czochralski method, known as “pulling”. Another method is a “floating zone”, developed in 1964 by an Armenian scientist Khachatur Saakovich Bagdasarov, of the Russian (former Soviet) Institute of Crystallography, Moscow. Bagdasarov’s floating zone method was widely used to manufacture white YAG for spacecraft and submarine lighting, before the process found its way into jewelry production. Alexandrite crystals grown by floating zone method tend to have less intensity in color than crystals grown by the pulled method.
Flux-grown alexandrite stones are expensive to make and are grown in platinum crucibles. Crystals of platinum may still be evident in the cut stones. Alexandrite grown by the flux-melt process will contain particles of flux, resembling liquid “feathers” with a refractive index and specific gravity that echo that of natural alexandrite. Some stones contain parallel groups of negative crystals. Due to the high cost of this process, it is no longer used commercially.
The largest producer of jewelry quality laboratory-grown alexandrite to this day is Tairus. Production capacity is in the range of 100 kg/year.
Chrysoberyl-based synthetics
Czochralski or “pulled” alexandrite is easier to identify because it is very “clean”. Curved striations visible with magnification are a give-away. Some pulled stones have been seen to change color from blue to red – similar to natural alexandrite from Brazil, Madagascar, and India. Seiko synthetic alexandrites show a swirled internal structure characteristic of the floating zone method of synthesis. They have “tadpole” inclusions (with long tails) and spherical bubbles.
Flux-grown alexandrites are more difficult to spot because of their convincing colors, and because they are not “clean”. Their inclusions of undissolved flux can look like inclusions in natural chrysoberyl. However, layers of dust-like particles parallel to the seed plate, and strong banding or growth lines may also be apparent.
The Inamori synthetic alexandrite had a cat's eye variety, which showed a distinct color change. The eye was broad and of moderate intensity. Specimens were a dark greyish-green with slightly purple overtones under fluorescent lighting. The eye was slightly greenish-bluish-white and the stones were dull and oily. They appeared to be inclusion-free and under a strong incandescent light in the long direction, asterism could be seen with two rays weaker than the eye. This has not been reported in natural alexandrite. Under magnification, parallel striations could be seen along the length of the cabochon and the striations were undulating rather than straight, again not a feature of natural alexandrite.
The name allexite has been used for synthetic alexandrite manufactured by the Diamonair Corporation, which maintains that its product is Czochralski-grown.
Corundum-based simulated alexandrite
Most gemstones described as synthetic alexandrite are actually simulated alexandrite: Synthetic corundum laced with vanadium to produce the color change. This alexandrite-like sapphire material has been known for almost 100 years.
The material shows a characteristic purple-mauve color change which, although attractive, differs from alexandrite because there is never any green. The stones will be very clean and may be available in large sizes. Gemological testing will reveal a refractive index of 1.759–1.778 (corundum) instead of 1.741–1.760 (chrysoberyl). Under magnification, gas bubbles and curved striae may be evident. When examined with a spectroscope, a strong vanadium absorption line at 475 nm will be apparent.
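As a rough illustration of the refractive-index test quoted above, the sketch below compares a measured value against the two ranges; the ranges are those stated here, the measured values are invented, and real identification relies on additional tests (birefringence, specific gravity, spectroscopy).

```python
# Sketch: classify a measured refractive index against the two ranges quoted above.
# Real gemological identification uses more than RI alone.

def classify_by_ri(ri):
    corundum = 1.759 <= ri <= 1.778      # simulated (corundum-based) alexandrite
    chrysoberyl = 1.741 <= ri <= 1.760   # true synthetic (chrysoberyl-based) alexandrite
    if corundum and chrysoberyl:
        return "ambiguous (ranges overlap near 1.76); use other tests"
    if corundum:
        return "consistent with corundum (simulated alexandrite)"
    if chrysoberyl:
        return "consistent with chrysoberyl (true synthetic alexandrite)"
    return "outside both ranges"

print(classify_by_ri(1.765))  # falls in the corundum range
print(classify_by_ri(1.745))  # falls in the chrysoberyl range
```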
Footnotes
References
External links
Alexandrite, synthetic
Alexandrite, synthetic | Synthetic alexandrite | [
"Physics",
"Chemistry"
] | 879 | [
"Matter",
"Synthetic materials",
"Materials",
"Optical materials",
"Synthetic minerals"
] |
12,977,206 | https://en.wikipedia.org/wiki/Granolithic | Granolithic screed, also known as granolithic paving and granolithic concrete, is a type of construction material composed of cement and fine aggregate such as granite or other hard-wearing rock. It is generally used as flooring, or as paving (such as for sidewalks). It has a similar appearance to concrete, and is used to provide a durable surface where texture and appearance are usually not important (such as outdoor pathways or factory floors). It is commonly laid as a screed. Screeds are a type of flooring laid on top of the structural element (like reinforced concrete) to provide a level surface on which the "wearing flooring" (the flooring which people see and walk on) is laid. A screed can also be laid bare, as it provides a long-lasting surface.
The aggregate mixed with the cement can be of various size, shape, and material, depending on the texture of the surface needed and how long-lasting it must be. The aggregate is usually sifted so that the particles are roughly the same size, which helps reduce air pockets in the material (which can weaken it). Generally, the mix of aggregate to cement is 2.5 to 1 by volume.
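As a worked example of the 2.5-to-1 ratio, the sketch below estimates constituent volumes for a screed of a given area and thickness; the area and thickness figures are assumptions for illustration, and the calculation ignores compaction, water content and wastage.

```python
# Sketch: constituent volumes for a granolithic screed mixed 2.5:1 (aggregate:cement) by volume.
# Area, thickness, and the neglect of compaction/water/wastage are illustrative assumptions.

area_m2 = 20.0        # assumed floor area
thickness_m = 0.04    # assumed 40 mm screed

total_volume = area_m2 * thickness_m   # m^3 of mixed material
cement = total_volume / (1 + 2.5)      # 1 part cement
aggregate = total_volume - cement      # 2.5 parts aggregate

print(f"total {total_volume:.3f} m^3, cement {cement:.3f} m^3, aggregate {aggregate:.3f} m^3")
```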
Granolithic screed or paving can be problematic. Because it is made with a high cement content and requires a great deal of water to mix, it may crack while drying. It can also come loose from the material below (especially if the lower material is not properly prepared). Pouring the material in layers is generally avoided. Cracking and curling can be reduced by dividing the area to be covered into smaller sections and then pouring the material. Debonding of the granolithic material can also be significantly avoided by using bonding agents like epoxy resins or polymer latex.
A high degree of skill in pouring and finishing the material is needed to prevent problems. Sealers and hardeners can be added to the granolithic material to improve its resistance to wear.
See also
roughcast (pebbledash): visually somewhat similar, but used mostly on outer walls
Footnotes
Bibliography
Emmitt, Stephen and Gorse, Christopher A. Barry's Introduction to Construction of Buildings. Chichester, U.K.: Wiley-Blackwell, 2010.
Harris, Cyril M. Dictionary of Architecture and Construction. New York: McGraw-Hill, 2005.
Ingham, Jeremy P. Geomaterials Under the Microscope: A Colour Guide. London: Manson, 2011.
Ransom, W.H. Building Failures: Diagnosis and Avoidance. Florence, Ky.:Taylor & Francis, 1987.
Snow, Dennis. Plant Engineer's Reference Book. 2d ed. Oxford: Butterworth-Heinemann, 2002.
O'Brien, Chris "Joseet, Rueben" 2014.
Concrete
Floors | Granolithic | [
"Engineering"
] | 577 | [
"Structural engineering",
"Floors",
"Concrete"
] |
12,978,453 | https://en.wikipedia.org/wiki/Enclomifene | Enclomifene, or enclomiphene, is a nonsteroidal selective estrogen receptor modulator of the triphenylethylene group. It acts by antagonizing the estrogen receptor (ER) in the pituitary gland, which reduces negative feedback by estrogen on the hypothalamic-pituitary-gonadal axis, thereby increasing gonadotropin secretion and hence gonadal production of testosterone. It is one of the two stereoisomers of clomifene, which itself is a mixture of 38% zuclomifene and 62% enclomifene. Enclomifene is the (E)-stereoisomer of clomifene, while zuclomifene is the (Z)-stereoisomer. Whereas zuclomifene is more estrogenic, enclomifene is more antiestrogenic. Accordingly, unlike enclomifene, zuclomifene is antigonadotropic due to activation of the ER and reduces testosterone levels in men. As such, isomerically pure enclomifene is more favorable than clomifene as a progonadotropin for the treatment of male hypogonadism.
Enclomiphene (former tentative brand names Androxal and EnCyzix) was under development for the treatment of male hypogonadism and type 2 diabetes. By December 2016, it was in preregistration and under review by the Food and Drug Administration in the United States and the European Medicines Agency in the European Union. In January 2018, the Committee for Medicinal Products for Human Use of the European Medicines Agency recommended refusal of marketing authorization for enclomifene for the treatment of secondary hypogonadism. In April 2021, development of enclomifene was discontinued for all indications.
Medical uses
Enclomiphene is primarily used as a treatment for men with persistent low testosterone as a result of secondary hypogonadotropic hypogonadism. In secondary hypogonadotropic hypogonadism, the resulting low levels of testosterone are attributed to inadequacies in the hypothalamic-pituitary-gonadal axis. In contrast, primary hypogonadism is caused by defects in the testes that render them unable to produce the required amount of testosterone.
Enclomiphene, which stimulates the endogenous production of testosterone, is not currently known to have common adverse effects of exogenous testosterone replacement therapy, such as reduced spermatogenesis or infertility.
Contraindications
Enclomiphene citrate is contraindicated in the groups of individuals below:
Pregnant women.
Breastfeeding women.
Women with unexplained uterine bleeding.
Women with ovarian growths or cysts unrelated to polycystic ovary syndrome.
Patients with a history of liver disease.
Patients with uncontrolled adrenal or thyroid dysfunction.
Patients with a known allergy to enclomiphene or clomiphene.
Adverse effects
The adverse effects of enclomiphene have not been extensively studied. Enclomiphene is a selective estrogen receptor modulator (SERM), which is associated with an increased risk of thrombo-embolic events. Enclomiphene, unlike testosterone replacement therapy, is not associated with infertility or decreased spermatogenesis.
The following adverse events were observed in a population of 1,403 persons participating in phase 2 and phase 3 studies of enclomiphene:
Mechanism of action
Enclomiphene is a selective estrogen receptor antagonist, antagonizing the estrogen receptors in the pituitary gland, disrupting the negative feedback loop by estrogen towards the hypothalamic-pituitary-gonadal axis, ultimately resulting in an increase in gonadotropin secretion.
In men with secondary hypogonadotropic hypogonadism, this improves testosterone levels and sperm motility. Men with secondary hypogonadotropic hypogonadism have abnormally low testosterone levels due to low-normal levels of luteinizing hormone (LH) and follicle-stimulating hormone (FSH). The biological role of these hormones is to stimulate the endogenous production of testosterone by the testes.
Common symptoms of secondary hypogonadotropic hypogonadism include low libido, energy, and mood. In addition, men with low testosterone may experience osteoporosis, an increase in visceral fat, and the regression of secondary sexual characteristics. Enclomiphene stimulates the endogenous production of testosterone. It works differently from traditional testosterone replacement therapy, which replaces testosterone using an exogenous source.
In addition, research has uncovered that enclomiphene increases total and free testosterone levels without increasing dihydrotestosterone disproportionately, suggesting that it "normalizes endogenous testosterone production pathways and restores normal testosterone levels in men with secondary hypogonadism."
History
Clomiphene citrate, from which enclomiphene citrate is derived, is a drug approved by the Food and Drug Administration (FDA) for indications of anovulatory or oligo-ovulatory infertility and male infertility (spermatogenesis induction).
A media release by the FDA for the pharmacy compounding advisory committee compared the efficacy of testosterone replacement therapy against enclomiphene. They wrote that while testosterone replacement therapy often resulted in side effects such as transference risk, supranormal testosterone levels, suppressed spermatogenesis, suppressed testicular function, and testicular atrophy, none of these risks are present in enclomiphene.
In 2009, a study discovered that "short-term clinical safety data for enclomiphene have been satisfactory and equivalent to safety data for testosterone gels and placebo."
In 2016, a study on enclomiphene citrate reported that "the ability [of enclomiphene citrate] to treat testosterone deficiency in men while maintaining fertility supports a role for enclomiphene citrate in the treatment of men in whom testosterone therapy is not a suitable option."
In 2019, a study was published that found that "enclomiphene has been shown to increase testosterone levels while stimulating [follicular-stimulating hormone] and [luteinizing hormone] production."
The key difference between enclomiphene citrate and traditional testosterone replacement therapy is that enclomiphene citrate stimulates the body to produce its own testosterone, while traditional testosterone replacement therapy replaces low testosterone levels in men with exogenous, synthetic testosterone.
A study conducted in 2013 offered this assessment of the potential of enclomiphene citrate to increase sexual function in men: "If enclomiphene citrate can correct the central defect in men that blocks their ability to produce [luteinizing hormone] and [follicle-stimulating hormone] and thus to produce both testosterone and sperm in the testes, this drug may prove itself superior to other treatments."
References
External links
Enclomifene - AdisInsight
Abandoned drugs
Diethylamino compounds
Organochlorides
Phenol ethers
Progonadotropins
Selective estrogen receptor modulators
Triphenylethylenes
Ethanolamines | Enclomifene | [
"Chemistry"
] | 1,705 | [
"Drug safety",
"Abandoned drugs"
] |
22,444,556 | https://en.wikipedia.org/wiki/Carbon%20nanotubes%20in%20medicine | Carbon nanotubes (CNTs) are prominent in today's medical research and are being studied intensively for efficient drug delivery and for biosensing methods for disease treatment and health monitoring. Carbon nanotube technology has been shown to have the potential to improve drug delivery and biosensing methods, and carbon nanotubes have therefore garnered considerable interest in the field of medicine.
The use of CNTs in drug delivery and biosensing technology has the potential to revolutionize medicine. Functionalization of single-walled nanotubes (SWNTs) has been shown to enhance solubility and allow for efficient tumor targeting and drug delivery. It also prevents SWNTs from being cytotoxic and from altering the function of immune cells.
Cancer, a group of diseases in which cells grow and divide abnormally, is one of the primary diseases studied with regard to how it responds to CNT drug delivery. Current cancer therapy primarily involves surgery, radiation therapy, and chemotherapy. These treatments are usually painful and kill normal cells in addition to producing adverse side effects. CNTs as drug delivery vehicles have shown potential in targeting specific cancer cells with a dosage lower than that of conventional drugs, one that is just as effective in killing the cells yet does not harm healthy cells and significantly reduces side effects. Current blood glucose monitoring methods used by patients with diabetes are normally invasive and often painful. For example, one method involves a continuous glucose sensor integrated into a small needle which must be inserted under the skin to monitor glucose levels every few days. Another method involves glucose monitoring strips to which blood must be applied. These methods are not only invasive but can also yield inaccurate results: 70 percent of glucose readings obtained by continuous glucose sensors were shown to differ by 10 percent or more, and 7 percent differed by over 50 percent. The high electrochemically accessible surface area, high electrical conductivity and useful structural properties of single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs) have demonstrated their potential in highly sensitive, noninvasive glucose detectors.
CNT properties
CNTs have several unique chemical, size, optical, electrical and structural properties that make them attractive as drug delivery and biosensing platforms for the treatment of various diseases and the noninvasive monitoring of blood levels and other chemical properties of the human body, respectively.
Electrical and structural
Carbon nanotubes can be metallic or semiconducting depending on their structure. This is due to the symmetry and unique electronic structure of graphene. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3, then the nanotube is semiconducting with a very small band gap, otherwise the nanotube is a moderate semiconductor. Thus all armchair (n=m) nanotubes are metallic, and nanotubes (5,0), (6,4), (9,1), etc. are semiconducting. Thus, some nanotubes have conductivities higher than that of copper, while others behave more like silicon.
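The (n, m) rule stated above can be written directly as a small classification function; this sketch encodes only the rule as quoted and ignores curvature corrections that matter for very small-diameter tubes.

```python
# Sketch of the (n, m) classification rule stated above.
def classify_nanotube(n, m):
    if n == m:
        return "metallic (armchair)"
    if (n - m) % 3 == 0:
        return "semiconducting with a very small band gap (quasi-metallic)"
    return "moderate semiconductor"

for chirality in [(5, 5), (9, 0), (6, 4), (9, 1), (5, 0)]:
    print(chirality, "->", classify_nanotube(*chirality))
```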
Dimensional
Due to their nanoscale dimensions, electron transport in carbon nanotubes takes place through quantum effects and propagates only along the axis of the tube. These electrical and structural properties are especially useful for biosensing, because current changes in the CNTs can signify the specific biological entities they are designed to detect. The small (nanometre-scale) size of CNTs also allows them to deliver smaller doses of drugs to specific diseased cells in the body, reducing side effects and harm to healthy cells compared with conventional drugs, while improving targeting efficiency.
Chemical
CNTs have been observed to have enhanced solubility when functionalized with lipids, which makes their movement through the human body easier and reduces the risk of blockage of vital organ pathways. As far as optical properties are concerned, CNTs have been shown to exhibit strong optical absorbance in certain spectral windows, such as near-infrared (NIR) light; when functionalized with tumor-cell-specific binding entities, they have allowed the selective destruction of diseased (e.g. cancer) cells with NIR light in drug delivery applications. These chemical and optical properties underpin their use as drug carriers.
CNTs in drug delivery and cancer therapy
Drug delivery is a rapidly growing area that is now taking advantage of nanotube technology. Systems currently used for drug delivery include dendrimers, polymers, and liposomes, but carbon nanotubes present the opportunity to work with effective structures that have high drug-loading capacities and good cell-penetration qualities. These nanotubes offer a large inner volume to be used as the drug container, large aspect ratios for numerous functionalization attachments, and the ability to be readily taken up by the cell. Because of their tube structure, carbon nanotubes can be made with or without end caps; without end caps, the interior where the drug is held is more accessible. At present, carbon nanotube drug delivery systems suffer from problems such as poor solubility, clumping, and limited circulation half-life. However, these are all issues that are currently being addressed for further advancement of the carbon nanotube field. Among the advantages of carbon nanotubes as nanovectors for drug delivery, efficient cell uptake of these structures has been demonstrated with prominent effects, indicating that particular nanotubes can be relatively harmless as nanovehicles for drugs. Drug encapsulation has also been shown to enhance water dispersibility, improve bioavailability, and reduce toxicity. Encapsulation of molecules also provides a material storage application as well as protection and controlled release of the loaded molecules. All of these results form a good basis for drug delivery, and further research and understanding could bring numerous other advances, such as increased water solubility, decreased toxicity, sustained half-life, and increased cell penetration and uptake, all of which are currently novel but undeveloped ideas.
Boron neutron capture therapy
Narayan Hosmane and his co-workers have recently developed a new approach to boron neutron capture therapy in the treatment of cancer using substituted carborane-appended water-soluble single-wall carbon nanotubes. Substituted C2B10 carborane cages were successfully attached to the side walls of single wall carbon nanotubes (SWCNTs) via nitrene cycloaddition. The decapitations of these C2B10 carborane cages, with the appended SWCNTs intact, were accomplished by the reaction with sodium hydroxide in refluxing ethanol. During base reflux, the three-membered ring formed by the nitrene and SWCNT was opened to produce water-soluble SWCNTs in which the side walls were functionalized by both substituted nido-C2B9 carborane units and ethoxide moieties. All new compounds were characterized by EA, SEM, TEM, UV, NMR, and IR spectra and chemical analyses. Selected tissue distribution studies on one of these nanotubes, {([Na+][1-Me-2-((CH2)4NH-)-1,2-C2B9H10][OEt])n(SWCNT)} (Va), showed that the boron atoms are concentrated more in tumor cells than in blood and other organs, making it an attractive nanovehicle for the delivery of boron to tumor cells for an effective boron neutron capture therapy in the treatment of cancer.
Selective cancer cell destruction
Carbon nanotubes can be used as multifunctional biological transporters and near-infrared agents for selective cancer cell destruction. Biological systems are known to be highly transparent to 700- to 1,100-nm near-infrared (NIR) light. Researchers showed that the strong optical absorbance of single-walled carbon nanotubes (SWNTs) in this special spectral window, an intrinsic property of SWNTs, can be used for optical stimulation of nanotubes inside living cells to afford multifunctional nanotube biological transporters. They used oligonucleotides transported inside living HeLa cells by nanotubes. The oligonucleotides translocated into the cell nucleus upon endosomal rupture triggered by NIR laser pulses. Continuous NIR radiation caused cell death because of excessive local heating of SWNTs in vitro. Selective cancer cell destruction was achieved by functionalization of SWNT with a folate moiety, selective internalization of SWNTs inside cells labeled with folate receptor tumor markers, and NIR-triggered cell death, without harming receptor-free normal cells. Thus, the transporting capabilities of carbon nanotubes combined with suitable functionalization chemistry and their intrinsic optical properties can lead to new classes of novel nanomaterials for drug delivery and cancer therapy.
Tumor targeting
Research has been conducted on in vivo biodistribution and highly efficient tumor targeting of carbon nanotubes in mice for cancer therapy. Investigations are being done on the biodistribution of radio-labelled SWNTs in mice by in vivo positron emission tomography (PET), ex vivo biodistribution and Raman spectroscopy. It was found that SWNTs that are functionalized with phospholipids bearing polyethylene glycol (PEG) are surprisingly stable in vivo. The effect of PEG chain length on the biodistribution and circulation of the SWNTs was studied. Effectively PEGylated SWNTs exhibited relatively long blood circulation times and low uptake by the reticuloendothelial system (RES). Efficient targeting of integrin positive tumor in mice was achieved with SWNTs coated with PEG chains linked to an arginine–glycine–aspartic acid (RGD) peptide. A high tumor accumulation was attributed to the multivalent effect of the SWNTs. The Raman signatures of SWNTs were used to directly probe the presence of nanotubes in mice tissues and confirm the radio-label-based results.
CNTs as biosensors
CNT network bio-stress sensors
A single nanotube experiences a change in electrical resistance when experiencing stress or strain. This piezoresistive effect changes the current flow through the nanotube, which can be measured in order to accurately quantify the applied stress. A semi-random positioning of many overlapping nanotubes forms an electrically conducting network composed of many piezoresistive nanotubes. If the variance of the tube lengths and angles are known and controllable during manufacture, an eigensystem approach can be used to determine the expected current flow between any two points in the network. The tube network is embedded within orthopedic plates, clamps, and screws and in bone grafts in order to determine the state of bone healing by measuring the effect of a load on the plate, clamp, screw, or other fixation device attached to the bone. A healed bone will bear most of the load while a yet unhealed bone will defer the load to the fixation device wherein the nanotube network may measure the change in resistivity. Measurement is done wirelessly by electrical induction. This allows the doctor to accurately assess patient healing and also allows the patient to know how much stress the affected area may safely tolerate. Wolff's law indicates that bone responds positively to safe amounts of stress, which may be necessary for proper healing.
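As a rough illustration of the piezoresistive readout described above, the sketch below converts a measured fractional resistance change into an estimated strain via ΔR/R = GF·ε; the gauge factor and resistance readings are invented values, and a real implanted CNT network would need the calibrated network model mentioned in the text.

```python
# Sketch: estimate strain from a fractional resistance change in a piezoresistive
# element, delta_R/R = GF * strain. The gauge factor and readings are assumed values;
# a real CNT-network sensor requires calibration of the whole network.

gauge_factor = 60.0     # assumed effective gauge factor of the CNT network
r_unloaded = 1520.0     # ohms, assumed baseline resistance
r_loaded = 1526.8       # ohms, assumed reading under load

strain = (r_loaded - r_unloaded) / r_unloaded / gauge_factor
print(f"estimated strain: {strain:.2e}")   # ~7.5e-05, i.e. about 75 microstrain
```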
Glucose detection biosensors
Carbon nanotube–plasma polymer-based amperometric biosensors for ultrasensitive glucose detection have been fabricated. Two amperometric enzyme biosensors were built: one with single-wall nanotubes and the other with multi-wall nanotubes, with plasma-polymerized thin films (PPFs) incorporated into both. A mixture of the enzyme glucose oxidase (GOD) and a CNT film was sandwiched between 10-nm-thick acetonitrile PPFs, and a PPF layer was deposited onto a sputtered gold electrode. To facilitate electrochemical communication between the CNT layer and GOD, the CNTs were treated with oxygen plasma. The device with single-walled CNTs showed a higher sensitivity than that with multi-walled CNTs. The glucose biosensor showed ultrasensitivity (a sensitivity of 40 μA mM−1 cm−2, a correlation coefficient of 0.992, a linear response range of 0.025–1.9 mM, and a detection limit of 6.2 μM at S/N = 3 and +0.8 V vs Ag/AgCl) and a rapid response (<4 seconds to reach 95% of maximum response). This high performance is attributed to the excellent electrocatalytic activity and enhanced electron transfer of CNTs, and to the fact that the PPFs and/or the plasma treatment of the CNTs provide an enzyme-friendly platform, i.e., a suitable design of the interface between GOD and CNTs.
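Given the quoted sensitivity of about 40 μA mM−1 cm−2, converting a measured current into a glucose concentration is a simple calibration calculation; in the sketch below the electrode area and the measured current are invented values, and a real sensor would be calibrated against standard solutions.

```python
# Sketch: convert a measured amperometric current to a glucose concentration
# using the quoted sensitivity (~40 uA per mM per cm^2). The electrode area and
# the measured current are illustrative assumptions.

sensitivity_uA_per_mM_cm2 = 40.0
electrode_area_cm2 = 0.07          # assumed working-electrode area
measured_current_uA = 2.1          # assumed steady-state current

current_density = measured_current_uA / electrode_area_cm2   # uA/cm^2
glucose_mM = current_density / sensitivity_uA_per_mM_cm2     # mM

print(f"{glucose_mM:.2f} mM glucose")  # ~0.75 mM, inside the quoted 0.025-1.9 mM linear range
```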
DNA detection biosensors
An aligned carbon nanotube ultrasensitive biosensor for DNA detection was developed. The design and fabrication of the biosensor were based on aligned single-wall carbon nanotubes (SWCNTs) with integrated single-strand DNAs (ssDNA). The fabricated ultra-sensitive biosensor provided label-free, real-time electronic detection of DNA hybridization between surface-immobilized ssDNA and target ssDNA. Hybridization between complementary and target ssDNA nucleotide base pairs resulted in local charge generation between base pairs that was injected into the SWCNTs, producing a detectable change in SWCNT electrical conductance. This conductance change was amplified electrically through the integration of the functionalized SWCNTs as the semi-conductive channel in a silicon-silicon oxide based field effect transistor (FET). Based on previous Langmuir DNA kinetics calculations, the projected sensitivity level of the SWCNT-DNA sensor was considerably higher than that of traditional fluorescent and hybridization assays.
CNT modified electrode biosensors
A microbial biosensor based on carbon nanotube (CNT) modified electrodes was developed. Pseudomonas putida DSM 50026 cells were used as the biological component and the measurement was based on the respiratory activity of the cells estimated from electrochemical measurements. The cells were immobilized on carbon nanotube (CNT) modified carbon paste electrodes (CPE) by means of a redox osmium polymer. The osmium polymer efficiently shuttled electrons between redox enzymes located in the cell wall of the cells and promoted a stable binding to the electrode surface. The effect of varying the amounts of CNT and osmium polymer, on the response to glucose was investigated to find the optimum composition of the sensor. The effects of pH and temperature were also examined. After the optimisation studies, the system was characterised by using glucose as a substrate. Moreover, the microbial biosensor was also prepared by using phenol adapted bacteria and then, calibrated to phenol. After that, it was applied for phenol detection in an artificial waste water sample. The study found that whole cell P. putida biosensors using Os-redox polymers could be good alternatives for the analysis of different substrates such as glucose as well as xenobiotics in the absence of oxygen with high sensitivity because of the fast electron collection efficiency between the Os-redox polymer and the bacterial cells. The use of optimum amounts of CNTs and the Os redox mediator provided better sensor sensitivity by promoting the electron transfer within the structure of the biosensor. The main disadvantages were the high surface area of CNTs that increased the background current and the diffusion problem of electrons that occurred due to overlapping of the diffusion layers formed at closely spaced CNTs in the film. However, these problems could be overcome by optimising the CNT and polymer amounts.
Toxicity issues
Cytotoxity of functionalized CNTs
Research shows that functionalized carbon nanotubes are non-cytotoxic and preserve the functionality of primary immune cells. Two types of f-CNTs were prepared, following the 1,3-dipolar cycloaddition reaction (f-CNTs 1 and 2) and the oxidation/amidation treatment (f-CNTs 3 and 4), respectively. Both types of f-CNTs were uptaken by B and T lymphocytes as well as macrophages in vitro, without affecting cell viability. Subsequently, the functionality of the different cells was analyzed carefully. It was discovered that f-CNT 1, which is highly water-soluble, did not influence the functional activity of immunoregulatory cells. f-CNT 3, which instead possesses reduced solubility and forms mainly stable water suspensions, preserved lymphocytes' functionality while provoking secretion of proinflammatory cytokines by macrophages. One important thing to note from this study is the fact that certain types of CNTs functionalized with lipids are highly water-soluble which would make their movement through the human body easier and would also reduce the risk of blockage of vital body organ pathways thus making them more attractive as drug delivery vehicles.
In vitro cytotoxicity
In vitro toxicity of single- and multi-walled carbon nanotubes in human astrocytoma and lung carcinoma cells was investigated. The study was undertaken to characterize the physicochemical properties of single-walled nanotubes (SWNTs), multi-walled nanotubes (MWNTs) and functionalized MW (MW-COOH and MW-NH2), and to assess their cytotoxicity in human astrocytoma D384-cells and lung carcinoma A549-cells, using the MTT assay and calcein/propidium iodide (PI) staining. Both the as-received and the modified nanotubes were characterized by means of thermal analysis (TGA), infrared spectroscopy and atomic force microscopy chiefly to check the degree of functionalization. The cells were exposed to the nanomaterials (0.1–100 μg/ml) for 24, 48 and 72 hours in a medium containing 10% FCS. In D384 cells MTT results revealed a strong cytotoxicity (50%) of SWNTs after 24‑hour exposure already at 0.1 μg/ml, without further changes at higher concentrations or longer incubation times. At all time-points MTT metabolism was decreased by 50% by all the other compounds at 10 μg/ml and with no exacerbation at the higher dose. Similar results were obtained with A549 cells. Experiments using calcein/PI staining did not confirm MTT cytotoxicity data neither in D384- nor in A549-cells. The viability of these cells was not affected by any nanotube at any concentration or time of exposure, with the exception of the positive control SiO2. The results suggested the need of a careful examination of carbon nanotubes toxic effects by means of multiple tests to circumvent the possible problem of artifactual results due to the interference of nanomaterials with the dye markers employed.
Cytotoxicity of SWNTs and MWCNTs
Multi-walled carbon nanotubes have been investigated in several species for their potential to promote mutagenesis. Studies in spinach, mice, various human cell lines, and rats have shown that MWCNT exposure is associated with oxidative damage, increased apoptosis, chromosome damage, and necrosis. A study in mice found that biomarkers for lung cancer were specifically affected by MWCNT exposure; these biomarkers are being researched as a method for monitoring occupational exposure to carbon nanotubes.
The cytotoxicity was investigated on healthy alveolar macrophage cells obtained from adult guinea pigs for single-wall nanotubes (SWNTs), multi-wall nanotubes (with diameters ranging from 10 to 20 nm, MWNT10), and fullerene (C60) for comparison purposes. Profound cytotoxicity of SWNTs was observed in alveolar macrophage (AM) after a 6-hour exposure in vitro. The cytotoxicity increased by as high as ~35% when the dosage of SWNTs was increased by 11.30 μg/cm2. No significant toxicity was observed for C60 up to a dose of 226.00 μg/cm2. The cytotoxicity apparently followed a sequence order on a mass basis: SWNTs > MWNT10 > quartz > C60. SWNTs significantly impaired phagocytosis of AM at the low dose of 0.38 μg/cm2, whereas MWNT10 and C60 induced injury only at the high dose of 3.06 μg/cm2. The macrophages exposed to SWNTs or MWNT10 of 3.06 μg/cm2 showed characteristic features of necrosis and degeneration. A sign of apoptotic cell death likely existed. It was concluded from the study that carbon nanomaterials with different geometric structures exhibit quite different cytotoxicity and bioactivity in vitro, although they may not be accurately reflected in the comparative toxicity in vivo.
References
External links
Carbon nanotube bone healing stress sensor (video)
Nanomedicine
Medicine articles needing expert attention | Carbon nanotubes in medicine | [
"Materials_science"
] | 4,452 | [
"Nanomedicine",
"Nanotechnology"
] |
22,444,842 | https://en.wikipedia.org/wiki/Monk%27s%20formula | In mathematics, Monk's formula, found by Monk (1959), is an analogue of Pieri's formula that describes the product of a linear Schubert polynomial by a Schubert polynomial. Equivalently, it describes the product of a special Schubert cycle by a Schubert cycle in the cohomology of a flag manifold.
Write $t_{ij}$ for the transposition $(i\,j)$, and $s_i = t_{i,i+1}$. Then $\mathfrak{S}_{s_r} = x_1 + \cdots + x_r$, and Monk's formula states that for a permutation $w$,
$\mathfrak{S}_{s_r}\mathfrak{S}_w = \sum \mathfrak{S}_{wt_{ij}}$, where the sum runs over pairs $(i, j)$ with $i \le r < j$ and $\ell(wt_{ij}) = \ell(w) + 1$, and where $\ell(w)$ is the length of $w$. The pairs $(i, j)$ appearing in the sum are exactly those such that $i \le r < j$, $w_i < w_j$, and there is no $i < k < j$ with $w_i < w_k < w_j$; each $wt_{ij}$ is a cover of $w$ in Bruhat order.
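The combinatorial description of the sum can be checked mechanically; the sketch below enumerates the admissible pairs (i, j) for a permutation w given in one-line notation and a fixed r, using the cover condition ℓ(w t_ij) = ℓ(w) + 1 (the example permutation is arbitrary).

```python
# Sketch: enumerate the terms of Monk's formula for a permutation w (one-line notation,
# 1-indexed values) and a fixed r. A pair (i, j) with i <= r < j is kept when
# length(w t_ij) = length(w) + 1, i.e. when w t_ij covers w in Bruhat order.
from itertools import combinations

def length(w):
    # number of inversions of the permutation
    return sum(1 for a, b in combinations(range(len(w)), 2) if w[a] > w[b])

def monk_terms(w, r):
    terms = []
    for i in range(1, r + 1):
        for j in range(r + 1, len(w) + 1):
            wt = list(w)
            wt[i - 1], wt[j - 1] = wt[j - 1], wt[i - 1]   # right-multiply by t_ij
            if length(wt) == length(w) + 1:
                terms.append(((i, j), tuple(wt)))
    return terms

# Example with an arbitrary permutation: w = 2 1 4 3 in S_4, r = 2
for pair, wt in monk_terms((2, 1, 4, 3), 2):
    print(pair, "->", wt)
```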
References
Symmetric functions | Monk's formula | [
"Physics",
"Mathematics"
] | 178 | [
"Algebra",
"Symmetric functions",
"Symmetry"
] |
22,445,951 | https://en.wikipedia.org/wiki/SheevaPlug | The SheevaPlug is a "plug computer" designed to pack standard computing features into as small a space as possible. It was a small embedded Linux ARM computer without a display, and can be considered an early predecessor of the later Raspberry Pi.
As one of the first such computers on the market, the device has a 1.2 GHz Marvell Kirkwood 6281 ARM-compatible CPU, a.k.a. Feroceon. It was sold with Ubuntu Linux version 9.04 pre-installed. A software development kit for the platform is also available.
Commercial products
The following commercial products are known to be based on the SheevaPlug platform:
BarracudaDrive is a free Cloud Server for the SheevaPlug.
CTERA CloudPlug by CTERA Networks, a plug computer providing remote backup service at local disk speeds and overlays a file sharing service.
TonidoPlug from CodeLathe, a SheevaPlug-based device that runs Tonido home server and NAS software, and allows users to access, share and sync files and media.
Pogoplug by Cloud Engines, a device that lets users access their files at home over the Internet without leaving a PC on.
Seagate FreeAgent DockStar and Black Armor 110/220 NAS, both a variant of the Pogoplug.
GuruPlug, a SheevaPlug with additional connectivity options.
DreamPlug, similar to a GuruPlug+
The PylonPlug by Equelex, a single-interface OpenWrt device that, when used in conjunction with a VLAN (IEEE 802.1Q) capable network switch, can be used as a multi-WAN network router. Its operating system is OpenWrt Linux.
The sipJack from pbxnsip is a Sheeva kit-based plug computer and provides Voice over IP services and PBX features.
The WeatherHub2 by Ambient Weather, a server that collects data from a weather station and uploads data to Web pages or other Internet services.
The GeNiJack by NETCOR. An endpoint for end-to-end network performance assessment.
BACnet Gateway by Kara Systems, a M-Bus, Modbus and OneWire gateway which represents a BACNet Device
Pwnie Express is a Computer Security tool.
AvaGigE by Avantes, USB to Ethernet converter which supports the connection of Avantes spectrometers to an Ethernet network.
Evercube, a do-it-yourself home server, designed for quiet, continuous operation in the living room
Lockitron server for remote operation of locks—with key management. Control server based on the SheevaPlug.
Iomega iConnect, a wireless, diskless NAS
ZigBee Gateway ZBG-100 from pikkerton
Pwn Plug by Pwnie Express
Other operating system ports and stacks
FreedomBox, for secured, encrypted and fully decentralized networking based on Debian
Debian has official support for the SheevaPlug and other plug computers, such as the GuruPlug.
Mark Gillespie has created scripts to build and install Debian Lenny and Squeeze onto either the internal NAND or SD card
An ARM port of Fedora exists that can be installed on the SheevaPlug.
Raúl Porcel has managed to run Gentoo on the plug and published an instruction on how to do so.
Stuart Winter has a working Slackware port. This is the official port of Slackware version 13.1 to ARM. Slackware for ARM now officially supports SheevaPlug.
Inferno boots on the SheevaPlug.
Plan 9 supports SheevaPlug (and other Kirkwood-based systems) in its official distribution.
SheevaPlug is supported on NetBSD 6.0 and FreeBSD 8.0 or newer.
OpenWrt supports the SheevaPlug.
NixOS (SVN trunk) supports the SheevaPlug since the last quarter of 2009.
Plugbox Linux is an Arch Linux port for SheevaPlug and other plug devices.
Amahi is a home file server which has recently been ported to the SheevaPlug and other plug computing devices.
Arch Linux ARM ArchLinux for plug computer devices (ARMv5, ARMv6, ARMv7).
Pathagar Book Server - SheevaPlug Edition is an Open Publication Distribution System OPDS based Book Server running on top of Debian Squeeze.
RedSleeve A distribution derived from RHEL ported to ARM (ARMv5, ARMv6, ARMv7).
Variants and modifications
A version with an eSATA port for connecting a serial ATA hard disk is also available and sometimes referred to as SheevaPlug+. Revision 1.3 of the SheevaPlug can be extended with an eSATA port, but soldering is required and will void the warranty.
Marvell offers a development kit to assist in the development of software for the platform. The kit includes the GCC cross-compiler for ARM. The device includes a mini USB connector wired to an FTDI FT2232 chip which provides the developer's computer with access to two ports, a JTAG port connected to the internal JTAG bus, and an RS-232 port connected to the Kirkwood processor's serial port through which the bootstrap and kernel console can be accessed. This debug console can be accessed from any computer with support for the FTDI bus translator (FreeBSD, Linux, Mac OS X, Windows).
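A hedged sketch of opening that debug console from a host computer with the pyserial library follows; the device path, the choice of FT2232 channel, and the 115200 baud rate are assumptions that depend on the host operating system and board revision.

```python
# Sketch: open the SheevaPlug debug console from a host PC using pyserial.
# The device path (/dev/ttyUSB1), the choice of the second FT2232 channel,
# and the 115200 baud rate are assumptions; adjust for your host OS and board.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB1", baudrate=115200, timeout=1) as console:
    console.write(b"\n")                              # nudge the bootloader/Linux console prompt
    print(console.read(256).decode(errors="replace"))  # show whatever the console replies
```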
References
External links
A Note On Setting Up the SheevaPlug Linux Embedded Computer Off-Grid
Linux-based devices
Computer storage devices
Computer-related introductions in 2009 | SheevaPlug | [
"Technology"
] | 1,164 | [
"Computer storage devices",
"Recording devices"
] |
22,446,288 | https://en.wikipedia.org/wiki/Hebeloma%20alpinum | Hebeloma alpinum is a species of mushroom in the family Hymenogastraceae. It was originally described from Switzerland by Favre as variety alpina of Hebeloma crustuliniforme; G. Bruchet raised it to species status in 1970.
See also
List of Hebeloma species
References
alpinum
Fungi described in 1955
Fungi of Europe
Fungus species | Hebeloma alpinum | [
"Biology"
] | 79 | [
"Fungi",
"Fungus species"
] |
22,446,301 | https://en.wikipedia.org/wiki/Hebeloma%20aminophilum | Hebeloma aminophilum, commonly known as the ghoul fungus, is a species of mushroom in the family Hymenogastraceae. Found in Western Australia, it gets its common name from the propensity of the fruiting bodies to spring out of decomposing animal remains.
Taxonomy
The ghoul fungus was first described by mycologists R.N. Hilton and Orson K. Miller, Jr. in 1987. The holotype collection consisted of about 100 specimens that were fruiting around the bones of a decomposing kangaroo carcass that had been dumped some months before.
Etymology
The generic name is derived from the Ancient Greek Hebe, "youth", and -loma, a fringe (pertaining to the fungal veil), referring to how the fungal veil is only seen in immature specimens. It gets its common name of ghoul fungus from its habit of growing around animal carcasses.
Description
The dull pinkish brown or cream cap is in diameter, convex initially before flattening out with age. There is a slight umbo, and the cap margin is inrolled when young. A thin white veil rapidly disappears in young mushrooms. The cap surface is sticky initially. The adnate (or sometimes adnexed) gills are pale pink to pinkish brown and up to 1 cm deep. With age, they can be encrusted with clumps of spores. The cylindrical stipe is high, 1–1.2 cm in diameter and has a thickened base and lacks a ring. The thick flesh is cream or pale yellow, with a bitter taste and a stale smell. The spore print is pinkish brown, and the oval spores measure 8.5 by 4.9 μm. The mycelium is white.
Similar species
Similar species include the introduced poisonpie (Hebeloma crustuliniforme), which has been recorded in pine plantations, the native western Australian poisonpie (H. westraliense), which does not grow near carcasses, and the Australian white webcap (Cortinarius austroalbidus), which is paler and smells of curry.
Distribution and habitat
An uncommon fungus, H. aminophilum is found in southern Western Australia, southeastern South Australia and Victoria. Fruiting bodies arise in eucalyptus woodland in the vicinity of sheep, reptile and bird carcasses. The habit of growing from flesh gives it the term sarcophilous.
See also
List of Hebeloma species
References
Ammonia fungi
aminophilum
Fungi described in 1987
Fungi native to Australia
Fungus species | Hebeloma aminophilum | [
"Chemistry",
"Biology"
] | 529 | [
"Fungus species",
"Ammonia fungi",
"Fungi",
"Metabolism"
] |
22,446,376 | https://en.wikipedia.org/wiki/Hebeloma%20cavipes | Hebeloma cavipes is a species of mushroom in the family Hymenogastraceae.
Description
Pileus: diameter, cap convex and sometimes umbonate, slightly viscid. Cap colour yellow brown to cinnamon to chestnut or even dark brick, sometimes with a pale but strongly coloured zone and finally pinkish buff to cream to almost white near the margin. Disc zonate. Margin sometimes involute and slightly scalloped, but usually straight. Lamellae emarginated, spaced moderately; colour cream or brown when young, later sepia as spores mature; edge fimbriated and paler than lamellae; with droplets. Lamellules frequent.
Stipe central, sometimes cylindrical but usually clavate and subbulbous, white to leather-tan, usually discoloring to brown with age. Stipe surface pruinose to floccose in the apex. Cortina not observed. Smell raphanoid with cocoa hints. Taste raphanoid to bitter.
Spore deposit brownish olive to umber. Spores amygdaloid with a small apiculus. Size 9.2–11.7 × 5.5–6.6 μm.
References
cavipes
Fungi of Europe
Fungus species | Hebeloma cavipes | [
"Biology"
] | 257 | [
"Fungi",
"Fungus species"
] |
22,447,230 | https://en.wikipedia.org/wiki/DOTA%20%28chelator%29 | DOTA (also known as tetraxetan) is an organic compound with the formula (CH2CH2NCH2CO2H)4. The molecule consists of a central 12-membered tetraaza (i.e., containing four nitrogen atoms) ring. DOTA is used as a complexing agent, especially for lanthanide ions. Its complexes have medical applications as contrast agents and cancer treatments.
Terminology
The acronym DOTA (for dodecane tetraacetic acid) is shorthand for both the tetracarboxylic acid and its various conjugate bases. In the area of coordination chemistry, the tetraacid is called H4DOTA and its fully deprotonated derivative is DOTA4−. Many related ligands are referred to using the DOTA acronym, although these derivatives are generally not tetracarboxylic acids or the conjugate bases.
Structure
DOTA is derived from the macrocycle known as cyclen. The four secondary amine groups are modified by replacement of the N-H centers with N-CH2CO2H groups. The resulting aminopolycarboxylic acid, upon ionization of the carboxylic acid groups, is a high affinity chelating agent for di- and trivalent cations. The tetracarboxylic acid was first reported in 1976.
At the time of its discovery DOTA exhibited the largest known formation constant for the complexation (chelating) of Ca2+ and Gd3+ ions. Modified versions of DOTA were first reported in 1988 and this area has proliferated since.
As a polydentate ligand, DOTA envelops metal cations, but the denticity of the ligand depends on the geometric tendencies of the metal cation. The main applications involve the lanthanides and in such complexes DOTA functions as an octadentate ligand, binding the metal through four amine and four carboxylate groups. Most such complexes feature an additional water ligand, giving an overall coordination number of nine.
For most transition metals, DOTA functions as a hexadentate ligand, binding through the four nitrogen and two carboxylate centres. The complexes have octahedral coordination geometry, with two pendent carboxylate groups. In the case of [Fe(DOTA)]−, the ligand is heptadentate.
Uses
Cancer treatment and diagnosis
DOTA can be conjugated to monoclonal antibodies by attachment of one of the four carboxyl groups as an amide. The remaining three carboxylate anions are available for binding to the yttrium ion. The modified antibody accumulates in the tumour cells, concentrating the effects of the radioactivity of 90Y. Drugs containing this module receive an International Nonproprietary Name ending in tetraxetan:
Yttrium (90Y) clivatuzumab tetraxetan
Yttrium (90Y) tacatuzumab tetraxetan
DOTA can also be linked to molecules that have affinity for various structures. The resulting compounds are used with a number of radioisotopes in cancer therapy and diagnosis (for example in positron emission tomography).
Affinity for somatostatin receptors, which are found on neuroendocrine tumours:
DOTATOC, DOTA-(Tyr3)-octreotide or edotreotide
DOTA-TATE or DOTA-(Tyr3)-octreotate
Affinity for the proteins streptavidin and avidin, which can be targeted at tumours by aid of monoclonal antibodies:
DOTA-biotin
Contrast agent
The complex of Gd3+ and DOTA is used as a gadolinium-based MRI contrast agent under the name gadoteric acid.
Synthesis
DOTA was first synthesized in 1976 from cyclen and bromoacetic acid. This method is simple and still in use.
References
Carboxylic acids
Macrocycles
Chelating agents
Octadentate ligands
Substances discovered in the 1970s | DOTA (chelator) | [
"Chemistry"
] | 850 | [
"Carboxylic acids",
"Functional groups",
"Organic compounds",
"Macrocycles",
"Chelating agents",
"Process chemicals"
] |
4,855,261 | https://en.wikipedia.org/wiki/SPEAR | SPEAR (originally Stanford Positron Electron Accelerating Ring, now simply a name) was a collider at the SLAC National Accelerator Laboratory. It began running in 1972, colliding electrons and positrons with an energy of . During the 1970s, experiments at the accelerator played a key role in particle physics research, including the discovery of the J/ψ meson (awarded the 1976 Nobel Prize in Physics), many charmonium states, and the discovery of the tau lepton (awarded the 1995 Nobel Prize in Physics).
Today, SPEAR is used as a synchrotron radiation source for the Stanford Synchrotron Radiation Lightsource (SSRL). The latest major upgrade of the ring, finished in 2004, gave it its current name, SPEAR3.
Notes
a: The original design consisted of a single ring; an upgraded proposal for a pair of asymmetric rings did not receive enough funding, and the acronym was finally kept as a simple name. The name Stanford Positron Electron Asymmetric Ring is also used in official sources.
References
External links
Brief explanation of the acronym in SLACspeak
25th Anniversary Info from SLAC
SPEAR3 status
Buildings and structures in San Mateo County, California
Particle physics facilities
Stanford University
Particle accelerators | SPEAR | [
"Physics"
] | 252 | [
"Particle physics stubs",
"Particle physics"
] |
4,856,882 | https://en.wikipedia.org/wiki/Transfer%20hydrogenation | In chemistry, transfer hydrogenation is a chemical reaction involving the addition of hydrogen to a compound from a source other than molecular hydrogen (H2). It is applied in laboratory and industrial organic synthesis to saturate organic compounds and reduce ketones to alcohols, and imines to amines. It avoids the need for the high-pressure molecular hydrogen used in conventional hydrogenation. Transfer hydrogenation usually occurs at mild temperature and pressure conditions using organic or organometallic catalysts, many of which are chiral, allowing efficient asymmetric synthesis. It uses hydrogen donor compounds such as formic acid, isopropanol or dihydroanthracene, dehydrogenating them to carbon dioxide, acetone, or anthracene respectively. Often, the donor molecules also function as solvents for the reaction. A large scale application of transfer hydrogenation is coal liquefaction using "donor solvents" such as tetralin.
Organometallic catalysts
In the area of organic synthesis, a useful family of hydrogen-transfer catalysts have been developed based on ruthenium and rhodium complexes, often with diamine and phosphine ligands. A representative catalyst precursor is derived from (cymene)ruthenium dichloride dimer and the tosylated diphenylethylenediamine. These catalysts are mainly employed for the reduction of ketones and imines to alcohols and amines, respectively. The hydrogen-donor (transfer agent) is typically isopropanol, which converts to acetone upon donation of hydrogen. Transfer hydrogenations can proceed with high enantioselectivities when the starting material is prochiral:
RR'C=O + Me2CHOH → RR'C*H-OH + Me2C=O
where RR'C*H-OH is a chiral product. A typical catalyst is the ruthenium–TsDPEN complex derived from the components above, where Ts refers to a tosyl group (-SO2C6H4CH3) and R,R refers to the absolute configuration of the two chiral carbon centers. This work was recognized with the 2001 Nobel Prize in Chemistry to Ryōji Noyori.
Another family of hydrogen-transfer agents are those based on aluminium alkoxides, such as aluminium isopropoxide in the MPV reduction; however their activities are relatively low by comparison with the transition metal-based systems.
The catalytic asymmetric hydrogenation of ketones was demonstrated with ruthenium-based complexes of BINAP.
Even though the BINAP-Ru dihalide catalyst could reduce functionalized ketones, the hydrogenation of simple ketones remained unsolved. This challenge was solved with precatalysts of the type RuCl2(diphosphane)(diamine). These catalysts preferentially reduce ketones and aldehydes, leaving olefins and many other substituents unaffected.
Metal-free routes
Prior to the development of catalytic hydrogenation, many methods were developed for the hydrogenation of unsaturated substrates. Many of these methods are only of historical and pedagogical interest. One prominent transfer hydrogenation agent is diimide, (NH)2, also called diazene. It delivers its two hydrogen atoms to the substrate and becomes oxidized to the very stable N2, for example: RCH=CHR' + HN=NH → RCH2-CH2R' + N2
The diimide can be generated from hydrazine or certain other organic precursors.
Two hydrocarbons that can serve as hydrogen donors are cyclohexene and cyclohexadiene. In this case, an alkane is formed from the substrate, along with benzene from the donor. The gain of aromatic stabilization energy when the benzene is formed is the driving force of the reaction. Pd can be used as a catalyst and a temperature of 100 °C is employed. More exotic transfer hydrogenations have been reported, including intramolecular examples.
Many reactions exist with alcohols or amines as the proton donors and alkali metals as electron donors. Of continuing value is the sodium metal-mediated Birch reduction of arenes (another name for aromatic hydrocarbons). Less important presently is the Bouveault–Blanc reduction of esters. The combination of magnesium and methanol is used in alkene reductions, e.g. the synthesis of asenapine.
Organocatalytic transfer hydrogenation
Organocatalytic transfer hydrogenation has been described by the group of List in 2004 in a system with a Hantzsch ester as hydride donor and an amine catalyst:
In this particular reaction the substrate is an α,β-unsaturated carbonyl compound. The proton donor is oxidized to the pyridine form and resembles the biochemically relevant coenzyme NADH. In the catalytic cycle for this reaction the amine and the aldehyde first form an iminium ion, then proton transfer is followed by hydrolysis of the iminium bond regenerating the catalyst. By adopting a chiral imidazolidinone MacMillan organocatalyst an enantioselectivity of 81% ee was obtained:
In a case of stereoconvergence, both the E-isomer and the Z-isomer in this reaction yield the (S)-enantiomer.
Extending the scope of this reaction towards ketones or rather enones requires fine tuning of the catalyst (add a benzyl group and replace the t-butyl group by a furan) and of the Hantzsch ester (add more bulky t-butyl groups):
With another organocatalyst altogether, hydrogenation can also be accomplished for imines. One cascade reaction is catalyzed by a chiral phosphoric acid:
The reaction proceeds via a chiral iminium ion. With traditional metal-based catalysts, hydrogenation of aromatic or heteroaromatic substrates tends to fail.
See also
Meerwein–Ponndorf–Verley reduction
Oppenauer oxidation
Dehydrogenation
Hydrogenation
Hydrogenolysis
Borrowing hydrogen
References
Chemical processes
Industrial processes
Organic redox reactions
Hydrogenation | Transfer hydrogenation | [
"Chemistry"
] | 1,222 | [
"Organic redox reactions",
"Organic reactions",
"Chemical processes",
"nan",
"Hydrogenation",
"Chemical process engineering"
] |
4,857,139 | https://en.wikipedia.org/wiki/Rest%20frame | In special relativity, the rest frame of a particle is the frame of reference (a coordinate system attached to physical markers) in which the particle is at rest.
The rest frame of compound objects (such as a fluid, or a solid made of many vibrating atoms) is taken to be the frame of reference in which the average momentum of the particles which make up the substance is zero (the particles may individually have momentum, but collectively have no net momentum). The rest frame of a container of gas, for example, would be the rest frame of the container itself, in which the gas molecules are not at rest, but are no more likely to be traveling in one direction than another. The rest frame of a river would be the frame of an unpowered boat, in which the mean velocity of the water is zero. This frame is also called the center-of-mass frame, or center-of-momentum frame.
The center-of-momentum frame is notable for being the reference frame in which the total energy (total relativistic energy) of a particle or compound object is also the invariant mass (times the scale factor c2, the speed of light squared). It is also the reference frame in which the object or system has minimum total energy.
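A minimal numerical sketch of this statement: from the four-momenta of the constituents one can compute the total energy, the velocity of the center-of-momentum (rest) frame, and the invariant mass; the particle four-momenta below are invented, and natural units with c = 1 are assumed.

```python
# Sketch: rest (center-of-momentum) frame quantities for a two-particle system.
# Units with c = 1; the particle four-momenta below are made up for illustration.
import math

# Each particle is given as (E, px, py, pz)
particles = [(5.0, 3.0, 0.0, 0.0), (4.0, -1.0, 2.0, 0.0)]

E  = sum(p[0] for p in particles)
px = sum(p[1] for p in particles)
py = sum(p[2] for p in particles)
pz = sum(p[3] for p in particles)

invariant_mass = math.sqrt(E**2 - (px**2 + py**2 + pz**2))
com_velocity = (px / E, py / E, pz / E)   # velocity of the rest frame relative to the lab

print(f"invariant mass = {invariant_mass:.3f}")
print("center-of-momentum velocity =", com_velocity)
```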
In both special relativity and general relativity it is essential to specify the rest frame of any time measurements, as the time that an event occurred is dependent on the rest frame of the observer. For this reason the timings of astronomical events such as supernovae are usually recorded in terms of when the light from the event reached Earth, as the "real time" that the event occurred depends on the rest frame chosen. For example, in the rest frame of a neutrino particle travelling from the Crab Nebula supernova to Earth, the supernova occurred in the 11th Century AD only a short while before the light reached Earth, but in Earth's rest frame the event occurred about 6300 years earlier.
References
See p. 139-140 for discussion of the stress-energy tensor for a perfect fluid such as an ideal gas.
See also
Co-moving frame
Special relativity
Frames of reference | Rest frame | [
"Physics",
"Mathematics"
] | 433 | [
"Frames of reference",
"Classical mechanics",
"Theory of relativity",
"Special relativity",
"Coordinate systems"
] |
4,859,082 | https://en.wikipedia.org/wiki/Rp-process | The rp-process (rapid proton capture process) consists of consecutive proton captures onto seed nuclei to produce heavier elements. It is a nucleosynthesis process and, along with the s-process and the r-process, may be responsible for the generation of many of the heavy elements present in the universe. However, it is notably different from the other processes mentioned in that it occurs on the proton-rich side of stability as opposed to on the neutron-rich side of stability.
The end point of the rp-process (the highest-mass element it can create) is not yet well established, but recent research has indicated that in neutron stars it cannot progress beyond tellurium. The rp-process is inhibited by alpha decay, which puts an upper limit on the end point at 104Te, the lightest observed alpha-decaying nuclide, and the proton drip line in light antimony isotopes. At this point, further proton captures result in prompt proton emission or alpha emission, and thus the proton flux is consumed without yielding heavier elements; this end process is known as the tin–antimony–tellurium cycle.
Conditions
The process has to occur in very high-temperature environments (above 10⁹ kelvins) so that the protons can overcome the large Coulomb barrier for charged-particle reactions. A hydrogen-rich environment is also a prerequisite due to the large proton flux needed. The seed nuclei needed for this process to occur are thought to be formed during breakout reactions from the hot CNO cycle. Typically proton capture in the rp-process will compete with (α,p) reactions, as most environments with a high flux of hydrogen are also rich in helium. The time scale for the rp-process is set by β+ decays at or near the proton drip line, because the weak interaction is notoriously slower than the strong interaction and electromagnetic force at these high temperatures.
Possible sites
Sites suggested for the rp-process are accreting binary systems where one star is a neutron star. In these systems the donor star transfers material onto its compact partner star. The accreted material is usually rich in hydrogen and helium because of its origin from the surface layers of the donor star. Because such compact stars have high gravitational fields, the material falls with a high velocity towards the compact star, usually colliding with other accreted material en route, forming an accretion disk. In the case of accretion onto a neutron star, as this material slowly builds up on the surface, it will attain a temperature on the order of 10⁸ K.
Eventually, it is believed that thermonuclear instabilities arise in this hot atmosphere, allowing the temperature to continue to rise until it leads to a runaway thermonuclear explosion of the hydrogen and helium. During the flash, the temperature quickly rises, becoming high enough for the rp-process to occur. While the initial flash of hydrogen and helium lasts only a second, the rp-process typically takes up to 100 seconds. Therefore, the rp-process is observed as the tail of the resulting X-ray burst.
See also
p-nuclei
References
Nuclear physics
Concepts in stellar astronomy
Nucleosynthesis
-
Proton | Rp-process | [
"Physics",
"Chemistry"
] | 668 | [
"Nuclear fission",
"Concepts in astrophysics",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Concepts in stellar astronomy",
"Nuclear fusion"
] |
4,859,639 | https://en.wikipedia.org/wiki/Milnor%27s%20sphere | In mathematics, specifically differential and algebraic topology, during the mid 1950's John Milnorpg 14 was trying to understand the structure of -connected manifolds of dimension (since -connected -manifolds are homeomorphic to spheres, this is the first non-trivial case after) and found an example of a space which is homotopy equivalent to a sphere, but was not explicitly diffeomorphic. He did this through looking at real vector bundles over a sphere and studied the properties of the associated disk bundle. It turns out, the boundary of this bundle is homotopically equivalent to a sphere , but in certain cases it is not diffeomorphic. This lack of diffeomorphism comes from studying a hypothetical cobordism between this boundary and a sphere, and showing this hypothetical cobordism invalidates certain properties of the Hirzebruch signature theorem.
See also
Exotic sphere
Oriented cobordism
References
Differential topology
Algebraic topology
Topology | Milnor's sphere | [
"Physics",
"Mathematics"
] | 195 | [
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Space",
"Differential topology",
"Geometry",
"Spacetime"
] |
4,860,340 | https://en.wikipedia.org/wiki/Wind%20profiler | A wind profiler is a type of weather observing equipment that uses radar or sound waves (SODAR) to detect the wind speed and direction at various elevations above the ground. Readings are made at each kilometer above sea level, up to the extent of the troposphere (i.e., between 8 and 17 km above mean sea level). Above this level there is inadequate water vapor present to produce a radar "bounce." The data synthesized from wind direction and speed is very useful to meteorological forecasting and timely reporting for flight planning. A twelve-hour history of data is available through NOAA websites.
Principle
In a typical implementation, the radar or sodar can sample along each of five beams: one is aimed vertically to measure vertical velocity, and four are tilted off vertical and oriented orthogonal to one another to measure the horizontal components of the air's motion. A profiler's ability to measure winds is based on the assumption that the turbulent eddies that induce scattering are carried along by the mean wind. The energy scattered by these eddies and received by the profiler is orders of magnitude smaller than the energy transmitted. However, if sufficient samples can be obtained, the amplitude of the energy scattered by these eddies can be clearly identified above the background noise level, and the mean wind speed and direction within the sampled volume can be determined. The radial components measured by the tilted beams are the vector sum of the horizontal motion of the air toward or away from the radar and any vertical motion present in the beam. Using appropriate trigonometry, the three-dimensional meteorological velocity components (u, v, w), wind speed, and wind direction are calculated from the radial velocities, with corrections for vertical motions.
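A minimal sketch of this beam geometry (made-up tilt angle and radial velocities; not an operational retrieval algorithm) is:

```python
import numpy as np

# Five-beam profiler: one vertical beam plus four beams tilted off vertical
# toward E, W, N, S. Each radial velocity is the component of (u, v, w)
# along the corresponding beam.
theta = np.deg2rad(15.0)          # assumed off-vertical tilt angle

# Made-up "measured" radial velocities (m/s, positive away from the radar)
vr_vert = 0.10                    # vertical beam
vr_east, vr_west = 3.0, -2.9      # east- and west-tilted beams
vr_north, vr_south = 1.2, -1.0    # north- and south-tilted beams

w = vr_vert                                            # vertical velocity
# Opposing beams: vr_east = u*sin(theta) + w*cos(theta), vr_west = -u*sin(theta) + w*cos(theta),
# so differencing the pair removes the vertical-motion contribution.
u = (vr_east - vr_west) / (2.0 * np.sin(theta))        # zonal component
v = (vr_north - vr_south) / (2.0 * np.sin(theta))      # meridional component

speed = np.hypot(u, v)
direction = np.degrees(np.arctan2(-u, -v)) % 360.0     # meteorological "from" direction

print(f"u = {u:.2f} m/s, v = {v:.2f} m/s, w = {w:.2f} m/s")
print(f"wind speed = {speed:.2f} m/s, direction = {direction:.0f} deg")
```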
Radar wind profiler
Pulse-Doppler radar wind profilers operate using electromagnetic (EM) signals to remotely sense winds aloft. The radar transmits an electromagnetic pulse along each of the antenna's pointing directions. A UHF profiler includes subsystems to control the radar's transmitter, receiver, signal processing, and Radio Acoustic Sounding System (RASS), if provided, as well as data telemetry and remote control.
The duration of the transmission determines the length of the pulse emitted by the antenna, which in turn corresponds to the volume of air illuminated (in electrical terms) by the radar beam. Small amounts of the transmitted energy are scattered back (referred to as backscattering) toward and received by the radar. Delays of fixed intervals are built into the data processing system so that the radar receives scattered energy from discrete altitudes, referred to as range gates. The Doppler frequency shift of the backscattered energy is determined, and then used to calculate the velocity of the air toward or away from the radar along each beam as a function of altitude. The source of the backscattered energy (radar “targets”) is small-scale turbulent fluctuations that induce irregularities in the radio refractive index of the atmosphere. The radar is most sensitive to scattering by turbulent eddies whose spatial scale is ½ the wavelength of the radar, or approximately 16 centimeters (cm) for a UHF profiler.
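For a sense of the numbers involved, the sketch below converts an assumed Doppler shift into a radial velocity and shows the half-wavelength Bragg scale for an assumed 915 MHz boundary-layer profiler (both the operating frequency and the shift are illustrative values, not from the article):

```python
C = 2.998e8          # speed of light, m/s

f_radar = 915e6      # assumed UHF operating frequency, Hz
wavelength = C / f_radar            # ~0.33 m
bragg_scale = wavelength / 2.0      # eddies of ~half the radar wavelength scatter most strongly

f_doppler = -30.0    # assumed measured Doppler shift, Hz (negative = motion away from the radar)
v_radial = f_doppler * wavelength / 2.0   # radial velocity along the beam, m/s

print(f"radar wavelength = {wavelength*100:.1f} cm")
print(f"Bragg (lambda/2) = {bragg_scale*100:.1f} cm")   # ~16 cm, as quoted above
print(f"radial velocity  = {v_radial:.2f} m/s")
```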
A boundary-layer radar wind profiler can be configured to compute averaged wind profiles for periods ranging from a few minutes to an hour. Boundary-layer radar wind profilers are often configured to sample in more than one mode. For example, in a “low mode,” the pulse of energy transmitted by the profiler may be 60 m in length. The pulse length determines the depth of the column of air being sampled and thus the vertical resolution of the data. In a “high mode,” the pulse length is increased, usually to 100 m or greater. The longer pulse length means that more energy is being transmitted for each sample, which improves the signal-to-noise ratio (SNR) of the data. Using a longer pulse length increases the depth of the sample volume and thus decreases the vertical resolution in the data. The greater energy output of the high mode increases the maximum altitude to which the radar wind profiler can sample, but at the expense of coarser vertical resolution and an increase in the altitude at which the first winds are measured. When radar wind profilers are operated in multiple modes, the data are often combined into a single overlapping data set to simplify postprocessing and data validation procedures.
Radar wind profilers may also have additional uses, for example in a biological context to complement large-scale bird monitoring schemes.
Radar precipitation profiler
A special case of a radar wind profiler is a vertical precipitation profiler. It has a single vertical axis of unistatic or bistatic configuration. It is used to measure precipitation only. It can be used to identify the melting height of precipitation. These types of radars have been used to study the influence of different freezing levels and atmospheric rivers on flooding in lowland mountains of the western US.
Sodar wind profiler
Alternatively, a wind profiler may use sound waves to measure wind speed at various heights above the ground, and the thermodynamic structure of the lower layer of the atmosphere. These sodars can be divided into mono-static systems, which use the same antenna for transmitting and receiving, and bi-static systems, which use separate antennas. The difference between the two antenna systems determines whether atmospheric scattering is by temperature fluctuations (in mono-static systems), or by both temperature and wind velocity fluctuations (in bi-static systems).
Mono-static antenna systems can be divided further into two categories: those using multiple axis, individual antennas and those using a single phased array antenna. The multiple-axis systems generally use three individual antennas aimed in specific directions to steer the acoustic beam. One antenna is generally aimed vertically, and the other two are tilted slightly from the vertical at an orthogonal angle. Each of the individual antennas may use a single transducer focused into a parabolic reflector to form a parabolic loudspeaker, or an array of speaker drivers and horns (transducers) all transmitting in-phase to form a single beam. Both the tilt angle from the vertical and the azimuth angle of each antenna are fixed when the system is set up.
The vertical range of sodars is approximately 0.2 to 2 kilometers (km) and is a function of frequency, power output, atmospheric stability, turbulence, and, most importantly, the noise environment in which a sodar is operated. Operating frequencies range from less than 1000 Hz to over 4000 Hz, with power levels up to several hundred watts. Due to the attenuation characteristics of the atmosphere, high power, lower frequency sodars will generally produce greater height coverage. Some sodars can be operated in different modes to better match vertical resolution and range to the application. This is accomplished through a trade-off between pulse length and maximum altitude.
References
External links
Official NOAA wind profiler search page See real time (and 12-hour history) graphic displays of wind direction and speed from ground level up to 17 km above sea level (at 1 km intervals). Click on any star or dot, then click on "get plot" at left.
Meteorological instrumentation and equipment
Weather radars
Radar meteorology
Atmospheric sounding | Wind profiler | [
"Technology",
"Engineering"
] | 1,455 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
4,860,631 | https://en.wikipedia.org/wiki/TA%20Luft | Germany has an air pollution control regulation titled "Technical Instructions on Air Quality Control" (Technische Anleitung zur Reinhaltung der Luft) and commonly referred to as the TA Luft.
The first version of the TA Luft was established in 1964. It has subsequently been revised in 1974, 1983, 1988 and 2002. Parts of the TA Luft have been adopted by other countries as well.
In 1974, 10 years after the TA Luft was first established, the German government enacted the "Federal Pollution Control Act" (Bundes-Immissionsschutzgesetz). It also has subsequently been amended a number of times, the last of which was in 2002. Although the first version of the TA Luft existed 10 years before the enactment of the "Federal Pollution Control Act", it is often called the "First General Administrative Regulation" pertaining to the "Federal Pollution Control Act".
The German government created the Federal Ministry for Environment, Nature Conservation and Nuclear Safety (Bundesministerium für Umwelt, Naturschutz und Reaktorsicherheit) in June, 1986 and it is now responsible for implementing the TA Luft regulation under the "Federal Air Pollution Control Act".
Overview
The TA Luft is a comprehensive air pollution control regulation that includes:
A discussion of the scope of the TA Luft application, which is to review applications for licenses to construct and operate new industrial facilities (or altered existing facilities) and to determine whether the proposed new or altered facilities will comply with the requirements of the TA Luft and the requirements of other air pollutant emission regulations promulgated under the Federal Pollution Control Act.
Air pollutant emission limits for dust, sulfur dioxide, nitrogen oxides, hydrofluoric acid and other gaseous inorganic fluorine compounds, arsenic and inorganic arsenic compounds, lead and inorganic lead compounds, cadmium and inorganic cadmium compounds, nickel and inorganic nickel compounds, mercury and inorganic mercury compounds, thallium and inorganic thallium compounds, ammonia from farming and livestock breeding operations, inorganic gases and particulates, organic substances and others.
Emission limits may also be set for hazardous, toxic, carcinogenic or mutagenic substances as part of the TA Luft review procedures.
Other limits or requirements related to stack heights (for flue gases or other process vents) and for storing, loading or working with liquid or solid substances.
Various requirements for sampling, measuring, and monitoring emissions.
Listing of the industries subject to the requirements of the TA Luft, such as mining, electric power generation, glass and ceramics, steel, aluminum and other metals, chemical plants, oil refining, plastics, food, and others.
Annex 3 is devoted to guidelines on: how the atmospheric dispersion modeling required during the TA Luft review is to be performed, and the acceptable type of dispersion model to be used. In essence, the modeling must be in accordance with the VDI Guidelines 3782 Parts 1 and 2, 3783 Part 8, 3784 Part 2, and 3945 Part 3.
The full text of the TA Luft is available on the Internet.
AUSTAL2000
AUSTAL2000 is an atmospheric dispersion model for simulating the dispersion of air pollutants in the ambient atmosphere. It was developed by Ingenieurbüro Janicke in Dunum, Germany under contract to the Federal Ministry for Environment, Nature Conservation and Nuclear Safety. Although not named in the TA Luft, it is the reference dispersion model accepted as being in compliance with the requirements of Annex 3 of the TA Luft and the pertinent VDI Guidelines.
It simulates the dispersion of air pollutants by utilizing a random walk process (Lagrangian simulation model) and it has capabilities for building effects, complex terrain, pollutant plume depletion by wet or dry deposition, and first order chemical reactions. It is available for download on the Internet free of cost.
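The following toy sketch (not AUSTAL2000 itself; the wind speed, turbulence intensity, release rate, and grid are made-up values) illustrates the Lagrangian random-walk idea on which such models are based:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Lagrangian dispersion: continuous release at the origin, mean wind along x,
# homogeneous turbulence modelled as independent random displacements (random walk).
N_PARTICLES = 20000
DT = 1.0                         # s
N_STEPS = 300
U_MEAN = np.array([3.0, 0.0])    # m/s, mean wind (x, y)
SIGMA_TURB = 0.8                 # m/s, assumed turbulent velocity scale

pos = np.zeros((N_PARTICLES, 2))
release_step = rng.integers(0, N_STEPS, size=N_PARTICLES)   # staggered release times

for step in range(N_STEPS):
    active = release_step <= step
    turb = rng.normal(0.0, SIGMA_TURB, size=(active.sum(), 2))
    pos[active] += (U_MEAN + turb) * DT

# Crude concentration field: particle counts on a 2D grid (arbitrary units)
H, xedges, yedges = np.histogram2d(pos[:, 0], pos[:, 1],
                                   bins=(60, 40), range=[[0, 1200], [-200, 200]])
print("peak cell count:", H.max(), " total particles binned:", int(H.sum()))
```

Operational models of this family add, among other things, vertically varying wind and turbulence profiles, building and terrain effects, deposition, and chemical transformation, but the particle-following structure is the same.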
Austal2000G is a similar model for simulating the dispersion of odours and it was also developed by Ingenieurbüro Janicke. The development of Austal 2000G was financed by three German states: Niedersachsen, Nordrhein-Westfalen and Baden-Württemberg.
See also
2008/50/EG
Air Quality Modeling Group
Air Resources Laboratory
AP 42 Compilation of Air Pollutant Emission Factors
Bibliography of atmospheric dispersion modeling
List of atmospheric dispersion models
UK Atmospheric Dispersion Modelling Liaison Committee
UK Dispersion Modelling Bureau
References
Further reading
www.air-dispersion.com
Official Web page.
External links
UK Dispersion Modelling Bureau web site
UK ADMLC web site
Air Resources Laboratory (ARL)
Air Quality Modeling Group
Error propagation in air dispersion modeling
Air pollution
Atmospheric dispersion modeling
Environmentalism in Germany
Law of Germany | TA Luft | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 998 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
4,860,678 | https://en.wikipedia.org/wiki/Einstein-aether%20theory | In physics the Einstein-aether theory, also called aetheory, is the name coined in 2004 for a modification of general relativity that has a preferred reference frame and hence violates Lorentz invariance. These generally covariant theories describes a spacetime endowed with both a metric and a unit timelike vector field named the aether. The aether in this theory is "a Lorentz-violating vector field" unrelated to older luminiferous aether theories; the "Einstein" in the theory's name comes from its use of Einstein's general relativity equation.
Relation to other theories of gravity
An Einstein-aether theory is an alternative theory of gravity that adds a vector field to the theory of general relativity. There are also scalar field modifications, including Brans–Dicke theory, all included with Horndeski's theory. Going the other direction, there are theories that add tensor fields, under the name Bimetric gravity or both scalar and vector fields can be added, as in Tensor–vector–scalar gravity.
History
The name "Einstein-aether theory" was coined in 2004 by T. Jacobson and D. Mattingly. This type of theory originated in the 1970s with the work of C.M.Will and K. Nordtvedt Jr. on gravitationally coupled vector field theories.
In the 1980s Maurizio Gasperini added a scalar field, which intuitively corresponded to a universal notion of time, to the metric of general relativity. Such a theory will have a preferred reference frame: that in which the universal time is the actual time.
In 2000, Ted Jacobson and David Mattingly developed a model that allows the consequences of preferred frames to be studied. Their theory contains less information than that of Gasperini, instead of a scalar field giving a universal time it contains only a unit vector field which gives the direction of time. Thus observers who follow the aether at different points will not necessarily age at the same rate in the Jacobson–Mattingly theory. In 2008 Ted Jacobson presented a status report on Einstein-aether theory.
Breaking Lorentz symmetry
The existence of a preferred, dynamical time vector breaks the Lorentz symmetry of the theory, more precisely it breaks the invariance under boosts. This symmetry breaking may lead to a Higgs mechanism for the graviton which would alter long distance physics, perhaps yielding an explanation for recent supernova data which would otherwise be explained by a cosmological constant. The effect of breaking Lorentz invariance on quantum field theory has a long history leading back at least to the work of Markus Fierz and Wolfgang Pauli in 1939. Recently it has regained popularity with, for example, the paper Effective Field Theory for Massive Gravitons and Gravity in Theory Space by Nima Arkani-Hamed, Howard Georgi and Matthew Schwartz. Einstein-aether theories provide a concrete example of a theory with broken Lorentz invariance and so have proven to be a natural setting for such investigations.
Action
The action of the Einstein-aether theory is generally taken to consist of the sum of the Einstein–Hilbert action with a Lagrange multiplier λ that ensures that the time vector is a unit vector and also with all of the covariant terms involving the time vector u but having at most two derivatives.
In particular it is assumed that the action may be written as the integral of a local Lagrangian density
where G_N is Newton's constant and g is a metric with Minkowski signature. The Lagrangian density is
Here R is the Ricci scalar, ∇ is the covariant derivative, and the tensor K is defined by
Here the c_i are dimensionless adjustable parameters of the theory.
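For orientation, a form of the action and of the tensor K commonly quoted in the literature is shown below; this is a representative sketch only, since sign conventions, the metric signature, and the placement of the Lagrange-multiplier term vary between authors.

```latex
\[
S = \frac{1}{16\pi G_N}\int d^4x\,\sqrt{-g}\,
    \left[\, R \;-\; K^{ab}{}_{mn}\,\nabla_a u^m\,\nabla_b u^n
          \;+\; \lambda\left(g_{ab}\,u^a u^b + 1\right) \right],
\qquad
K^{ab}{}_{mn} \;=\; c_1\,g^{ab}g_{mn} \;+\; c_2\,\delta^a_m\,\delta^b_n
              \;+\; c_3\,\delta^a_n\,\delta^b_m \;-\; c_4\,u^a u^b\,g_{mn}.
\]
```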
Solutions
Stars
Several spherically symmetric solutions to ae-theory have been found. Most recently Christopher Eling and Ted Jacobson have found solutions resembling stars and solutions resembling black holes.
In particular, they demonstrated that there are no spherically symmetric solutions in which stars are constructed entirely from the aether. Solutions without additional matter always have either naked singularities or else two asymptotic regions of spacetime, resembling a wormhole but with no horizon. They have argued that static stars must have static aether solutions, which means that the aether points in the direction of a timelike Killing vector.
Black holes and potential problems
However this is difficult to reconcile with static black holes, as at the event horizon there are no timelike Killing vectors available and so the black hole solutions cannot have static aethers. Thus when a star collapses to form a black hole, somehow the aether must eventually become static even very far away from the collapse.
In addition, the stress tensor does not obviously satisfy the Raychaudhuri equation; one needs to have recourse to the equations of motion. This is in contrast with theories without an aether, where this property holds independently of the equations of motion.
Experimental constraints
In a 2005 paper, Nima Arkani-Hamed, Hsin-Chia Cheng, Markus Luty and Jesse Thaler have examined experimental consequences of the breaking of boost symmetries inherent in aether theories. They have found that the resulting Goldstone boson leads to, among other things, a new kind of Cherenkov radiation.
In addition they have argued that spin sources will interact via a new inverse square law force with a very unusual angular dependence. They suggest that the discovery of such a force would be very strong evidence for an aether theory, although not necessarily that of Jacobson, et al.
See also
Aether theories
Modern searches for Lorentz violation
References
Aether theories
Theories of gravity | Einstein-aether theory | [
"Physics"
] | 1,159 | [
"Theoretical physics",
"Theories of gravity"
] |
4,861,222 | https://en.wikipedia.org/wiki/Spectral%20asymmetry | In mathematics and physics, the spectral asymmetry is the asymmetry in the distribution of the spectrum of eigenvalues of an operator. In mathematics, the spectral asymmetry arises in the study of elliptic operators on compact manifolds, and is given a deep meaning by the Atiyah-Singer index theorem. In physics, it has numerous applications, typically resulting in a fractional charge due to the asymmetry of the spectrum of a Dirac operator. For example, the vacuum expectation value of the baryon number is given by the spectral asymmetry of the Hamiltonian operator. The spectral asymmetry of the confined quark fields is an important property of the chiral bag model. For fermions, it is known as the Witten index, and can be understood as describing the Casimir effect for fermions.
Definition
Given an operator with eigenvalues ω_n, an equal number of which are positive and negative, the spectral asymmetry may be defined as the sum

η = (1/2) lim_{t→0} Σ_n sgn(ω_n) exp(-t|ω_n|),

where sgn is the sign function. Other regulators, such as the zeta function regulator, may be used.
The need for both a positive and negative spectrum in the definition is why the spectral asymmetry usually occurs in the study of Dirac operators.
Example
As an example, consider an operator with a spectrum

ω_n = n + θ,

where n is an integer, ranging over all positive and negative values, and θ is a real parameter. One may show in a straightforward manner that in this case η, regarded as a function of θ, obeys η(θ + N) = η(θ) for any integer N, and that for θ = 1/2 we have η = 0. The graph of η(θ) is therefore a periodic sawtooth curve.
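A quick numerical check of this behaviour (an illustrative sketch; the regulator parameter t and the truncation are arbitrary choices, and the factor of 1/2 follows the normalization used above) can be made by truncating the regulated sum:

```python
import numpy as np

def eta(theta, t=0.01, n_max=5000):
    """Regulated spectral asymmetry (1/2) * sum_n sgn(n+theta) * exp(-t*|n+theta|)
    for the spectrum w_n = n + theta, with n ranging over all integers (truncated)."""
    n = np.arange(-n_max, n_max + 1)
    w = n + theta
    return 0.5 * np.sum(np.sign(w) * np.exp(-t * np.abs(w)))

for theta in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"theta = {theta:4.2f}   eta ~ {eta(theta): .4f}   (1 - 2*theta)/2 = {(1 - 2*theta)/2: .4f}")
# The values trace out a periodic sawtooth in theta, vanishing at theta = 1/2.
```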
Discussion
Related to the spectral asymmetry is the vacuum expectation value of the energy associated with the operator, the Casimir energy, which is given by

E = (1/2) Σ_n |ω_n|.
This sum is formally divergent, and the divergences must be accounted for and removed using standard regularization techniques.
References
Spectral theory
Asymmetry | Spectral asymmetry | [
"Physics"
] | 372 | [
"Symmetry",
"Asymmetry"
] |
4,861,681 | https://en.wikipedia.org/wiki/Binomial%20number | In mathematics, specifically in number theory, a binomial number is an integer which can be obtained by evaluating a homogeneous polynomial containing two terms. It is a generalization of a Cunningham number.
Definition
A binomial number is an integer obtained by evaluating a homogeneous polynomial containing two terms, also called a binomial. The form of this binomial is a^n ± b^n, with a > b > 0 and n > 1. However, since a^n - b^n is always divisible by a - b, when studying the numbers generated from the version with the negative sign, they are usually divided by a - b first. Binomial numbers formed this way form Lucas sequences. Specifically:

(a^n - b^n)/(a - b) = U_n(a + b, ab)

and

a^n + b^n = V_n(a + b, ab).
Binomial numbers are a generalization of the Cunningham numbers, and it will be seen that the Cunningham numbers are binomial numbers where b = 1. Other subsets of the binomial numbers are the Mersenne numbers and the repunits.
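As an illustrative snippet (not from the article; function and variable names are ad hoc), the following computes a few binomial numbers, checks the divisibility property, and shows the Cunningham, Mersenne, and repunit special cases mentioned above:

```python
def binomial_numbers(a, b, n):
    """Return (a^n + b^n, (a^n - b^n) // (a - b)) for integers a > b >= 1, n > 1."""
    plus = a**n + b**n
    minus = a**n - b**n
    assert minus % (a - b) == 0          # a - b always divides a^n - b^n
    return plus, minus // (a - b)

# A few examples
print(binomial_numbers(3, 2, 5))         # (275, 211)
print(binomial_numbers(10, 1, 6))        # Cunningham-style b = 1: (10^6 + 1, repunit 111111)
print(2**7 - 1, "is the Mersenne number M7")
```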
Factorization
The main reason for studying these numbers is to obtain their factorizations. Aside from algebraic factors, which are obtained by factoring the underlying polynomial (binomial) that was used to define the number, such as difference of two squares and sum of two cubes, there are other prime factors (called primitive prime factors, because for a given exponent they do not divide the binomial numbers with smaller exponents, except for a small number of exceptions as stated in Zsigmondy's theorem) which occur seemingly at random, and it is these which the number theorist is looking for.
Some binomial numbers' underlying binomials have Aurifeuillian factorizations, which can assist in finding prime factors. Cyclotomic polynomials are also helpful in finding factorizations.
The amount of work required in searching for a factor is considerably reduced by applying Legendre's theorem. This theorem states that all factors of a binomial number are of the form if is even or if it is odd.
Observation
Some people write "binomial number" when they mean binomial coefficient, but this usage is not standard and is deprecated.
See also
Cunningham project
Notes
References
External links
Binomial Number at MathWorld
Number theory | Binomial number | [
"Mathematics"
] | 421 | [
"Discrete mathematics",
"Number theory"
] |
4,864,009 | https://en.wikipedia.org/wiki/Deformable%20mirror | Deformable mirrors (DM) are mirrors whose surface can be deformed, in order to achieve wavefront control and correction of optical aberrations. Deformable mirrors are used in combination with wavefront sensors and real-time control systems in adaptive optics. In 2006 they found a new use in femtosecond pulse shaping.
The shape of a DM can be controlled with a speed that is appropriate for compensation of dynamic aberrations present in the optical system. In practice the DM shape should be changed much faster than the process to be corrected, as the correction process, even for a static aberration, may take several iterations.
A DM usually has many degrees of freedom. Typically, these degrees of freedom are associated with the mechanical actuators and it can be roughly taken that one actuator corresponds to one degree of freedom.
Deformable mirror parameters
Number of actuators determines the number of degrees of freedom (wavefront inflections) the mirror can correct. It is very common to compare an arbitrary DM to an ideal device that can perfectly reproduce wavefront modes in the form of Zernike polynomials. For predefined statistics of aberrations a deformable mirror with M actuators can be equivalent to an ideal Zernike corrector with N (usually N < M) degrees of freedom. For correction of the atmospheric turbulence, elimination of low-order Zernike terms usually results in significant improvement of the image quality, while further correction of the higher-order terms introduces less significant improvements. For strong and rapid wavefront error fluctuations such as shocks and wake turbulence typically encountered in high-speed aerodynamic flowfields, the number of actuators, actuator pitch and stroke determine the maximum wavefront gradients that can be compensated for.
Actuator pitch is the distance between actuator centers. Deformable mirrors with large actuator pitch and large number of actuators are bulky and expensive.
Actuator stroke is the maximum possible actuator displacement, typically in positive or negative excursions from some central null position. Stroke typically ranges from ±1 to ±30 micrometres. Free actuator stroke limits the maximum amplitude of the corrected wavefront, while the inter-actuator stroke limits the maximum amplitude and gradients of correctable higher-order aberrations.
Influence function is the characteristic shape corresponding to the mirror response to the action of a single actuator. Different types of deformable mirrors have different influence functions, moreover the influence functions can be different for different actuators of the same mirror. Influence function that covers the whole mirror surface is called a "modal" function, while localized response is called "zonal".
Actuator coupling shows how much the movement of one actuator will displace its neighbors. All "modal" mirrors have large cross-coupling, which in fact is good as it secures the high quality of correction of smooth low-order optical aberrations that usually have the highest statistical weight.
Response time shows how quickly the mirror will react to the control signal. It can vary from microseconds (MEMS and magnetic mirrors) to tens of seconds for thermally controlled DMs.
Hysteresis and creep are nonlinear actuation effects that decrease the precision of the response of the deformable mirror. For different concepts, the hysteresis can vary from zero (electrostatically-actuated mirrors) to tens of percent for mirrors with piezoelectric actuators. Hysteresis is a residual positional error from previous actuator position commands, and limits the mirror ability to work in a feedforward mode, outside of a feedback loop.
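Illustrating the influence function and actuator coupling parameters described above, the following toy model (not tied to any particular device; the pitch, coupling, grid size, and commands are all made-up values) represents a continuous-faceplate mirror surface as a sum of per-actuator commands weighted by Gaussian influence functions, with the Gaussian width chosen to give a specified response at the neighbouring actuator:

```python
import numpy as np

# Toy deformable-mirror model: surface = sum of actuator strokes times
# Gaussian influence functions. Pitch, coupling, and commands are made up.
N_ACT = 8                 # actuators along one axis (8x8 grid)
PITCH = 1.0               # actuator spacing, arbitrary units
COUPLING = 0.15           # desired response at the neighbouring actuator (15 %)
sigma = PITCH / np.sqrt(-2.0 * np.log(COUPLING))   # Gaussian width giving that coupling

xa, ya = np.meshgrid(np.arange(N_ACT) * PITCH, np.arange(N_ACT) * PITCH)
commands = np.random.default_rng(1).uniform(-1.0, 1.0, size=(N_ACT, N_ACT))  # microns

x, y = np.meshgrid(np.linspace(0, (N_ACT - 1) * PITCH, 200),
                   np.linspace(0, (N_ACT - 1) * PITCH, 200))

surface = np.zeros_like(x)
for cmd, x0, y0 in zip(commands.ravel(), xa.ravel(), ya.ravel()):
    surface += cmd * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

print(f"peak-to-valley surface deformation: {surface.max() - surface.min():.2f} um")
```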
Deformable mirror concepts
Segmented concept mirrors are formed by independent flat mirror segments. Each segment can move a small distance back and forth to approximate the average value of the wavefront over the patch area. Advantageously, these mirrors have little or zero cross-talk between actuators. Stepwise approximation works poorly for smooth continuous wavefronts. Sharp edges of the segments and gaps between the segments contribute to light scattering, limiting the applications to those not sensitive to scattered light. Considerable improvement of the performance of the segmented mirror can be achieved by the introduction of three degrees of freedom per segment: piston, tip and tilt. These mirrors require three times as many actuators compared to piston-segmented mirrors. This concept was used for fabrication of large segmented primary mirrors for the Keck telescopes, the James Webb Space Telescope, and the future E-ELT. Numerous methods exist to accurately co-phase the segments and reduce the diffraction patterns introduced by the segment shapes and gaps. Future large space-based telescopes, such as the NASA Large UV Optical Infrared Surveyor will also possess a segmented primary mirror. The development of robust methods to increase the contrast is key for the direct imaging and characterization of exoplanets.
Continuous faceplate concept mirrors with discrete actuators are formed by the front surface of a thin deformable membrane. The shape of the plate is controlled by a number of discrete actuators that are fixed to its back side. The shape of the mirror depends on the combination of forces applied to the faceplate, boundary conditions (the way the plate is fixed to the mirror) and the geometry and the material of the plate. These mirrors allow smooth wavefront control with very large - up to several thousands - degrees of freedom.
Magnetic concept mirrors consist of a thin flexible continuous membrane actuated by voice coils and magnets. This technology allows great design flexibility to achieve very different performances. Depending on the design choices made, they can achieve unrivaled stroke (up to a hundred microns of deformation) or very high speed (sub-millisecond response times). As the membrane is a single sheet of material, very high optical quality is also achievable. This technology can exhibit good stability and keep its shape almost unchanged for weeks. The actuator count can range from several tens of actuators to several thousand actuators.
MEMS concept mirrors are fabricated using bulk and surface micromachining technologies. They consist of a thin reflective membrane controlled by a multitude of actuators. MEMS mirrors could break the high price threshold of conventional adaptive optics. They enable a higher actuator count at a more cost-effective price allowing for accurate wave-front correction. MEMS mirrors offer fast response times from the actuators with limited hysteresis. An additional benefit is that micromachining technologies allow for the benefit of economies of scale to create cheaper and lighter deformable mirrors with a greater number of actuators.
Membrane concept mirrors are formed by a thin conductive and reflective membrane stretched over a solid flat frame. The membrane can be deformed electrostatically by applying control voltages to electrostatic electrode actuators that can be positioned under or over the membrane. If there are any electrodes positioned over the membrane, they are transparent. It is possible to operate the mirror with only one group of electrodes positioned under the mirror. In this case, a bias voltage is applied to all electrodes, to make the membrane initially spherical. The membrane can move back and forth with respect to the reference sphere.
Bimorph concept mirrors are formed by two or more layers of different materials. One or more of (active) layers are fabricated from a piezoelectric or electrostrictive material. Electrode structure is patterned on the active layer to facilitate local response. The mirror is deformed when a voltage is applied to one or more of its electrodes, causing them to extend laterally, which results in local mirror curvature. Bimorph mirrors are rarely made with more than 100 electrodes.
Ferrofluid concept mirrors are liquid deformable mirrors made with a suspension of small (about 10 nm in diameter) ferromagnetic nanoparticles dispersed in a liquid carrier. In the presence of an external magnetic field, the ferromagnetic particles align with the field, the liquid becomes magnetized and its surface acquires a shape governed by the equilibrium between the magnetic, gravitational and surface tension forces. Using proper magnetic field geometries, any desired shape can be produced at the surface of the ferrofluid. This new concept offers a potential alternative for low-cost, high stroke and large number of actuators deformable mirrors.
See also
References
AO Tutorial: WF correctors
Mirrors
Microtechnology | Deformable mirror | [
"Materials_science",
"Engineering"
] | 1,748 | [
"Materials science",
"Microtechnology"
] |
27,147,535 | https://en.wikipedia.org/wiki/Transport%20phenomena | In engineering, physics, and chemistry, the study of transport phenomena concerns the exchange of mass, energy, charge, momentum and angular momentum between observed and studied systems. While it draws from fields as diverse as continuum mechanics and thermodynamics, it places a heavy emphasis on the commonalities between the topics covered. Mass, momentum, and heat transport all share a very similar mathematical framework, and the parallels between them are exploited in the study of transport phenomena to draw deep mathematical connections that often provide very useful tools in the analysis of one field that are directly derived from the others.
The fundamental analysis in all three subfields of mass, heat, and momentum transfer are often grounded in the simple principle that the total sum of the quantities being studied must be conserved by the system and its environment. Thus, the different phenomena that lead to transport are each considered individually with the knowledge that the sum of their contributions must equal zero. This principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume.
Transport phenomena are ubiquitous throughout the engineering disciplines. Some of the most common examples of transport analysis in engineering are seen in the fields of process, chemical, biological, and mechanical engineering, but the subject is a fundamental component of the curriculum in all disciplines involved in any way with fluid mechanics, heat transfer, and mass transfer. It is now considered to be a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism.
Transport phenomena encompass all agents of physical change in the universe. Moreover, they are considered to be fundamental building blocks which developed the universe, and which are responsible for the success of all life on Earth. However, the scope here is limited to the relationship of transport phenomena to artificial engineered systems.
Overview
In physics, transport phenomena are all irreversible processes of statistical nature stemming from the random continuous motion of molecules, mostly observed in fluids. Every aspect of transport phenomena is grounded in two primary concepts: the conservation laws and the constitutive equations. The conservation laws, which in the context of transport phenomena are formulated as continuity equations, describe how the quantity being studied must be conserved. The constitutive equations describe how the quantity in question responds to various stimuli via transport. Prominent examples include Fourier's law of heat conduction and the Navier–Stokes equations, which describe, respectively, the response of heat flux to temperature gradients and the relationship between fluid flux and the forces applied to the fluid. These equations also demonstrate the deep connection between transport phenomena and thermodynamics, a connection that explains why transport phenomena are irreversible. Almost all of these physical phenomena ultimately involve systems seeking their lowest energy state in keeping with the principle of minimum energy. As they approach this state, they tend to achieve true thermodynamic equilibrium, at which point there are no longer any driving forces in the system and transport ceases. The various aspects of such equilibrium are directly connected to a specific transport: heat transfer is the system's attempt to achieve thermal equilibrium with its environment, just as mass and momentum transport move the system towards chemical and mechanical equilibrium.
Examples of transport processes include heat conduction (energy transfer), fluid flow (momentum transfer), molecular diffusion (mass transfer), radiation and electric charge transfer in semiconductors.
Transport phenomena have wide application. For example, in solid state physics, the motion and interaction of electrons, holes and phonons are studied under "transport phenomena". Another example is in biomedical engineering, where some transport phenomena of interest are thermoregulation, perfusion, and microfluidics. In chemical engineering, transport phenomena are studied in reactor design, analysis of molecular or diffusive transport mechanisms, and metallurgy.
The transport of mass, energy, and momentum can be affected by the presence of external sources:
An odor dissipates more slowly (and may intensify) when the source of the odor remains present.
The rate of cooling of a solid that is conducting heat depends on whether a heat source is applied.
The gravitational force acting on a rain drop counteracts the resistance or drag imparted by the surrounding air.
Commonalities among phenomena
An important principle in the study of transport phenomena is analogy between phenomena.
Diffusion
There are some notable similarities in equations for momentum, energy, and mass transfer which can all be transported by diffusion, as illustrated by the following examples:
Mass: the spreading and dissipation of odors in air is an example of mass diffusion.
Energy: the conduction of heat in a solid material is an example of heat diffusion.
Momentum: the drag experienced by a rain drop as it falls in the atmosphere is an example of momentum diffusion (the rain drop loses momentum to the surrounding air through viscous stresses and decelerates).
The molecular transfer equations of Newton's law for fluid momentum, Fourier's law for heat, and Fick's law for mass are very similar. One can convert from one transport coefficient to another in order to compare all three different transport phenomena.
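Written side by side in diffusivity form (a sketch using the sign conventions adopted in the sections below, with ν = μ/ρ the momentum diffusivity, α = k/(ρc_p) the thermal diffusivity, and D_AB the binary mass diffusivity, all with units of m²/s), the three molecular flux laws share the same mathematical shape:

```latex
\[
\tau_{zx} = -\nu\,\frac{\partial(\rho v_x)}{\partial z},\qquad
q_z = -\alpha\,\frac{\partial(\rho c_p T)}{\partial z},\qquad
J_{Az} = -D_{AB}\,\frac{\partial C_A}{\partial z}.
\]
```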
A great deal of effort has been devoted in the literature to developing analogies among these three transport processes for turbulent transfer so as to allow prediction of one from any of the others. The Reynolds analogy assumes that the turbulent diffusivities are all equal and that the molecular diffusivities of momentum (μ/ρ) and mass (DAB) are negligible compared to the turbulent diffusivities. When liquids are present and/or drag is present, the analogy is not valid. Other analogies, such as von Karman's and Prandtl's, usually result in poor relations.
The most successful and most widely used analogy is the Chilton and Colburn J-factor analogy. This analogy is based on experimental data for gases and liquids in both the laminar and turbulent regimes. Although it is based on experimental data, it can be shown to satisfy the exact solution derived from laminar flow over a flat plate. All of this information is used to predict transfer of mass.
Onsager reciprocal relations
In fluid systems described in terms of temperature, matter density, and pressure, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions (a "reciprocal relation"). What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow. The heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal.
This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics. The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once.
Momentum transfer
In momentum transfer, the fluid is treated as a continuous distribution of matter. The study of momentum transfer, or fluid mechanics can be divided into two branches: fluid statics (fluids at rest), and fluid dynamics (fluids in motion).
When a fluid is flowing in the x-direction parallel to a solid surface, the fluid has x-directed momentum, and its concentration is ρυ_x. By random diffusion of molecules there is an exchange of molecules in the z-direction. Hence the x-directed momentum has been transferred in the z-direction from the faster- to the slower-moving layer.
The equation for momentum transfer is Newton's law of viscosity, written as follows:

τ_zx = -ν ∂(ρυ_x)/∂z

where τ_zx is the flux of x-directed momentum in the z-direction, ν is μ/ρ, the momentum diffusivity, z is the distance of transport or diffusion, ρ is the density, and μ is the dynamic viscosity. Newton's law of viscosity is the simplest relationship between the flux of momentum and the velocity gradient. It may be useful to note that this is an unconventional use of the symbol τ_zx; the indices are reversed as compared with standard usage in solid mechanics, and the sign is reversed.
Mass transfer
When a system contains two or more components whose concentration vary from point to point, there is a natural tendency for mass to be transferred, minimizing any concentration difference within the system. Mass transfer in a system is governed by Fick's first law: 'Diffusion flux from higher concentration to lower concentration is proportional to the gradient of the concentration of the substance and the diffusivity of the substance in the medium.' Mass transfer can take place due to different driving forces. Some of them are:
Mass can be transferred by the action of a pressure gradient (pressure diffusion)
Forced diffusion occurs because of the action of some external force
Diffusion can be caused by temperature gradients (thermal diffusion)
Diffusion can be caused by differences in chemical potential
This can be compared to Fick's law of diffusion, for a species A in a binary mixture consisting of A and B:

J_A = -D_AB ∂C_A/∂z

where D_AB is the diffusivity constant.
Heat transfer
Many important engineered systems involve heat transfer. Some examples are the heating and cooling of process streams, phase changes, distillation, etc. The basic principle is Fourier's law, which is expressed as follows for a static system:

q = -k dT/dx

That is, the net flux of heat through a system equals the conductivity times the rate of change of temperature with respect to position.
For convective transport involving turbulent flow, complex geometries, or difficult boundary conditions, the heat transfer may be represented by a heat transfer coefficient.
Q = h A ΔT

where A is the surface area, ΔT is the temperature driving force, Q is the heat flow per unit time, and h is the heat transfer coefficient.
Within heat transfer, two principal types of convection can occur:
Forced convection can occur in both laminar and turbulent flow. In the situation of laminar flow in circular tubes, several dimensionless numbers are used, such as the Nusselt number, Reynolds number, and Prandtl number, and the commonly used correlations express the Nusselt number as a function of the others.
Natural or free convection is a function of Grashof and Prandtl numbers. The complexities of free convection heat transfer make it necessary to mainly use empirical relations from experimental data.
Heat transfer is analyzed in packed beds, nuclear reactors and heat exchangers.
Heat and mass transfer analogy
The heat and mass analogy allows solutions for mass transfer problems to be obtained from known solutions to heat transfer problems. It arises from the similarity of the non-dimensional governing equations for heat and mass transfer.
Derivation
The non-dimensional energy equation for fluid flow in a boundary layer can simplify to the following, when heating from viscous dissipation and heat generation can be neglected:
u* ∂T*/∂x* + v* ∂T*/∂y* = (1/(Re Pr)) ∂²T*/∂y*²

where u* and v* are the velocities in the x and y directions respectively, normalized by the free stream velocity, x* and y* are the x and y coordinates non-dimensionalized by a relevant length scale, Re is the Reynolds number, Pr is the Prandtl number, and T* is the non-dimensional temperature, which is defined by the local, minimum, and maximum temperatures:

T* = (T - T_min)/(T_max - T_min)
The non-dimensional species transport equation for fluid flow in a boundary layer can be given as the following, assuming no bulk species generation:
u* ∂C*/∂x* + v* ∂C*/∂y* = (1/(Re Sc)) ∂²C*/∂y*²

where C* is the non-dimensional concentration, and Sc is the Schmidt number.
Transport of heat is driven by temperature differences, while transport of species is due to concentration differences. They differ in the relative diffusion of their transport compared to the diffusion of momentum. For heat, the comparison is between viscous diffusivity (ν) and thermal diffusivity (α), given by the Prandtl number. Meanwhile, for mass transfer, the comparison is between viscous diffusivity (ν) and mass diffusivity (D_AB), given by the Schmidt number.
In some cases direct analytic solutions can be found from these equations for the Nusselt and Sherwood numbers. In cases where experimental results are used, one can assume these equations underlie the observed transport.
At an interface, the boundary conditions for both equations are also similar. For heat transfer at an interface, the no-slip condition allows us to equate conduction with convection, thus equating Fourier's law and Newton's law of cooling:
q″ = -k ∂T/∂y |_s = h (T_s - T_∞)

where q″ is the heat flux, k is the thermal conductivity, h is the heat transfer coefficient, and the subscripts s and ∞ denote the surface and bulk values respectively.
For mass transfer at an interface, we can equate Fick's law with Newton's law for convection, yielding:
j_a = -D_ab ∂C_a/∂y |_s = h_m (C_a,s - C_a,∞)

where j_a is the mass flux [kg/(s·m²)], D_ab is the diffusivity of species a in fluid b, and h_m is the mass transfer coefficient. As we can see, T and C_a are analogous, k and D_ab are analogous, while h and h_m are analogous.
Implementing the Analogy
Heat-Mass Analogy:
Because the Nu and Sh equations are derived from these analogous governing equations, one can directly swap the Nu and Sh and the Pr and Sc numbers to convert these equations between mass and heat.
In many situations, such as flow over a flat plate, the Nu and Sh numbers are functions of the Pr and Sc numbers raised to some exponent n. Therefore, one can directly calculate these numbers from one another using:

Nu / Sh = (Pr / Sc)^n

where n = 1/3 can be used in most cases, a value which comes from the analytical solution for the Nusselt number for laminar flow over a flat plate. For best accuracy, n should be adjusted where correlations have a different exponent.
We can take this further by substituting into this equation the definitions of the heat transfer coefficient, mass transfer coefficient, and Lewis number, yielding:

h / h_m = ρ c_p Le^(1 - n)
For fully developed turbulent flow, with n=1/3, this becomes the Chilton–Colburn J-factor analogy. Said analogy also relates viscous forces and heat transfer, like the Reynolds analogy.
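As a sketch of how this conversion is used in practice (the property values for air and water vapour near room temperature and the heat transfer coefficient are illustrative inputs, not from the article), one can estimate a mass transfer coefficient from a known heat transfer coefficient:

```python
# Estimate a mass transfer coefficient from a known heat transfer coefficient
# using the heat-mass analogy: h / h_m = rho * c_p * Le**(1 - n).
# Property values (air / water vapour near 25 C) and h are illustrative.

h = 25.0            # W/(m^2*K), assumed convective heat transfer coefficient
rho = 1.18          # kg/m^3, air density
c_p = 1007.0        # J/(kg*K), air specific heat
alpha = 2.2e-5      # m^2/s, thermal diffusivity of air
D_ab = 2.5e-5       # m^2/s, diffusivity of water vapour in air
n = 1.0 / 3.0       # exponent from the laminar flat-plate solution

Le = alpha / D_ab                        # Lewis number
h_m = h / (rho * c_p * Le**(1.0 - n))    # m/s, mass transfer coefficient

print(f"Le = {Le:.2f},  h_m = {h_m*1000:.1f} mm/s")
```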
Limitations
The analogy between heat transfer and mass transfer is strictly limited to binary diffusion in dilute (ideal) solutions for which the mass transfer rates are low enough that mass transfer has no effect on the velocity field. The concentration of the diffusing species must be low enough that the chemical potential gradient is accurately represented by the concentration gradient (thus, the analogy has limited application to concentrated liquid solutions). When the rate of mass transfer is high or the concentration of the diffusing species is not low, corrections to the low-rate heat transfer coefficient can sometimes help. Further, in multicomponent mixtures, the transport of one species is affected by the chemical potential gradients of other species.
The heat and mass analogy may also break down in cases where the governing equations differ substantially. For instance, situations with substantial contributions from generation terms in the flow, such as bulk heat generation or bulk chemical reactions, may cause solutions to diverge.
Applications of the Heat-Mass Analogy
The analogy is useful for both using heat and mass transport to predict one another, or for understanding systems which experience simultaneous heat and mass transfer. For example, predicting heat transfer coefficients around turbine blades is challenging and is often done through measuring evaporating of a volatile compound and using the analogy. Many systems also experience simultaneous mass and heat transfer, and particularly common examples occur in processes with phase change, as the enthalpy of phase change often substantially influences heat transfer. Such examples include: evaporation at a water surface, transport of vapor in the air gap above a membrane distillation desalination membrane, and HVAC dehumidification equipment that combine heat transfer and selective membranes.
Applications
Pollution
The study of transport processes is relevant for understanding the release and distribution of pollutants into the environment. In particular, accurate modeling can inform mitigation strategies. Examples include the control of surface water pollution from urban runoff, and policies intended to reduce the copper content of vehicle brake pads in the U.S.
See also
Constitutive equation
Continuity equation
Wave propagation
Pulse
Action potential
Bioheat transfer
References
External links
Transport Phenomena Archive in the Teaching Archives of the Materials Digital Library Pathway
Chemical engineering | Transport phenomena | [
"Physics",
"Chemistry",
"Engineering"
] | 3,280 | [
"Transport phenomena",
"Chemical engineering",
"Physical phenomena",
"nan"
] |