id int64 | url string | text string | source string | categories list | token_count int64 | subcategories list |
|---|---|---|---|---|---|---|
71,667,849 | https://en.wikipedia.org/wiki/Dirac%E2%80%93K%C3%A4hler%20equation | In theoretical physics, the Dirac–Kähler equation, also known as the Ivanenko–Landau–Kähler equation, is the geometric analogue of the Dirac equation that can be defined on any pseudo-Riemannian manifold using the Laplace–de Rham operator. In four-dimensional flat spacetime, it is equivalent to four copies of the Dirac equation that transform into each other under Lorentz transformations, although this is no longer true in curved spacetime. The geometric structure gives the equation a natural discretization that is equivalent to the staggered fermion formalism in lattice field theory, making Dirac–Kähler fermions the formal continuum limit of staggered fermions. The equation was discovered by Dmitri Ivanenko and Lev Landau in 1928 and later rediscovered by Erich Kähler in 1962.
Mathematical overview
In four-dimensional Euclidean spacetime, a generic field of differential forms
is written as a linear combination of sixteen basis forms, indexed by the sixteen ordered combinations of indices, with each index running from one to four. The coefficients in this expansion are antisymmetric tensor fields, while the basis elements are the corresponding differential forms.
Using the Hodge star operator, the exterior derivative is related to the codifferential, and together they form the Laplace–de Rham operator, which can be viewed as a square root of the Laplacian operator since its square reproduces the Laplacian. The Dirac–Kähler equation is motivated by noting that this is also the defining property of the Dirac operator, yielding the equation shown below.
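In standard notation, the elided formulas take the following form (a reconstruction from the surrounding definitions; sign conventions for the codifferential vary between references), with d the exterior derivative, δ the codifferential, m the mass, and Φ the inhomogeneous differential form:

```latex
(\mathrm{d} - \delta)^2 = -(\mathrm{d}\delta + \delta\mathrm{d}),
\qquad
(\mathrm{d} - \delta + m)\,\Phi = 0 .
```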
This equation is closely related to the usual Dirac equation, a connection which emerges from the close relation between the exterior algebra of differential forms and the Clifford algebra of which Dirac spinors are irreducible representations. For the basis elements to satisfy the Clifford algebra anticommutation relations, a new Clifford product acting on the basis elements must be introduced.
Using this product, the action of the Laplace–de Rham operator on differential form basis elements can be written in terms of the Clifford product.
To recover the Dirac equation, a change of basis must be performed, where the new basis can be packaged into a matrix defined using the Dirac matrices.
The matrix is designed so that the Clifford algebra decomposes into four irreducible copies of the Dirac algebra; in this basis the Clifford product only mixes the elements within each column. Writing the differential form in this basis
transforms the Dirac–Kähler equation into four copies of the Dirac equation, one for each column index.
The minimally coupled Dirac–Kähler equation is found by replacing the derivative with the covariant derivative.
As before, this is also equivalent to four copies of the Dirac equation. In the abelian case the gauge group is U(1), while in the non-abelian case there are additional color indices. The Dirac–Kähler fermion also picks up color indices, with it formally corresponding to cross-sections of the Whitney product of the Atiyah–Kähler bundle of differential forms with the vector bundle of local color spaces.
Discretization
There is a natural way to discretize the Dirac–Kähler equation using the correspondence between exterior algebra and simplicial complexes. In four-dimensional space, a lattice can be considered as a simplicial complex whose simplexes are constructed from a basis of h-dimensional hypercubes with a base point and an orientation. An h-chain is then a formal linear combination of these basis simplexes.
The h-chains admit a boundary operator, defined as the (h-1)-simplex forming the boundary of the h-chain, and a coboundary operator can be similarly defined to yield an (h+1)-chain. The dual space of chains consists of h-cochains, which are linear functions mapping the h-chains to real numbers. The boundary and coboundary operators admit analogous structures in the dual space, called the dual boundary and dual coboundary, defined by duality with these operators.
Under the correspondence between the exterior algebra and simplicial complexes, differential forms are equivalent to cochains, while the exterior derivative and codifferential correspond to the dual boundary and dual coboundary, respectively. The Dirac–Kähler equation can therefore be written on simplicial complexes, as sketched below.
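Schematically (a hedged reconstruction of the elided equation, mirroring the continuum form above), with lattice operators standing in for the exterior derivative and codifferential under the correspondence just described:

```latex
(\mathrm{d}^{\Delta} - \delta^{\Delta} + m)\,\Phi = 0 ,
```

where d^Δ and δ^Δ denote the lattice analogues of d and δ acting on cochains.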
The resulting discretized Dirac–Kähler fermion is equivalent to the staggered fermion found in lattice field theory, which can be seen explicitly through a change of basis. This equivalence shows that the continuum Dirac–Kähler fermion is the formal continuum limit of staggered fermions.
Relation to the Dirac equation
As described previously, the Dirac–Kähler equation in flat spacetime is equivalent to four copies of the Dirac equation, despite being a set of equations for antisymmetric tensor fields. The ability of integer-spin tensor fields to describe half-integer-spin spinor fields is explained by the fact that Lorentz transformations do not commute with the internal Dirac–Kähler symmetry, the parameters of this symmetry being tensors rather than scalars. This means that Lorentz transformations mix different spins together, so the Dirac fermions are not, strictly speaking, half-integer spin representations of the Clifford algebra. They instead correspond to a coherent superposition of differential forms. In higher dimensions, particularly on even-dimensional surfaces, the Dirac–Kähler equation is similarly equivalent to a number of copies of the Dirac equation.
In curved spacetime, the Dirac–Kähler equation no longer decomposes into four Dirac equations. Rather, it is the modified Dirac equation that would be acquired if the Dirac operator remained the square root of the Laplace operator, a property not shared by the Dirac equation in curved spacetime. This comes at the expense of Lorentz invariance, although these effects are suppressed by powers of the Planck mass. The equation also differs in that its zero modes on a compact manifold are always guaranteed to exist whenever some of the Betti numbers are non-vanishing, being given by the harmonic forms, unlike the Dirac equation, which never has zero modes on a manifold with positive curvature.
See also
Fermion doubling
Lattice QCD
References
Theoretical physics
Dirac equation
Lattice field theory
Lev Landau | Dirac–Kähler equation | [
"Physics"
] | 1,232 | [
"Theoretical physics",
"Eponymous equations of physics",
"Equations of physics",
"Dirac equation"
] |
71,668,690 | https://en.wikipedia.org/wiki/Regulator%20of%20CO%20metabolism | Regulator of CO Metabolism (RcoM) is a heme-containing transcription factor found in bacteria that senses carbon monoxide (CO). In the presence of carbon monoxide, this protein upregulates expression of genes involved in carbon monoxide oxidation or carbon monoxide stress response. RcoM is functionally related to another heme-containing transcription factor, CooA, but RcoM shares no structural relationship with CooA. RcoM is composed of an N-terminal Per-Arnt-Sim (PAS) domain and a C-terminal LytTR domain. The PAS domain binds a single molecule of heme and the LytTR domain binds to DNA upstream of carbon monoxide oxidation genes. The RcoM homolog from Paraburkholderia xenovorans is known to be dimeric and binds heme using a histidine and a methionine ligand in the Fe(II) oxidation state. Carbon monoxide replaces the methionine ligand and binds directly to the heme to activate RcoM for DNA binding. Relative to other heme-containing proteins, RcoM has an extraordinarily high CO affinity, with a Kd < 100 pM, allowing this protein to sense very low levels of carbon monoxide.
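For a sense of what a dissociation constant below 100 pM means in practice, the equilibrium fraction of heme sites occupied follows from simple single-site binding, theta = [CO] / (Kd + [CO]). The sketch below is illustrative only: the 100 pM value is the reported upper bound, and the CO concentration is an arbitrary example.

```python
def fraction_bound(co_molar, kd_molar=100e-12):
    """Equilibrium fraction of single-site receptors bound by ligand.

    theta = [CO] / (Kd + [CO]); kd_molar defaults to the 100 pM upper
    bound reported for RcoM (an assumption, not a measured value).
    """
    return co_molar / (kd_molar + co_molar)

# Even at 1 nM CO, a very low concentration, most sites are occupied.
print(fraction_bound(1e-9))  # ~0.91
```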
References
Prokaryote genes
Carbon monoxide
Heme enzymes
Transcription factors | Regulator of CO metabolism | [
"Chemistry",
"Biology"
] | 272 | [
"Protein stubs",
"Gene expression",
"Prokaryotes",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Prokaryote genes",
"Transcription factors"
] |
63,007,621 | https://en.wikipedia.org/wiki/Geometric%20Folding%20Algorithms | Geometric Folding Algorithms: Linkages, Origami, Polyhedra is a monograph on the mathematics and computational geometry of mechanical linkages, paper folding, and polyhedral nets, by Erik Demaine and Joseph O'Rourke. It was published in 2007 by Cambridge University Press.
A Japanese-language translation by Ryuhei Uehara was published in 2009 by the Modern Science Company.
Audience
Although aimed at computer science and mathematics students, much of the book is accessible to a broader audience of mathematically sophisticated readers with some background in high-school-level geometry.
Mathematical origami expert Tom Hull has called it "a must-read for anyone interested in the field of computational origami".
It is a monograph rather than a textbook, and in particular does not include sets of exercises.
The Basic Library List Committee of the Mathematical Association of America has recommended this book for inclusion in undergraduate mathematics libraries.
Topics and organization
The book is organized into three sections, on linkages, origami, and polyhedra.
Topics in the section on linkages include
the Peaucellier–Lipkin linkage for converting rotary motion into linear motion,
Kempe's universality theorem that any algebraic curve can be traced out by a linkage,
the existence of linkages for angle trisection,
and the carpenter's rule problem on straightening two-dimensional polygonal chains.
This part of the book also includes applications to motion planning for robotic arms, and to protein folding.
The second section of the book concerns the mathematics of paper folding and mathematical origami. It includes the NP-completeness of testing flat foldability (a single-vertex special case is sketched after this list),
the problem of map folding (determining whether a pattern of mountain and valley folds forming a square grid can be folded flat),
the work of Robert J. Lang using tree structures and circle packing to automate the design of origami folding patterns,
the fold-and-cut theorem according to which any polygon can be constructed by folding a piece of paper and then making a single straight cut,
origami-based angle trisection,
rigid origami,
and the work of David A. Huffman on curved folds.
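One ingredient in flat-foldability results of the kind listed above is Kawasaki's theorem: a single-vertex crease pattern folds flat exactly when the alternating sum of its sector angles is zero. The following is a minimal sketch of that check, not an algorithm taken from the book itself:

```python
def kawasaki_flat_foldable(angles, tol=1e-9):
    """Test Kawasaki's condition for a single-vertex crease pattern.

    `angles` lists the consecutive sector angles (in degrees) between
    creases around the vertex; they must sum to 360.  The pattern is
    flat-foldable iff the alternating sum of the angles is zero.
    """
    if len(angles) % 2 != 0:
        return False  # a flat-folded vertex needs an even number of creases
    if abs(sum(angles) - 360.0) > 1e-6:
        raise ValueError("sector angles must sum to 360 degrees")
    alternating = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return abs(alternating) < tol

print(kawasaki_flat_foldable([90, 45, 45, 90, 45, 45]))  # True
```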
In the third section, on polyhedra, the topics include polyhedral nets and Dürer's conjecture on their existence for convex polyhedra, the sets of polyhedra that have a given polygon as their net, Steinitz's theorem characterizing the graphs of polyhedra, Cauchy's theorem that every polyhedron, considered as a linkage of flat polygons, is rigid, and Alexandrov's uniqueness theorem stating that the three-dimensional shape of a convex polyhedron is uniquely determined by the metric space of geodesics on its surface.
The book concludes with a more speculative chapter on higher-dimensional generalizations of the problems it discusses.
References
External links
Authors' web site for Geometric Folding Algorithms including contents, errata, and advances on open problems
Linkages (mechanical)
Paper folding
Polyhedra
Computational geometry
Mathematics books
2007 non-fiction books
2009 non-fiction books | Geometric Folding Algorithms | [
"Mathematics"
] | 631 | [
"Recreational mathematics",
"Computational geometry",
"Computational mathematics",
"Paper folding"
] |
70,221,526 | https://en.wikipedia.org/wiki/Pedersen%20process | The Pedersen process is a process for refining aluminum that first separates iron by reducing it to metal, while the alumina reacts with lime to produce calcium aluminate, which is then leached with sodium hydroxide. It is more environmentally friendly than the better-known Bayer process, because instead of producing bauxite residue, also known as red mud, it produces pig iron as a byproduct. Red mud is considered both an economic and environmental challenge in the aluminum industry because it is a waste product with little benefit: it harms the environment with its high pH, and it is costly to manage, even in a landfill. Iron, by contrast, is used in the manufacture of steel, and has structural uses in civil engineering and chemical uses as a catalyst.
History
The Pedersen Process was invented by Harald Pedersen in the 1920s and used in Norway for over 40 years before shutting down due to the Pedersen Process being less economically competitive than the Bayer Process. However, it is believed a modern Pedersen process could be economically viable with "low-quality" bauxite, as even though "low-quality" bauxite has less alumina in the form of trihydrate gibbsite, it has more iron oxide which would be converted to pig iron in the smelting process instead of red mud.
Use in aluminum smelting
In most of today's smelting, aluminum ore, also known as bauxite, is first refined into alumina through the Bayer process. This step could be replaced by the Pedersen process; either route results in alumina. Unlike the smelting of iron and coal into steel, or of copper and tin into bronze, which require thermal energy, alumina must be smelted with electrical energy. This is done through the Hall–Héroult process, producing 99.5–99.8% pure aluminum.
References
Aluminium industry
Metallurgical processes
Chemical processes | Pedersen process | [
"Chemistry",
"Materials_science"
] | 399 | [
"Metallurgical processes",
"Metallurgy",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
70,223,505 | https://en.wikipedia.org/wiki/Grigory%20E.%20Volovik | Grigory (or Grigori or Grigorii) Efimovich Volovik (Григорий Ефимович Воловик; born 7 September 1946 in Moscow) is a Russian theoretical physicist, who specializes in condensed matter physics. He is known for the Volovik effect.
Education and career
After graduating in 1970 from the Moscow Institute of Physics and Technology, Volovik became a graduate student at Moscow's Landau Institute for Theoretical Physics, where he received his Russian Candidate of Science degree (Ph.D.) in 1973. His thesis was on Dynamics of a particle strongly interacting with a Bose System. He has held since 1973 an appointment as a staff member of the Landau Institute and since 1993 a simultaneous appointment as a professor at the Low Temperature Laboratory (now called the Olli Lounasmaa Laboratory) at the Helsinki University of Technology (now called Aalto University). In 1981 he received from the Landau Institute his Russian Doctor of Sciences degree (habilitation). His Russian doctoral thesis was on Topology of defects in condensed matter. He is the author or co-author of over 450 research publications.
Volovik won in 1992 the Landau Gold Medal. He received in 2004 the Simon Memorial Prize "for his pioneering research on the effects of symmetry in superfluids and superconductors and for extending these concepts to quantum field theory, cosmology, quantum gravity and particle physics." In 2014 he shared the Lars Onsager Prize with Vladimir Petrovich Mineev for "their contribution to a comprehensive classification of topological defects in condensed matter phases with broken symmetry, culminating in the prediction of half-quantum vortices in superfluid He-3 and related systems." Volovik was elected in 2001 a foreign member of the Finnish Academy of Science and Letters and in 2007 of the German Academy of Sciences Leopoldina.
Volovik's research deals with low temperature quantum liquids (such as liquid helium), superfluids, unconventional superconductivity (e.g. in systems of heavy fermions), the physics of glasses and liquid crystals, quantum turbulence, the intrinsic quantum Hall effect, and coherent states in Larmor precession. He proposed ideas and novel experiments to investigate analogies between phenomena of quantum field theory and astrophysics and phenomena of solid state physics. He proposed a solution to the problem of the cosmological constant from analogies to solid state physics, in which, unlike particle physics and quantum gravity, the microscopic model is precisely known. In 2010, with Frans R. Klinkhamer, he published Towards a solution of the cosmological constant problem.
Volovik collaborated with the experimentalist Yuri Mikhailovich Bunkov on the study of particle physics analogues and phenomena in helium-3. In quantum field theory, liquid helium-3 is a good model of the vacuum state in elementary particle physics, with fermions as elementary excitations and bosons such as photons, gravitons and gluons as collective ones. According to Volovik's research, excitations and fundamental physical symmetry laws such as gauge and Lorentz invariance are "emergent" laws at sufficiently low temperatures. His view of the emergence of gravitation as a collective vacuum excitation stands in Russia in the tradition of a theory by Andrei Sakharov. In the case of helium-3, this is expressed by the loss of symmetry at high energies (gas) and the formation (emergence) of symmetries such as translational invariance in the superfluid state at low temperatures. In between lies a phase with a global U(1) and two SO(3) symmetries and, at even lower temperatures, the A-phase with additional symmetries which, according to Volovik, are analogous to the observed symmetries (i.e., Lorentz and gauge symmetries and general covariance) of the Standard Model. Volovik calls the latter phenomenon "anti-GUT".
He investigated many-body problems from the point of view of classifying their properties as topological defects. In 2007 he published a Fermi point scenario making the assumption that gravity is "an emergent low-energy phenomenon arising from a topologically stable defect in momentum space". He did research on the topological invariants of the Standard Model and the possible topological quantum phase transitions that occur between the Standard Model's vacuum states.
In the first decade of the 21st century he served on the steering committee of the European Science Foundation's program Cosmology in the Laboratory (COSLAB).
Books
The Universe in a Helium Droplet. Clarendon Press, Oxford 2003; 2009 edition. (over 3000 citations)
Exotic properties of superfluid 3He. World Scientific 1992.
with Mário Novello and Matt Visser (eds.): Artificial Black Holes. World Scientific, 2002 (with a chapter by Volovik: Effective Gravity and quantum vacuum in superfluids), pp. 127–178
with R. Huebener and N. Schopohl (eds.): Vortices in unconventional superconductors and superfluids. Springer Verlag, 2002; 2013 edition (with an introduction by Volovik: The beautiful world of the vortex, pp. 1–4)
References
External links
1946 births
Living people
Moscow Institute of Physics and Technology alumni
Landau Institute for Theoretical Physics alumni
Academic staff of Aalto University
20th-century Russian physicists
21st-century Russian physicists
Soviet physicists
Condensed matter physicists
Russian theoretical physicists
Members of the Finnish Academy of Science and Letters
Members of the German National Academy of Sciences Leopoldina
Russian expatriates in Finland | Grigory E. Volovik | [
"Physics",
"Materials_science"
] | 1,183 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
56,049,870 | https://en.wikipedia.org/wiki/Galactic%20superwind | A galactic superwind, or just galactic wind, is a high-velocity wind emanating from newly formed massive stars, from spiral density waves, or from the effects of supermassive black holes. Galactic superwinds are normally observed in starburst galaxies.
Description
Galactic winds are strong stellar winds made up of charged particles, ejecta, and varying amounts of hot and cool gas, interacting with enough force that the ejecta's kinetic energy is converted to thermal energy. The resulting effect is a massive gust of rapidly expanding super-heated gases that can span the length of a galaxy.
In galaxies with active galactic nuclei, galactic winds can also be driven by the effects of super-massive black holes.
Galactic winds are considered an important function in the evolution of a galaxy. The winds cause an outflow of gas and other material into the halo of a galaxy, while also facilitating the spread of metals around a galaxy. Galactic winds are also capable of blowing material out of a galaxy entirely and into the intergalactic medium.
Formation
Superwinds are theorized to form in compact starburst galaxies, in which star formation is much higher than in other types of galaxies. This accelerated star formation results in more prevalent stellar winds in starburst galaxies. Superwinds form when ejecta released either by supernovae or stellar winds collide with such force that the shock from the impact converts the kinetic energy of the ejecta into thermal energy. The violent conversion from kinetic to thermal energy prevents a significant amount of energy from being radiated away. This in turn creates an incredibly hot bubble of gas that is under much greater pressure than its surroundings. Eventually the gas bubble expands to encompass other particles of ejected gases, further increasing the force and size of its expansion. This "snowplow" effect results in a gust of stellar wind and gas that can span the width of a galaxy. It has been theorized that superwinds may be traveling at velocities of several thousand kilometers per second by the time they enter the intergalactic medium.
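A back-of-the-envelope estimate shows how thermalized energy translates into wind speed: if the thermal energy input rate is fully converted back into bulk kinetic energy of a mass outflow, then v = sqrt(2E/M) per unit time. The input values in the sketch below are illustrative assumptions, not measurements of any particular galaxy:

```python
import math

M_SUN_G = 1.989e33  # grams per solar mass
YEAR_S = 3.156e7    # seconds per year

def terminal_wind_speed(edot_erg_per_s, mdot_msun_per_yr):
    """Terminal speed (km/s) of a thermally driven wind, assuming the
    energy input rate is fully converted into bulk kinetic energy:
    0.5 * Mdot * v**2 = Edot  =>  v = sqrt(2 * Edot / Mdot)."""
    mdot_g_per_s = mdot_msun_per_yr * M_SUN_G / YEAR_S
    v_cm_per_s = math.sqrt(2.0 * edot_erg_per_s / mdot_g_per_s)
    return v_cm_per_s / 1.0e5  # convert cm/s to km/s

# Illustrative starburst-like numbers: 1e42 erg/s driving 1 solar mass/yr.
print(terminal_wind_speed(1e42, 1.0))  # ~1800 km/s
```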
See also
Cosmic wind
Stellar wind
Solar wind
Planetary wind
Stellar-wind bubble
Colliding-wind binary
Pulsar wind nebula
Superwind
References
Stellar evolution
Solar phenomena
Stellar phenomena | Galactic superwind | [
"Physics"
] | 455 | [
"Physical phenomena",
"Astrophysics",
"Stellar evolution",
"Solar phenomena",
"Stellar phenomena"
] |
56,052,525 | https://en.wikipedia.org/wiki/Linus%20%28fusion%20experiment%29 | The Linus program was an experimental fusion power project developed by the United States Naval Research Laboratory (NRL) starting in 1971. The goal of the project was to produce a controlled fusion reaction by compressing plasma inside a metal liner. The basic concept is today known as magnetized target fusion.
The reactor design was based on the mechanical compression of a molten metal liner. A chamber would be filled with molten metal and rotated along one axis, creating a cylindrical cavity in the center. A suitable fusion fuel, heated to several thousand degrees to form it into a plasma, is injected into the center of the cavity. The metal is then rapidly collapsed, and due to the conservation of magnetic flux within the metal, the plasma is confined within the resulting collapsing shell and is itself collapsed. The adiabatic process would raise the temperature and density of the trapped plasma to fusion conditions.
The use of a liquid metal liner has many advantages over previous Soviet experiments that imploded cylindrical solid metal liners to achieve high-energy-density fusion. The liquid metal liner provided the benefits of recovering the heat energy of the reaction, absorbing neutrons, transferring kinetic energy, and replacing the plasma-facing wall during each cycle. Added benefits of a liquid liner include greatly simplified servicing of the reactor, reducing radioactivity, protecting the permanent sections of the reactor from neutron damage, and reducing the danger from flying debris.
The concept was revived in the 2000s as the basis for the General Fusion design, currently being built in Canada.
Conceptual design
In the Linus concept, the reactor chamber consists of a drum filled with a liquid metal liner, typically molten lead-lithium. The drum is spun, creating centrifugal force which causes the liquid to be forced onto the inside wall of the container. There is only enough liquid metal to fill perhaps 20% of the total volume, so a large open area in the middle forms during rotation. For operation, a system, typically consisting of pistons, is used to drive additional liquid metal into the drum. This causes the entire liner to be forced inward. In experimental systems, this provided about ten-to-one compression. The extra metal is then removed again by releasing the pistons, causing the compression to reverse and the metal reach the original position at the outside of the drum.
To create fusion, a fusion-fuel plasma is injected into the cavity before the piston stroke. Because of magnetic interactions in the metal, the plasma in the cavity is forced inward as well. This compression causes the plasma temperature to increase through the adiabatic process, raising it to fusion-relevant temperatures of around 100 million K and very high pressures. At these temperatures and pressures, the rate of fusion, according to the fusion triple product, is very rapid, and the burn completes before the mechanical compression reverses. The energy released by these reactions, in the case of the typical deuterium-tritium (D-T) fuel, is mostly in the form of high-energy neutrons of about 14.1 MeV. These are captured in the liquid metal, raising its temperature. Some of the neutrons will interact with the lithium in the liner, undergoing a nuclear reaction that produces new tritium. In a functioning reactor, the energy would then be extracted using a steam generator, as in conventional heat-driven power plants, while the tritium would be extracted through a variety of chemical processes.
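The adiabatic heating mentioned above can be made concrete with the ideal-gas scaling for a cylindrical implosion: the cavity volume scales as the radius squared, so T2/T1 = (r1/r2)^(2(gamma-1)), or (r1/r2)^(4/3) for gamma = 5/3. The sketch below inverts this to find the radial compression needed for a given temperature gain; the initial temperature is an illustrative assumption, not a Linus design figure:

```python
def radial_compression_needed(t_initial_k, t_target_k, gamma=5.0 / 3.0):
    """Radial compression ratio r1/r2 needed to heat a cylindrically
    imploding ideal plasma adiabatically from t_initial_k to t_target_k.

    Volume scales as r**2, so T2/T1 = (r1/r2)**(2*(gamma - 1)).
    """
    return (t_target_k / t_initial_k) ** (1.0 / (2.0 * (gamma - 1.0)))

# Assumption: fuel preheated to ~1 million K, target 100 million K.
print(radial_compression_needed(1.0e6, 1.0e8))  # ~31.6
```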
A key advantage of the Linus concept is that the compression cycle is reversible, in contrast to other concepts that use thin solid metal shells that can only be used once. This allows the system to run continually, limited generally by the ability to clear out the results of the last reaction and generate and inject new fuel plasma, on a timescale of a few seconds. Additionally, systems using non-rotating shells are subject to the Rayleigh-Taylor instability and have proven extremely difficult to stabilize. The rotation of the liquid in Linus suppresses these instabilities. Finally, the metal protects the rest of the reactor from the neutron flux, which is a major problem in other designs.
History
The Linus effort ultimately traces its history to a discussion between Ramy Shanny of the United States Naval Research Laboratory (NRL) and Evgeny Velikhov of the Kurchatov Institute.
The basic idea of super-high magnetic fields as a path to fusion had been considered as early as the 1950s by Andrei Sakharov, who proposed imploding metal liners to produce the required field. The concept was not picked up until the 1960s, when Velikhov began small-scale experiments. It was realized that the cost of the metal liners would likely be higher than the value of the electricity they would produce, the "kopeck problem", and they considered the idea of using a liquid metal liner instead.
Shanny asked about how such a system would be stabilized against Rayleigh-Taylor issues. Velikhov misunderstood the question, thinking he was asking how it would be stabilized against gravity within the drum. He replied that they would spin it. Shanny, believing Velikhov was saying spinning would address Rayleigh-Taylor problems, performed the calculations and found that it did indeed stabilize these instabilities. The Linus program was born.
Suzy I
To gain experience with the concept, NRL initially built liner imploders. The first experimental device was Suzy, constructed in 1971 under the direction of D.C. dePackh. The system used solid metal liners, like the Soviet experiments and many later devices. The liner was driven inward through the theta pinch process, using a capacitor bank.
Suzy II
A.E. Robson and P.J. Turchi joined the program in 1972, and dePackh departed NRL. Robson and Turchi continued the development of the concept with Suzy II, a system similar to what then became known as Suzy I, but much larger and equipped with a larger capacitor bank power supply. Suzy II compressed liners to about one twenty-eighth of their initial diameter, giving an overall compression ratio of 28:1. Very high pressures were achieved during the implosions.
With the success of the Suzy II experiments, attention turned to the liquid liner. This was built on Suzy II using a plastic liner inside a steel drum, filled with sodium-potassium alloy (NaK) at its eutectic ratio (22% Na, 78% K) which is a liquid at room temperature. By firing the implosion bank at different powers, the relationship between implosion speed and rotation speed could be tested. As long as the rotational speed is high enough, as the liner compressed and its rotational speed increased due to conservation of angular momentum, the centripetal force kept the apparent gravity vector pointed outward. This stabilizes against R-T instabilities because it is the lighter fluid in the center falling outward, a naturally stable condition.
Suzy II was successful in producing a stable inward compression of the liner, but unfortunately the reverse was not true. As the liner began to expand again when the compression current was turned off, a heavy fluid was once again moving into a lighter one, and the R-T instabilities reappeared. This caused the liner to break up into droplets which, due to their high mass and velocity, impacted the container randomly with their entire embodied energy. In a production machine, this would be on the order of 100 MJ, equivalent to roughly 25 kilograms of TNT.
Piston implosion experiments
The solution to the liner breakup during expansion is to fill the void with additional liner material. This precludes the use of electromagnetic drivers as in Suzy, and attention turned to using a mechanical piston driving material from a reservoir into the main chamber. The piston was driven by compressed gas.
Several experimental machines followed. The first, the "water model", consisted of a drum of water with pistons positioned radially around it. The entire system spun, including the pistons. This verified the basic approach but was problematic as the piston timing proved difficult to control with the required accuracy. This problem was addressed by a new piston layout with the pistons arranged annularly that could be fired by a single source. This proved to solve the problems and plans began to build larger devices.
Linus-0
With the success of the piston models, plans began for a larger machine similar in size and energy to the Suzy II machine. This led to the Linus-0 design, which consisted of a steel rotor surrounded by a gas cylinder that was pressurized using a series of small high-explosive DATB charges, also known as the polymer-bonded explosive PBXN, chosen for its high melting point, low particulate matter, and comparatively low cost. The charges were loaded into a series of ports on one end of the device and fired just before the experimental run to pressurize the system. The inner rotor was spun up using a 454-cubic-inch Chevrolet V8 engine.
Linus-0 proved to be slow to build because the only machine shop large enough to make the rotor was busy with other tasks, and the device was not completed until 1978, shortly before the program closed down. Nevertheless, the system was used with water and proved able to make repeatable shots in the short time it was operational. During data collection, Linus-0 was fired as often as three times daily.
Helius
The delays in the construction of Linus-0 led to the construction of a half-scale version, Helius. It was designed to use liquid sodium and potassium in the liner chamber, although in practice the use of water was sufficient for the hydrodynamic studies. In the experiment, the liquid sodium-potassium liners were imploded using high-pressure helium to drive mechanical pistons.
Project fate
The initial proposals for the Linus designs were based on the cylindrical collapse of the liner with a continuous plasma inside. This arrangement meant there was nothing to confine the plasma from being squirted out the ends of the imploding cylinder of metal. This was not necessarily a problem; both the liner and the plasma would move at the speed of sound, but because the speed of sound in the metal is much higher than in the plasma, most of the plasma would not have time to move before it had already completed the reaction. There was some concern about bad curvature at the ends of the cylinder, which can lead to the interchange instability that operates much faster than the speed of sound. The magnitude of this effect, if it was present at all, was not explored.
The disadvantage of this approach was that some plasma did escape, and that amount increased as the speed of the implosion decreased. To get a reasonable reaction rate, very large driver energies were required. While this was not impossible to achieve, it still represented a significant capital cost for the required energy-storage system, and the resulting high-energy, high-speed implosion represented an engineering challenge.
Linus was being developed while another fusion concept was first emerging, the field-reversed configuration, or FRC. This is essentially a smoke ring of plasma that is naturally stable until it cools. Using an FRC inside the machine would provide natural confinement at the ends of the cylinder, preventing the plasma from escaping. This would significantly reduce the required implosion energy, and thus lower the size and cost of the machine as a whole.
At the time, FRCs were very new technology. But as they appeared to represent a significant advance in the state of the art, potentially making a successful fusion system even without the implosion, NRL's interest quickly changed to the underlying physics of the FRC. Experiments on Linus-0 and Helius were relatively brief, due in part to delays incurred in the design, fabrication, and assembly phases. No time was allocated to recover from delays or unexpected challenges, and the machines were eventually disassembled and placed in storage.
The Linus project encountered several engineering problems which limited its performance and thus its attractiveness as an approach to commercial fusion power. These issues included the performance of the plasma preparation and injection method, the ability to achieve reversible compression–expansion cycles, problems with magnetic flux diffusion into the liner material, and the ability to remove the vaporized liner material from the cavity in the short interval between cycles, which was not accomplished. Shortcomings also occurred in the design of the inner mechanism which pumped the liquid-metal liner.
Another major problem encountered involved hydrodynamic instabilities in the liquid liner. If the liquid was imprecisely compressed, the plasma boundaries could undergo Rayleigh–Taylor instability. This condition could quench the fusion reaction by reducing compression efficiency, and by injecting liner material (vaporized lead and lithium) contaminants into the plasma. Both effects reduce the efficiency of fusion reactions. Strong instability could even cause damage to a reactor. Synchronizing the timing of the compression system was not possible with the technology of the time, and the proposed design was canceled.
See also
Electromagnetic forming
General Fusion
Magnetized target fusion
Shiva Star
Notes
References
Bibliography
Nuclear power
Magnetic confinement fusion devices | Linus (fusion experiment) | [
"Physics",
"Chemistry"
] | 2,673 | [
"Physical quantities",
"Nuclear power",
"Power (physics)",
"Particle traps",
"Magnetic confinement fusion devices"
] |
56,054,765 | https://en.wikipedia.org/wiki/Sigma%20electron%20donor-acceptor | The sEDA parameter (sigma electron donor-acceptor) is a sigma-electron substituent effect scale, also described as an inductive and electronegativity-related effect. There is also a complementary scale, pEDA. The more positive the value of sEDA, the more sigma-electron-donating the substituent; the more negative the value, the more sigma-electron-withdrawing the substituent (see below).
The sEDA parameter for a given substituent is calculated by means of quantum chemistry methods. The model molecule is monosubstituted benzene. First the geometry is optimized at a suitable level of theory, then a natural population analysis within the framework of natural bond orbital (NBO) theory is performed. The molecule has to be oriented so that the aromatic benzene ring lies in the xy plane, perpendicular to the z-axis. Then the 2s, 2px and 2py orbital occupations of the ring carbon atoms are summed to give the total sigma-system occupation. From this value the corresponding sum for unsubstituted benzene is subtracted, giving the sEDA parameter. For sigma-electron-donating substituents like -Li, -BH2, -SiH3, the sEDA parameter is positive, and for sigma-electron-withdrawing substituents like -F, -OH, -NH2, -NO2, -COOH the sEDA is negative.
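Once the natural population analysis has been run, the arithmetic of the descriptor is a simple difference of summed occupations. The sketch below assumes the per-carbon sigma occupations (2s + 2px + 2py, ring in the xy plane) have already been extracted from the NBO output; the function and its inputs are illustrative, and no real occupation data is shown:

```python
def seda(sigma_occ_substituted, sigma_occ_benzene):
    """sEDA descriptor from summed sigma-orbital occupations.

    Each argument lists, for the six ring carbons, the sum of the natural
    2s + 2px + 2py occupations (ring oriented in the xy plane).  A positive
    result marks a sigma-electron donor, a negative one a sigma acceptor.
    """
    if len(sigma_occ_substituted) != 6 or len(sigma_occ_benzene) != 6:
        raise ValueError("expected occupations for six ring carbons")
    return sum(sigma_occ_substituted) - sum(sigma_occ_benzene)
```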
The sEDA scale was invented by Wojciech P. Oziminski and Jan Cz. Dobrowolski and the details are available in the original paper.
The sEDA scale linearly correlates with experimental substituent constants like Taft-Topsom σR parameter.
For easy calculation of sEDA, the Tcl program AromaTcl, which has a graphical user interface and is free of charge for academic purposes, is available.
Sums of sigma-electron occupations and sEDA parameters for substituents of various character are gathered in a table in the original paper.
References
Organic chemistry
Quantum chemistry
Chemical bond properties | Sigma electron donor-acceptor | [
"Physics",
"Chemistry"
] | 437 | [
"Chemical bond properties",
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"nan",
"Atomic, molecular, and optical physics"
] |
56,054,874 | https://en.wikipedia.org/wiki/Static%20synchronous%20series%20compensator | A static synchronous series compensator (SSSC) is a type of flexible AC transmission system device which consists of a solid-state voltage source inverter coupled with a transformer that is connected in series with a transmission line. This device can inject an almost sinusoidal voltage in series with the line. The injected voltage acts as an inductive or capacitive reactance connected in series with the transmission line, providing controllable voltage compensation. In addition, the SSSC is able to reverse the power flow by injecting a sufficiently large series reactive compensating voltage.
The SSSC consists of a voltage source converter (VSC) connected in series with the transmission line through a transformer. The VSC, a power electronic device, converts direct current (DC) power into alternating current (AC) power, enabling the injection of the desired voltage. By controlling the magnitude and phase angle of this injected voltage, the SSSC can effectively modify the line's impedance.
One of the primary functions of the SSSC is to improve power flow control. By adjusting the line impedance, the SSSC can regulate the amount of power flowing through a specific transmission line. This is particularly useful for balancing power flows between different regions of a power system or for optimizing the utilization of existing transmission infrastructure.
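The effect of series compensation on power flow can be illustrated with the standard lossless-line transfer equation, P = Vs * Vr * sin(delta) / X: injecting a capacitive voltage lowers the effective series reactance and raises transfer capability. A minimal sketch with illustrative per-unit values (assumptions, not data from any actual installation):

```python
import math

def power_transfer(v_send, v_recv, delta_deg, x_line, x_injected):
    """Active power (per unit) over a lossless line with an SSSC-style
    series reactance injection: P = Vs*Vr*sin(delta) / (X_line + X_inj).
    A capacitive injection (x_injected < 0) increases the transfer."""
    return v_send * v_recv * math.sin(math.radians(delta_deg)) / (x_line + x_injected)

print(power_transfer(1.0, 1.0, 30.0, 0.5, 0.0))    # uncompensated: 1.00 p.u.
print(power_transfer(1.0, 1.0, 30.0, 0.5, -0.15))  # compensated: ~1.43 p.u.
```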
Furthermore, the SSSC can enhance the stability of the power system by damping power oscillations. Power oscillations can occur due to disturbances such as sudden load changes or faults. The SSSC can quickly respond to these disturbances by injecting appropriate voltages, thereby stabilizing the system and preventing cascading failures.
In addition to power flow control and stability enhancement, the SSSC can also be used to improve voltage profile and mitigate voltage fluctuations. By injecting reactive power, the SSSC can regulate the voltage levels at various points in the power system, ensuring that they remain within acceptable limits. This is particularly important for maintaining the quality of power supply to consumers.
See also
Active power filter
Static synchronous compensator (STATCOM), a similar shunt-connected device
Unified power flow controller, a combination of SSSC and STATCOM
Dynamic voltage restoration
References
Electric power transmission
Power engineering
Power electronics | Static synchronous series compensator | [
"Engineering"
] | 475 | [
"Energy engineering",
"Electronic engineering",
"Power engineering",
"Electrical engineering",
"Power electronics"
] |
56,057,331 | https://en.wikipedia.org/wiki/Non-covalent%20interactions%20index | The non-covalent interactions index, commonly referred to simply as non-covalent interactions (NCI), is a visualization index based on the electron density (ρ) and the reduced density gradient (s). It is based on the empirical observation that non-covalent interactions can be associated with regions of small reduced density gradient at low electron density. In quantum chemistry, the non-covalent interactions index is used to visualize non-covalent interactions in three-dimensional space.
Its visual representation arises from the isosurfaces of the reduced density gradient colored by a scale of strength. The strength is usually estimated through the product of the electron density and the second eigenvalue (λ) of the Hessian of the electron density in each point of the isosurface, with the attractive or repulsive character being determined by the sign of λ. This allows for a direct representation and characterization of non-covalent interactions in three-dimensional space, including hydrogen bonds and steric clashes. Being based on the electron density and derived scalar fields, NCI indexes are invariant with respect to the transformation of molecular orbitals. Furthermore, the electron density of a system can be calculated both by X-ray diffraction experiments and theoretical wavefunction calculations.
The reduced density gradient (s) is a dimensionless scalar field built from the electron density (ρ) and its gradient. Within the density functional theory framework, the reduced density gradient arises in the definition of the generalized gradient approximation of the exchange functional, where the original definition is written in terms of the Fermi momentum of the free electron gas, as shown below.
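The elided formulas take the standard form found in the DFT literature (a reconstruction from the surrounding definitions), with k_F the Fermi momentum of the free electron gas:

```latex
s(\mathbf{r}) = \frac{|\nabla\rho(\mathbf{r})|}{2\,(3\pi^{2})^{1/3}\,\rho(\mathbf{r})^{4/3}}
              = \frac{|\nabla\rho(\mathbf{r})|}{2\,k_{F}\,\rho(\mathbf{r})},
\qquad
k_{F} = \bigl(3\pi^{2}\,\rho(\mathbf{r})\bigr)^{1/3} .
```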
The NCI was developed by Canadian computational chemist Erin Johnson while she was a postdoctoral fellow at Duke University in the group of Weitao Yang.
References
Chemical bonding | Non-covalent interactions index | [
"Physics",
"Chemistry",
"Materials_science"
] | 359 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
56,057,874 | https://en.wikipedia.org/wiki/Fashion%20design%20copyright | Fashion design copyright refers to the web of domestic and international laws that protect unique clothing or apparel designs. The roots of fashion design copyright may be traced in Europe to as early as the 15th century.
As of 2016, in most countries (including the United States and the United Kingdom), fashion design does not have the same protection as other creative works (art, film, literature, etc.), because apparel (clothes, shoes, handbags, etc.) is classified as "functional items", except where copyright laws can be applied. This explains the success of knockoff businesses, to the detriment of established labels and emerging designers; the latter are especially damaged because they rely on relatively few designs.
History
French king Francis I gave out specific privileges related to the production of textiles. By 1711, in Lyon, illegalities were already being defined in regard to fashion materials, and in 1787 fashion designers in England and Scotland successfully pushed their need for protection into basic legislation. In 1876 Germany began protecting fashion patterns as well as models, and in 2002 European regulation brought under protection designs that were new and had individual character. From 2004 to 2006 the "total production volume for clothing decreased by about 5% each year... [and by] 2006 the European union trade deficit for clothing was at 33.7 billion." These statistics show that while Europe's more advanced design legislation has benefits, economic and external factors still hindered industry growth in ways the U.S. can empathize with. As 2007 came to a close, WIPO, the World Intellectual Property Organization, had registered twenty-nine international designs.
Current regulation
The protection of fashion design varies greatly from one country to the other.
European Union
Unlike in the USA, when the laws regarding the clothing industry were created in Europe, the continent had a booming fashion industry that had already started to reshape the clothing manufacturing industry.
In the European Union, the Creative Designs Directive and the European Designs Directive are in effect to protect new designs for three or five years. The European Union Intellectual Property Office is responsible for managing intellectual property in the EU.
United States
The US laws written in 1976 identify fashion as a manufacturing industry rather than a creative one, because fashion design had not yet reshaped the clothing manufacturing industry. The Digital Millennium Copyright Act (DMCA) of 1998 originally brought more limits to fashion design copyrighting, but a sui generis protection for the design of vessel hulls (DMCA Title V: Vessel Hull Design Protection Act, or VHDPA) was included to give more protection to some useful articles. The House of Representatives deemed it fit to enable tighter fashion design copyrights through an extension of the VHDPA. There is no official design rights system, so brands and companies have to use design patents (covering a technical component of the design) and trademarks (names, slogans, logos) to "copyright" their products. Another option for highly recognizable fashion designs is to register them as trade dress with the United States Patent and Trademark Office (e.g., Hermès and the Birkin bag).
In the 2017 Supreme Court case Star Athletica, LLC v. Varsity Brands, Inc., it was ruled that fashion design can be covered by copyright.
This decision enhanced the protection of unique fashion works, which are often knocked off by fast-fashion retailers who turn the vast grey area of fashion copyright into profit.
Infringement cases
From 2009 to 2018, Gucci and Guess were in a copyright feud over the use of a logo: courts in the USA, China and Australia ruled in favor of Gucci, while courts in France and Italy ruled in favor of Guess.
In 2010, Alexander McQueen destroyed all its products containing the Hells Angels' trademarked winged death heads symbol after the motorcycle club threatened to sue.
In the 2012 case of Yves Saint Laurent v. Christian Louboutin, a court ruled that a brand could use red soles as long as the whole shoe is covered in red, because Louboutin's protection of its signature red sole applies only where the sole contrasts with the rest of the shoe.
In the UK, in the 2023 case of Adidas v. Thom Browne, a court ruled that Thom Browne's striped designs did not infringe Adidas' three-stripe signature.
Societal impact
Researcher Johanna Blakley argues that the very lack of regulation of fashion design has allowed the fashion industry to do very well economically, has led to the birth of fast fashion and a much faster turnover of fashion trends, and has enabled pieces of clothing to become pieces of art. She also refers to Tom Ford's point that the people who buy cheap lookalikes are a different demographic from those who buy the original high-end products, and notes that while many exclusive designers get copied, high-end designers also often attribute the inspiration for their creations to street fashion, so the copying is a two-way street.
Digital fashion copyrights
When a garment is replicated digitally, the copyright holder of the physical garment does not necessarily remain the copyright holder of the digital garment. For example, if a design agency makes 3D applications for a fashion company, those 3D animations belong to the agency. Licence agreements are essential if several agencies are involved. Facing digitization, the fashion industry may go through the same disruption the music and film industries went through.
In the Hermès v. MetaBirkins case, the copyright holder of the Birkin bag, the Hermès group, filed a lawsuit against the creator of MetaBirkins, a collection of almost identical NFT bags sold at $450 apiece. It was ruled in 2023 that the NFTs were not protected by the First Amendment and had to respect protected fashion designs.
See also
Design patent
References
External links
How Is Fashion Protected by Copyright Law?, Copyrightalliance.org
Fashion Design and Copyright in the US and EU, Wipo.int
Fashion design
Copyright law | Fashion design copyright | [
"Engineering"
] | 1,205 | [
"Design",
"Fashion design"
] |
56,058,842 | https://en.wikipedia.org/wiki/%C3%89douard%20Guillaume | Édouard Guillaume (1881–1959) was a Swiss physicist and patent examiner, notorious for his published papers attacking Albert Einstein's theory of special relativity. He is also noteworthy for his work on mathematical economics.
Édouard Guillaume was the younger cousin of Charles Édouard Guillaume, who won the Nobel prize in physics in 1920. Both of the Guillaume cousins received doctorates in physics from the Zurich Polytechnique (ETH Zurich). Édouard Guillaume (the younger cousin) worked at the Swiss patent office where Einstein worked from 1902 to 1909. Beginning in 1913 Guillaume began publishing in the Archives des Sciences Physiques et Naturelles papers arguing for a Lorentzian electrodynamics with a universal time. He claimed that Einstein's theory mistakes changes in the units of measurement for physical changes and that time can be regarded as absolute. Guillaume opposed the theory of relativity, though most of his objections were related to special relativity.
Beginning in 1917, Einstein started to reply to some of the letters Guillaume sent to him. The correspondence went on for a number of years, but Einstein was unable to convince Guillaume.
In 1915 he moved from the Swiss patent office to the Swiss Federal Office for Insurance. From 1916 to 1946 when he retired, he worked for the Swiss insurance company La Neuchâteloise, of which he became a director. For the academic year 1936-1937 he lectured on financial economics as a privat docent at the University of Neuchâtel.
Édouard Guillaume was an Invited Speaker of the ICM in 1920 in Strasbourg, where he presented his ideas concerning relativity theory. In 1932 he was an Invited Speaker of the ICM in 1932 in Zurich, where he gave a talk stemming from the Guillaume brothers' work on mathematical economics.
Selected publications
Guillaume, Édouard. "La théorie de la relativité et le temps universel." Revue de Métaphysique et de Morale 25, no. 3 (1918): 285-323.
Guillaume, Édouard. "La théorie de la relativité et sa signification." Revue de Métaphysique et de Morale 27, no. 4 (1920): 423-469.
References
1881 births
1959 deaths
ETH Zurich alumni
Patent examiners
Relativity critics | Édouard Guillaume | [
"Physics"
] | 453 | [
"Relativity critics",
"Theory of relativity"
] |
56,059,315 | https://en.wikipedia.org/wiki/Tortilla%20machine | A tortilla machine, called in Spanish máquina tortilladora, is a machine for processing corn dough (masa) into corn tortillas.
History
The earliest tortilla machines were invented by Evarardo Rodríguez Arce and Luis Romero, and patented in 1904. Their machine formed dough balls into square tortillas, and was not commercially successful.
Mexican inventor Fausto Celorio Mendoza is credited with the invention of the first automatic tortilla machine. Celorio's 1947 machine pressed dough into round flats, then transported the flats to a series of three ovens for baking, and could produce one tortilla per minute. Celorio worked with engineer Alfonso Gándara to improve the machine's product and efficiency, so that by 1963 the machines were capable of producing tortillas at many times that rate per hour.
References
Mexican inventions
Machines | Tortilla machine | [
"Physics",
"Technology",
"Engineering"
] | 177 | [
"Physical systems",
"Machines",
"Mechanical engineering"
] |
56,060,544 | https://en.wikipedia.org/wiki/TLQP-62 | TLQP-62 (amino acids 556-617) is a VGF-derived C-terminal peptide that was first discovered by Trani et al. TLQP-62 is derived from the VGF precursor protein via proteolytic cleavage by the prohormone convertase PC1/3 at the RPR555 site. TLQP-62 is named after its first four N-terminal amino acids and its peptide length.
Function
Although the receptor(s) for TLQP-62 have not been identified so far, extensive studies have demonstrated that it acts on the central nervous system, the peripheral nervous system and endocrine tissue to exert its biological functions.
Synaptic plasticity
Acute TLQP-62 treatment rapidly increases synaptic activity in hippocampal neurons and potentiates the CA1 field excitatory postsynaptic potential (fEPSP) in hippocampal slices, thus facilitating hippocampal synaptic transmission. TLQP-62 also increases dendritic branching and length in cultured hippocampal neurons.
Neurogenesis
TLQP-62 treatment enhances hippocampal neurogenesis both in vitro and in vivo by promoting the proliferation in neuronal progenitor cells.
Antidepressant efficacy
Intrahippocampal TLQP-62 infusion produces both rapid and sustained antidepressant-like effects in the forced swim test. TLQP-62's processed peptide AQEE-30, when given via intracerebroventricular route, also elicits antidepressant-like effects.
Memory and learning
Acute intrahippocampal TLQP-62 infusion enhances memory formation via BDNF/TrkB signaling.
Pain
Acute intrathecal administration of TLQP-62 induces hypersensitivity to mechanical and cold stimuli that recapitulates neuropathic pain, potentially by regulating the excitability of dorsal horn neurons.
Insulin secretion
TLQP-62 treatment increases insulin secretion in cultured insulinoma cells by increasing intracellular calcium mobilization.
References
Peptides | TLQP-62 | [
"Chemistry"
] | 453 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
56,061,455 | https://en.wikipedia.org/wiki/Bouquet%20graph | In mathematics, a bouquet graph B_n, for an integer parameter n, is an undirected graph with one vertex and n edges, all of which are self-loops. It is the graph-theoretic analogue of the topological rose, a space of circles joined at a point. When the context of graph theory is clear, it can be called more simply a bouquet.
Although bouquets have a very simple structure as graphs, they are of some importance in topological graph theory because their graph embeddings can still be non-trivial. In particular, every cellularly embedded graph can be reduced to an embedded bouquet by a partial duality applied to the edges of any spanning tree of the graph, or alternatively by contracting the edges of any spanning tree.
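As a quick illustration of the definition (not of the embedding theory above), a bouquet B_n can be built as a multigraph with n parallel self-loops; the sketch below uses the networkx library:

```python
import networkx as nx

def bouquet(n):
    """Construct the bouquet graph B_n: one vertex carrying n self-loops."""
    g = nx.MultiGraph()
    g.add_node(0)
    g.add_edges_from((0, 0) for _ in range(n))
    return g

b3 = bouquet(3)
print(b3.number_of_nodes(), b3.number_of_edges())  # 1 3
```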
In graph-theoretic approaches to group theory, every Cayley–Serre graph (a variant of Cayley graphs with doubled edges) can be represented as the covering graph of a bouquet.
References
Parametric families of graphs | Bouquet graph | [
"Mathematics"
] | 200 | [
"Graph theory stubs",
"Mathematical relations",
"Graph theory"
] |
56,061,978 | https://en.wikipedia.org/wiki/Cephaloticoccus%20capnophilus | Cephaloticoccus capnophilus is a Gram-negative and non-motile bacterium from the genus of Cephaloticoccus which has been isolated from the gut of the ant Cephalotes varians from the Crocodile Lake National Wildlife Refuge in Florida in the United States.
References
External links
Type strain of Cephaloticoccus capnophilus at BacDive - the Bacterial Diversity Metadatabase
https://lpsn.dsmz.de/species/cephaloticoccus-capnophilus
Verrucomicrobiota
Bacteria described in 2016 | Cephaloticoccus capnophilus | [
"Biology"
] | 123 | [
"Bacteria stubs",
"Bacteria"
] |
65,860,226 | https://en.wikipedia.org/wiki/Grodno%20Azot | Grodno Azot (Belarusian «Гро́дна Азо́т») is an open joint-stock company, Belarusian state-run producer of nitrogen compounds and fertilizers located in Grodno, Belarus.
History
The construction of temporary auxiliary facilities started in October 1960. In January 1965, the first lines of Ammiak-1 and Karbamid-1 workshops were put in operation. In October 1970, Grodno Nitrogen and Fertiliser Plant was transformed into Grodno Chemicals Plant named after Siarhei Prytytski. In May 1975, it was transformed into Grodno Production Association Azot named after Siarhei Prytytski.
In August 2000, the association was changed into a unitary enterprise and in 2002 it became OJSC Grodno Azot.
Sanctions
In 2006, the United States imposed sanctions against nine Belarusian companies, including Grodno Azot and its affiliate Grodno Khimvolokno, for "undermining the democratic process". In October 2015, the sanctions were partially lifted.
After the Belarusian presidential election of August 9, 2020, which was widely regarded as falsified, Grodno Azot workers joined the opposition protests and national strike; however, many were detained and beaten by the police on multiple occasions.
In 2021, the United States indicated that the sanctions against Grodno Azot could be renewed. On 30 March 2021, Grodno Azot's subsidiary announced a tender for the shipment of its goods. One of the terms of the tender was the possibility of not marking the cargo's affiliation with Grodno Azot; according to the tender documentation and media, this condition was caused by the threat of sanctions.
In April 2021, full-scale US sanctions against Grodno Azot and Grodno Khimvolokno were renewed. On 9 August 2021, the US added Grodno Azot CEO Igor Lyashenko to the SDN list.
In September 2021, several Grodno Azot workers were detained. The new arrests were associated with Alexander Lukashenko's threat that workers who revealed ways of bypassing the sanctions would be jailed for a long time.
In December 2021, the European Union sanctioned Grodno Azot and Grodno Khimvolokno. Switzerland joined the EU sanctions on December 20.
In 2022, Japan and Ukraine joined the sanctions against Grodno Azot.
In 2023, several sanctions circumvention schemes involving companies registered in Kyrgyzstan, Uzbekistan, Serbia and Lithuania were identified as a result of journalistic investigations by the Belarusian Investigative Center and Siena. In October 2024, it was reported that Grodno Azot products were being supplied to Ukraine under the guise of being produced in Turkmenistan through a company registered in the United Arab Emirates.
On February 21, 2024, the Court of Justice of the European Union in Luxembourg rejected the claim of Grodno Azot and its subsidiary Khimvolokno, which had demanded the lifting of the European sanctions.
See also
Economy of Belarus
Belneftekhim
References
Bibliography
Official website (in Russian)
Grodno Production Association Azot named after S. O. Prytytski (in Russian)
"Merger of Grodno Azot and Grodno Khimvolokno completed" (in Russian)
Belarus: Amid vicious crackdown on peaceful protesters, authorities arrest workers planning strike
Grodno
Grodno region
Companies of Belarus
Chemical industry
Chemical engineering organizations
Belarusian entities subject to U.S. Department of the Treasury sanctions
Chemical companies of the Soviet Union
Chemical companies of Belarus | Grodno Azot | [
"Chemistry",
"Engineering"
] | 811 | [
"Chemical engineering",
"Chemical engineering organizations",
"nan"
] |
65,865,322 | https://en.wikipedia.org/wiki/Genetically%20modified%20vaccine | Most vaccines consist of viruses that have been attenuated, disabled, weakened or killed in some way so that their virulent properties are no longer effective. A simple genetically modified vaccine, based on a thymidine kinase deficient mutant of pseudorabies virus, was reportedly available as early as 2001 as a commercial vaccine to control Aujeszky's disease in Europe, North America and Japan.
References
Vaccines
Viruses
Genetically modified organisms | Genetically modified vaccine | [
"Engineering",
"Biology"
] | 89 | [
"Viruses",
"Tree of life (biology)",
"Genetically modified organisms",
"Genetic engineering",
"Vaccination",
"Microorganisms",
"Vaccines"
] |
65,866,844 | https://en.wikipedia.org/wiki/Rugate%20filter | A rugate filter, also known as a gradient-index filter, is an optical filter based on a dielectric mirror that selectively reflects specific wavelength ranges of light. This effect is achieved by a periodic, continuous change of the refractive index of the dielectric coating. The word "rugate" is derived from corrugated structures found in nature, which also selectively reflect certain wavelength ranges of light, for example the wings of the Morpho butterfly.
Characteristics
In rugate filters the refractive index varies periodically and continuously as a function of the depth of the mirror coating. This is similar to Bragg mirrors, with the difference that the refractive index profile of a Bragg mirror is discontinuous. The refractive index profiles of a rugate filter and a Bragg mirror are shown in the graph on the right. In Bragg mirrors, the discontinuous transitions are responsible for reflection of incident light, whereas in rugate filters, incident light is reflected throughout the thickness of the coating. According to the Fresnel equations, however, the reflection coefficient is greatest where the greatest change in refractive index occurs. For rugate filters, these are the inflection points in the refractive index profile. The theory of the Bragg mirror leads to a calculation of the wavelength at which the reflection of a rugate filter is greatest. For an alternating sequence of layers in the Bragg mirror, the maximum reflection occurs at the wavelength

$$\lambda_{\max} = 2\,(n_H d_H + n_L d_L).$$

In this equation $n_H$ and $n_L$ stand for the high and low refractive indices of the Bragg mirror, while $d_H$ and $d_L$ are the respective thicknesses of these layers. For the more general case in which the refractive index changes continuously, the previous equation can be rewritten as:

$$\frac{1}{\Lambda}\int_0^{\Lambda} n(z)\,\mathrm{d}z = \frac{\lambda_{\max}}{2\Lambda}.$$
On the left-hand side is the integral of the refractive index over one period of the refractive index profile, divided by the period length $\Lambda$. This term corresponds to the mean value of the refractive index profile. As a sanity check for the correctness of this equation, one can solve the integral for a discrete refractive index profile and substitute the period of a Bragg mirror, $\Lambda = d_H + d_L$.
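Carrying out that sanity check explicitly for a piecewise-constant two-layer profile (using the relation reconstructed above) gives:

```latex
% Sanity check: a piecewise-constant profile over one period
% \Lambda = d_H + d_L recovers the discrete Bragg-mirror formula.
\[
\frac{1}{\Lambda}\int_0^{\Lambda} n(z)\,\mathrm{d}z
  = \frac{n_H d_H + n_L d_L}{d_H + d_L}
  = \frac{\lambda_{\max}}{2\Lambda}
  \;\Longrightarrow\;
  \lambda_{\max} = 2\,(n_H d_H + n_L d_L).
\]
```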
The figure on the right shows the reflection spectra calculated by the transfer-matrix method for the refractive index profiles of a Bragg and a rugate filter. It can be seen that both mirrors have their maximum reflectivity at 700 nm, whereas the rugate filter has a lower bandwidth. For this reason rugate filters are often used as optical notch filters. Furthermore, one can see a smaller peak in the spectrum of the rugate filter at $\lambda_{\max}/2$. This peak is not present in the spectrum of the Bragg mirror because of its discrete layer system, which causes destructive interference at this wavelength. However, Bragg mirrors have secondary maxima at wavelengths of $\lambda_{\max}/3, \lambda_{\max}/5, \dots$, which may be undesirable if only a certain wavelength is to be filtered out. Rugate filters are better suited for this purpose because the sinusoidal refractive index profile has anti-reflection properties similar to those of black silicon. This reduces the intensity of the secondary maxima.
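To make the transfer-matrix comparison concrete, here is a minimal sketch of the reflectance of a discretized sinusoidal profile at normal incidence (assuming numpy; the layer count, mean index, modulation depth, and substrate index are illustrative assumptions, not values from the article, and the design wavelength follows the mean-index relation above):

```python
import numpy as np

def reflectance(n_layers, d_layers, lam, n_in=1.0, n_sub=1.5):
    """Reflectance of a stack of thin homogeneous layers at wavelength lam."""
    k0 = 2 * np.pi / lam
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        phi = k0 * n * d                      # phase thickness of this layer
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    num = n_in * (M[0, 0] + M[0, 1] * n_sub) - (M[1, 0] + M[1, 1] * n_sub)
    den = n_in * (M[0, 0] + M[0, 1] * n_sub) + (M[1, 0] + M[1, 1] * n_sub)
    return abs(num / den) ** 2

# Discretize a sinusoidal rugate profile n(z) = n_mean + dn * sin(2*pi*z/period).
n_mean, dn = 1.75, 0.25                       # mean index and modulation amplitude
lam_target = 700e-9                           # design wavelength: lam = 2 * n_mean * period
period = lam_target / (2 * n_mean)
steps_per_period, n_periods = 20, 30
z = np.arange(steps_per_period * n_periods) / steps_per_period * period
n_profile = n_mean + dn * np.sin(2 * np.pi * z / period)
d = np.full_like(n_profile, period / steps_per_period)

wavelengths = np.linspace(400e-9, 1000e-9, 300)
R = [reflectance(n_profile, d, lam) for lam in wavelengths]
print(f"peak reflectance near {wavelengths[int(np.argmax(R))] * 1e9:.0f} nm")
```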
Production
Rugate filters can be produced by sputtering and chemical vapor deposition. A special challenge is the creation of the continuous refractive index profile. To achieve this, the chemical composition of the mirror must also change continuously as a function of the layer thickness. This can be achieved by continuously changing the gas composition during the deposition process. Another possibility for the production of rugate filters is electrochemical porosification of silicon. Here, the current density during the etching process is selected so that the resulting porosity and thus the refractive index varies sinusoidally with the layer thickness.
References
Optical filters
Electrodynamics
Physical optics | Rugate filter | [
"Chemistry",
"Mathematics"
] | 726 | [
"Electrodynamics",
"Optical filters",
"Filters",
"Dynamical systems"
] |
65,869,118 | https://en.wikipedia.org/wiki/Economic%20evaluation%20of%20time | In organizational behavior and psychology, economic evaluation of time refers to the perception of time in terms of money. (Other forms of evaluation of time are concerned with costs and benefits to the general community of changes in time-dependent activities.)
When a person evaluates their time in monetary terms, time is viewed as a scarce resource that should be used as efficiently as possible to maximize the perceived monetary gains. Therefore, people who evaluate their time in terms of money are more likely to trade their time for money (i.e., workers provide their time to organizations in exchange for money)—as illustrated by research examining time and money trade-offs.
Trading time for money is revealed through people's time use decisions. Across both mundane and major life decisions, people who evaluate their time in terms of money tend to spend their time in ways that give them more money at the expense of acquiring more time (e.g., driving to a cheaper, yet farther away gas station). Research found that, across these decisions, choosing to get more money at the expense of getting more time is associated with lower subjective well-being.
Furthermore, the activation of economic evaluation of time has primarily been studied in organizational behavior research with hourly payment schedules and performance incentives, which are robust predictors of economic evaluation of time. The psychological effects of receiving hourly payment and performance incentives promote the economic evaluations of time, and in turn lead employees to spend their time in ways that maximize personal success and economic gains, such as working more hours, socializing less with loved ones, and volunteering less.
Time and Money
Time is money
The idea that time can be evaluated in monetary terms was first introduced by Benjamin Franklin in his 1748 essay Advice to a Young Tradesman. His famous adage 'time is money', which appeared in this essay, was intended to convey that wasting time in frivolous pursuits results in lost money. He believed that wasting time wasted money in two ways: first, by not earning money; second, by spending money during non-working time.
A great number of researchers argue that this aphorism is true in Western societies. Jean-Claude Usunier noted that "the United States is quite emblematic of the 'time is money' cultures, where time is an economic good. Since time is a scarce resource, or at least perceived as such, people should try to reach its optimal allocation, between competing ways of using it." Consistent with this line of thought, the literature on economic evaluation of time holds that people can treat time and money in similar ways (and that they are tradeable) in certain contexts. Specifically, organizational practices, such as hourly payment schedules and exposure to the concept of 'money', are significant activators of economic evaluation of time.
Differences in time and money
However, a different line of research provides contrasting arguments by showing that people evaluate time and money very differently. In particular, money has a readily exchangeable market where people can buy, sell, borrow, and save, which is impossible to do with time. A lost dollar has potential to be earned back tomorrow, yet a lost minute cannot be recouped.
In a study done by LeClerc, Schmitt, and Dube, people were more risk-averse to uncertainties that involved losses of time compared to money (whereas, according to prospect theory, people are risk-seeking under decisions that involve losses of money). For example, people were less likely to choose to wait 90 minutes over 60 minutes for sure than they were to choose the chance of losing $15 over $10 for sure. Okada and Hoch also found systematic differences in how people spent time versus money, and these differences in spending patterns were explained by the ambiguity in the value of time in contrast to money, which was perceived as more fungible. For example, people believed that they will have more time in the future than now (which leads to greater slack and procrastination), yet people did not overestimate the amount of money they will have in the future than now.
Time and money also differ in their connections to people's self-concepts. People perceive that their temporal expenditures, such as spending leisure time, are more reflective of their self-concept, as compared to their monetary expenditures. For instance, Reed and his colleagues found that people view donations of time (e.g., volunteering) as higher in moral value and more self-expressive than monetary donations. Similarly, Carter and Gilovich found that people's experiences are more critical to their personal narrative than material goods. Therefore, although people do express aspects of their self-identity through purchasing material goods as well, expenditures of time may constitute people's lives more strongly. Taken together, these findings suggest that the claim that people view their time in terms of money may not hold across all contexts.
Factors that Promote Economic Evaluation of Time
Money
Research looking at the relationship between time and money found that activating the concept of money can heighten people's focus on the goal of maximizing economic gains. Thinking about their time in terms of money (economic evaluation of time), subsequently impacts people's decisions about time-use and attitude toward others (see 'Consequences' section). The focus on money can be induced in laboratory settings, as well as in organizational contexts, such as under hourly payment schedules and performance incentives, which are explained in detail below.
Laboratory tasks
People can be primed to think about money through simple laboratory procedures. Studies found that people who were asked to formulate sentences using money-relevant words (e.g., price) versus time-relevant words (e.g., clock) primed people to think about money, and they became more self-focused in their decisions about time use. For example, participants who were primed to think about money spent more time working and less time socializing with friends. They were also far less likely to help others or seek help. There are various other manipulation techniques used to prime the concept of money. The 'descrambling task' consists of 30 sets of five jumbled words, where participants are asked to formulate sensible phrases using four of the five words. In the control conditions, all 30 of the phrases primed neutral concepts (e.g., “cold it desk outside is” descrambled to “it is cold outside”). In the money-prime condition, 15 of the phrases primed the concept of money (e.g., “high a salary desk paying” descrambled to “a high-paying salary”). Other studies presented participants with money bills versus paper sheets to prime the concept of money.
However, some recent studies in the money-priming research have failed to replicate these results. Across several experiments, the same manipulation (e.g., showing an image of a $100 bill) did reliably activate the concept of money; however, it did not have consistent effects on several dependent measures including subjective wealth, self-sufficiency, agency, and communion, which are theorized to be influenced by the thought of money. Furthermore, socioeconomic factors such as gender, socioeconomic status, and political ideology did not moderate these effects of money primes. Since variance in study populations and methods is inevitable across experiments, these laboratory studies should therefore be interpreted with caution. Caruso and his colleagues suggest that using large-scale pre-registered experiments and assessing wide-ranging individual factors within the same heterogeneous sample will be helpful in identifying meaningful variations among the dependent variables.
Performance incentive
Research found that certain organizational practices promote economic evaluation of time. One such factor is performance incentives, a ubiquitous payment system used in various domains including education, health, and management. The main alternative to performance incentives is task-based incentive (also known as fixed incentive)—a fixed amount of payment for completing a task. Performance incentives, as compared to task-based incentives, increase people's attention to reward objects, which in turn heighten their desire for money. This desire then motivates people's focus on earning monetary and material rewards, and decreases prosocial spending like making donations.
Hourly payment
One of the most salient features in organizations that induce the economic evaluation of time is hourly pay, a type of payment schedule that approximately 58% of employees work under in the United States. Time and money connection is particularly salient under hourly payment because people's income is a direct function of the number of hours they worked, multiplied by their rate of pay. Sanford DeVoe and Jeffery Pfeffer found that workers who were paid by the hour showed more similarity in how they evaluated time and money, as compared to workers who were paid by salary. Specifically, people who were paid by the hour (vs. salary) applied mental accounting rules to time that are typically only applied to money. Participants were asked to rate their endorsement on a mental accounting questionnaire (e.g., "If I have wasted money [time] on a particular activity or item, I try to save it on another activity or item."), where DeVoe and Pfeffer found that hourly wage participants showed high similarity in how they applied mental accounting rules to both time and money, whereas salaried participants did not apply mental accounting rules to time.
Furthermore, research looking at the economic evaluation of time proved that an 'economic mindset' can be induced in laboratory settings through hourly wage calculations. Although not everyone is paid by the hour, every worker has an implicit hourly wage—their total income divided by the number of hours they work. Therefore, participants who calculated their hourly wage in an experiment versus those who did not calculate their hourly wage were more likely to adopt an economic mindset and were more willing to trade their time for money.
DeVoe and Pfeffer also showed that the mechanism for how hourly wage payment activates economic evaluation of time is the people's viewing of themselves as the economic evaluator in their decision-making. This suggests that the mere activation of an economic concept, such as hourly wage in general or of another person, itself cannot activate the economic evaluation of time. Rather, a person's own prior experience with hourly payment or calculating one's own hourly wage (vs. another person's hourly wage) is what activates the economic evaluation. Therefore, the degree to which hourly payment impacts an individual's attitudes and behaviors depends on the extent to which the economic evaluation becomes more central to one's self-concept.
Consequences
Devaluing non-compensated time
Economic evaluation of time impacts people's decisions about time use. A salient outcome of adopting an economic mindset, or thinking about time in terms of money, is the devaluing of non-compensated time. Results from a survey of a nationally representative sample of Americans from the May 2001 Current Population Survey (CPS) Work Schedule Supplement showed that people who were paid by the hour, compared to those not paid by the hour, weighed the monetary returns more strongly when making decisions about time use. Therefore, they showed greater willingness to give up their free time to earn more money ("Work more hours but earn more money" vs. "Work fewer hours but earn less money"). Another study demonstrated that technical contractors who sold their services by the hour came to evaluate their time in terms of money, which led the contractors to devalue non-compensated time (e.g., volunteering). These non-compensated time use domains are discussed below.
Volunteering
People who are paid by the hour (vs. salary) volunteer less. In the laboratory, participants who calculated their hourly wage (vs. those who did not calculate their hourly wage), volunteered less and also reported that they are less willing to volunteer their time.
Pro-environmental behavior
People who are paid by the hour are less likely to engage in pro-environmental behaviors, such as recycling. Simply asking participants to calculate their hourly wage lowered their willingness to engage in environmental behaviors as well as their actual behaviors in recycling scrap papers in a laboratory experiment. This is due to the hourly participants' spontaneous recognition of the trade-offs they are making with every minute of their time. People feel as if they are losing money when engaging in environmental activities because these are non-compensated.
Social interaction
Economic evaluation of time undermines social interactions. Thinking about money increases people's willingness to work and reduces their willingness to spend time with others. In an experiment done by Cassie Mogilner, participants who thought about money-related words (e.g., price), compared to participants who thought about time-related words (e.g., clock), were significantly more likely to spend time working more and socializing less with loved ones. As such, people who are focused on money are less interpersonally attuned—they are less caring and warm and rather in a business mindset.
Well-being
Economic evaluation of time has multiple negative implications for well-being. Economic evaluation of time activates the human motivation system that is associated with self-focused values. People with an economic mindset therefore tend to prioritize personal achievement more than the well-being of others and spend time in ways that maximize personal gains. This tendency negatively contributes to well-being.
First, evaluating time in terms of money motivates people to work more because every hour they put into non-compensated activities is lost money. Although this may be useful when trying to meet a short deadline at work, work time does not typically translate into happiness. However, spending time with loved ones, such as family and friends, spending time volunteering, and engaging in pro-environmental behaviors have been found to contribute to greater happiness. Daniel Kahneman also demonstrated that prosocial behavior and socializing with friends are among the happiest parts of most people's days. Economic evaluation of time that decreases these happiness-promoting activities may therefore have grave consequences for well-being.
References
Economics and time
Psychological effects
Organizational behavior
Personal finance | Economic evaluation of time | [
"Physics",
"Biology"
] | 5,251 | [
"Behavior",
"Physical quantities",
"Time",
"Organizational behavior",
"Economics and time",
"Spacetime",
"Human behavior"
] |
64,388,125 | https://en.wikipedia.org/wiki/Ciechocinek%20graduation%20towers | The Ciechocinek graduation towers are a complex of three brine graduation towers, erected in the nineteenth century in Ciechocinek, in the Kuyavian-Pomeranian Voivodeship, Poland. They constitute the largest wooden structure of this type in Europe. The complex of graduation towers and salt breweries, together with two surrounding parks, are designated as a Historic Monument.
History
The towers were designed by Jakub Graff, professor of the Mining Academy in Kielce, based on the brine sources discovered here in the second half of the eighteenth century, although the local community had extracted and brewed salt as early as the thirteenth century under permissions granted by Konrad I Mazowiecki.
The graduation tower I with a capacity of and the graduation tower II with a capacity of , were built between 1824 and 1828. The graduation tower III with a capacity of , was built in 1859. The base of the towers is made up of 7000 oak piles driven into the ground, on which a spruce-and-pine structure planted with blackthorn was placed, where brine flows. The towers are arranged in the shape of a horseshoe with a total length of ; each is high. The brine with a concentration of 5.8% is pumped a depth of in spring No. 11 (the so-called Grzybek fountain) into dedicated channels at the top of the graduation towers. The brine seeps on the walls of the towers, on the blackthorn, and evaporates under the influence of wind and sun, creating a microclimate rich in iodine, sodium, chlorine and bromine, thanks to which a natural healing inhalatorium developed.
The towers are the second stage in the salt production process, where the brine concentration is gradually increased. The smallest concentration occurs at tower No. I (9%); the brine concentration increases at tower No. III (16%) and becomes greatest at tower No. II (30%). From the latter, the brine flows in pipelines to the salt-brewing plant (the third stage of salt production) where salt, sludge and therapeutic lye are produced. The first stage in the process of salt production is pumping brine from the source No. 11 "Grzybek fountain". The graduation towers also act as a giant air filter. In 1996, radioactive caesium isotopes (Cs-134 and Cs-137) from the Chernobyl nuclear power plant disaster (1986) were detected in the sludge and salt from the towers; however, their concentration in these products did not pose a threat to human health.
In 2017, the complex of graduation towers and salt breweries, together with the Tężniowy and Zdrojowy parks, was entered on the list of Historic Monuments.
In 2019, the Ciechocinek Health Resort obtained PLN 15 million from European funds for the renovation of the graduation towers (total cost of the project: 21.6 million). The project "Modernization and extension of the infrastructure of the graduation tower complex in Ciechocinek" includes renovation of tower No. I (replacement of blackthorn), tower No. III (general overhaul: replacement of structural elements and reinforcement of foundations) and the brine-pumping-station building, as well as paths and areas near the towers and the pumping station. Gardening work will also be carried out, and an installation to illuminate the towers at night will be constructed. The work, scheduled from March 2020 to December 2021, started with tower No. III.
Gallery
References
Salt production
Aleksandrów County
Buildings and structures in Kuyavian-Pomeranian Voivodeship | Ciechocinek graduation towers | [
"Chemistry"
] | 751 | [
"Salt production",
"Salts"
] |
64,388,266 | https://en.wikipedia.org/wiki/Graph%20Fourier%20transform | In mathematics, the graph Fourier transform is a mathematical transform which eigendecomposes the Laplacian matrix of a graph into eigenvalues and eigenvectors. Analogously to the classical Fourier transform, the eigenvalues represent frequencies and eigenvectors form what is known as a graph Fourier basis.
The graph Fourier transform is important in spectral graph theory. It is widely applied in the recent study of graph-structured learning algorithms, such as the widely employed graph convolutional networks.
Definition
Given an undirected weighted graph $G = (V, E)$, where $V$ is the set of nodes with $|V| = N$ ($N$ being the number of nodes) and $E$ is the set of edges, a graph signal $f : V \to \mathbb{R}$ is a function defined on the vertices of the graph $G$. The signal $f$ maps every vertex $v_i$ to a real number $f(i)$. Any graph signal can be projected on the eigenvectors of the Laplacian matrix $L$. Let $\lambda_\ell$ and $u_\ell$ be the $\ell$-th eigenvalue and eigenvector of the Laplacian matrix $L$ (the eigenvalues are sorted in increasing order, i.e., $0 = \lambda_0 \le \lambda_1 \le \cdots \le \lambda_{N-1}$); the graph Fourier transform (GFT) of a graph signal $f$ on the vertices of $G$ is the expansion of $f$ in terms of the eigenfunctions of $L$. It is defined as:

$$\hat{f}(\lambda_\ell) = \langle f, u_\ell \rangle = \sum_{i=1}^{N} f(i)\, u_\ell^{*}(i),$$

where $\ell = 0, 1, \dots, N-1$.
Since $L$ is a real symmetric matrix, its eigenvectors form an orthogonal basis. Hence an inverse graph Fourier transform (IGFT) exists, and it is written as:

$$f(i) = \sum_{\ell=0}^{N-1} \hat{f}(\lambda_\ell)\, u_\ell(i).$$
Analogously to the classical Fourier transform, the graph Fourier transform provides a way to represent a signal in two different domains: the vertex domain and the graph spectral domain. Note that the definition of the graph Fourier transform and its inverse depend on the choice of Laplacian eigenvectors, which are not necessarily unique. The eigenvectors of the normalized Laplacian matrix are also a possible basis to define the forward and inverse graph Fourier transform.
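As an illustration, here is a minimal numpy sketch of the forward and inverse transform (the 4-node path graph and the signal values are arbitrary choices of mine, not from the article):

```python
import numpy as np

# Weighted adjacency matrix of a 4-node path graph (an arbitrary example).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W            # combinatorial Laplacian L = D - W

# eigh returns eigenvalues in increasing order and orthonormal eigenvectors.
lam, U = np.linalg.eigh(L)

f = np.array([1.0, 2.0, 0.0, -1.0])       # a graph signal: one value per vertex
f_hat = U.T @ f                           # GFT: f_hat[l] = <f, u_l>
f_rec = U @ f_hat                         # inverse GFT recovers the signal
assert np.allclose(f, f_rec)

# Parseval: the signal's energy is preserved in the graph spectral domain.
assert np.isclose(f @ f, f_hat @ f_hat)
```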
Properties
Parseval's identity
The Parseval relation holds for the graph Fourier transform, that is, for any $f, g \in \mathbb{R}^N$

$$\langle f, g \rangle = \langle \hat{f}, \hat{g} \rangle.$$

This gives us Parseval's identity:

$$\|f\|_2^2 = \|\hat{f}\|_2^2.$$
Generalized convolution operator
The definition of convolution between two functions $f$ and $g$ cannot be directly applied to graph signals, because the signal translation is not defined in the context of graphs. However, by replacing the complex exponential shift in the classical Fourier transform with the graph Laplacian eigenvectors, the convolution of two graph signals can be defined as:

$$(f * g)(i) = \sum_{\ell=0}^{N-1} \hat{f}(\lambda_\ell)\, \hat{g}(\lambda_\ell)\, u_\ell(i).$$
Properties of the convolution operator
The generalized convolution operator satisfies the following properties:
Generalized convolution in the vertex domain is multiplication in the graph spectral domain: $\widehat{f * g} = \hat{f}\, \hat{g}$
Commutativity: $f * g = g * f$
Distributivity: $f * (g + h) = f * g + f * h$
Associativity: $(f * g) * h = f * (g * h)$
Associativity with scalar multiplication: $\alpha (f * g) = (\alpha f) * g = f * (\alpha g)$, for any $\alpha \in \mathbb{R}$.
Multiplicative identity: $f * g_0 = f$, where $g_0$ is an identity for the generalized convolution operator (its GFT equals one at every graph frequency).
The sum of the generalized convolution of two signals is a constant times the product of the sums of the two signals: $\sum_{i=1}^{N} (f * g)(i) = \frac{1}{\sqrt{N}} \Big( \sum_{i=1}^{N} f(i) \Big) \Big( \sum_{i=1}^{N} g(i) \Big)$
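The first property above is easy to check numerically. The following sketch (assuming numpy; the triangle graph and random signals are my own construction) computes the generalized convolution through the eigenbasis and verifies that it is multiplication in the spectral domain:

```python
import numpy as np

W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # triangle graph
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

rng = np.random.default_rng(1)
f, g = rng.standard_normal(3), rng.standard_normal(3)

conv = U @ ((U.T @ f) * (U.T @ g))       # (f*g)(i) = sum_l fhat(l) ghat(l) u_l(i)
assert np.allclose(U.T @ conv, (U.T @ f) * (U.T @ g))   # hat(f*g) = fhat * ghat
assert np.allclose(conv, U @ ((U.T @ g) * (U.T @ f)))   # commutativity
```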
Generalized translation operator
As previously stated, the classical translation operator cannot be generalized to the graph setting. One way to define a generalized translation operator is through generalized convolution with a delta function centered at vertex $n$:

$$(T_n g)(i) = \sqrt{N}\,(g * \delta_n)(i) = \sqrt{N} \sum_{\ell=0}^{N-1} \hat{g}(\lambda_\ell)\, u_\ell^{*}(n)\, u_\ell(i),$$

where

$$\delta_n(i) = \begin{cases} 1, & i = n, \\ 0, & \text{otherwise.} \end{cases}$$

The normalization constant $\sqrt{N}$ ensures that the translation operator preserves the signal mean, i.e., $\sum_{i=1}^{N} (T_n g)(i) = \sum_{i=1}^{N} g(i)$.
Properties of the translation operator
The generalized translation operator satisfies the following properties:
For any $f, g \in \mathbb{R}^N$ and $n, m \in \{1, \dots, N\}$:

$$T_n(f * g) = (T_n f) * g = f * (T_n g), \qquad T_n T_m g = T_m T_n g.$$
Applications
Image compression
Representing signals in frequency domain is a common approach to data compression. As graph signals can be sparse in their graph spectral domain, the graph Fourier transform can also be used for image compression.
Graph noise reduction
Similar to classical noise reduction of signals based on Fourier transform, graph filters based on the graph Fourier transform can be designed for graph signal denoising.
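As a hedged illustration of such a filter (a generic Tikhonov-style construction, not a specific published design; the graph, signal, and the parameter gamma are arbitrary choices of mine), one can attenuate high graph frequencies with the spectral response h(λ) = 1/(1 + γλ) and transform back:

```python
import numpy as np

def low_pass_denoise(L, y, gamma=5.0):
    """Graph low-pass filter: apply h(lam) = 1/(1 + gamma*lam) in the spectral domain."""
    lam, U = np.linalg.eigh(L)
    return U @ ((1.0 / (1.0 + gamma * lam)) * (U.T @ y))

# Toy usage on a 3-node triangle graph with a noisy constant signal.
W = np.ones((3, 3)) - np.eye(3)
L = np.diag(W.sum(axis=1)) - W
rng = np.random.default_rng(0)
y = np.ones(3) + 0.3 * rng.standard_normal(3)
print(low_pass_denoise(L, y))            # pulled back toward the smooth mean
```

This response equals 1 at λ = 0 (the smooth component passes through) and decays for larger eigenvalues; it is the closed-form solution of the regularized problem argmin over x of ||x - y||² + γ xᵀLx.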
Data classification
As the graph Fourier transform enables the definition of convolution on graphs, it makes it possible to adapt conventional convolutional neural networks (CNNs) to work on graphs. Graph-structured semi-supervised learning algorithms such as the graph convolutional network (GCN) are able to propagate the labels of a graph signal throughout the graph with a small subset of labeled nodes, theoretically operating as a first-order approximation of spectral graph convolutions without computing the graph Laplacian and its eigendecomposition.
Toolbox
GSPBOX is a toolbox for signal processing of graphs, including the graph Fourier transform. It supports both Python and MATLAB languages.
References
External links
DeepGraphLibrary A free Python package built for easy implementation of graph neural networks.
Graph theory
Fourier analysis | Graph Fourier transform | [
"Mathematics"
] | 935 | [
"Discrete mathematics",
"Mathematical relations",
"Graph theory",
"Combinatorics"
] |
64,393,952 | https://en.wikipedia.org/wiki/List%20of%20alkanols | This list is ordered by the number of carbon atoms in an alcohol.
C1
Methanol
C2
Ethanol
C3
1-Propanol
Isopropyl alcohol
C4
n-Butanol
Isobutanol
sec-Butanol
tert-Butyl alcohol
C5
1-Pentanol
Isoamyl alcohol
2-Methyl-1-butanol
Neopentyl alcohol
2-Pentanol
3-Methyl-2-butanol
3-Pentanol
tert-Amyl alcohol
C6
1-Hexanol
2-Hexanol
3-Hexanol
2-Methyl-1-pentanol
3-Methyl-1-pentanol
4-Methyl-1-pentanol
2-Methyl-2-pentanol
3-Methyl-2-pentanol
4-Methyl-2-pentanol
2-Methyl-3-pentanol
3-Methyl-3-pentanol
2,2-Dimethyl-1-butanol
2,3-Dimethyl-1-butanol
3,3-Dimethyl-1-butanol
Alcohols | List of alkanols | [
"Chemistry"
] | 239 | [
"nan"
] |
47,532,636 | https://en.wikipedia.org/wiki/GoMentum%20Station | GoMentum Station is a testing ground for connected and autonomous vehicles at the former Concord Naval Weapons Station (CNWS) in Concord, California, United States. The property was acquired and repurposed by the Contra Costa Transportation Authority.
In October 2014, the Contra Costa Transportation Authority announced that the GoMentum Station proving grounds would be used to test self-driving cars; according to them, "The public will not have access to the test site, and the self-driving cars will be restricted to the test bed site. With of testing area and of paved roadway, the CNWS is currently the largest secure test bed site in the United States". Mercedes-Benz is reported to have licenses to test new driving technology, including smart infrastructure such as traffic signals that communicate with cars. Among the site's other notable features: "a -long roadway is great for testing high-speed driving, and a pair of -long tunnels" for sensor testing.
Among the roughly 30 partners listed on the company's site are automakers Toyota and Honda, ridesharing companies Uber and Lyft and China-based autonomous driving company Baidu. In summer 2015, reports suggested the Apple electric car project was interested in using the site, as members of Apple's Special Project group were reported to have met GoMentum representatives but there were no subsequent reports of Apple personnel and vehicles actually using the site.
In August 2019, GoMentum announced the October launch of its V2X (vehicles-to-everything) testing facility.
References
External links
Video
Movie from Honda about an autonomous driving test by the company on the grounds of GoMentum Station, 23 July 2015; accessed on 15 January 2016 – provides an interesting insight into the area.
Buildings and structures in Concord, California
Self-driving cars | GoMentum Station | [
"Engineering"
] | 364 | [
"Automotive engineering",
"Self-driving cars"
] |
57,797,047 | https://en.wikipedia.org/wiki/Space%20jellyfish | A space jellyfish (also jellyfish UFO or rocket jellyfish) is a rocket launch-related phenomenon caused by sunlight reflecting off the high-altitude rocket plume gases emitted by a launching rocket during morning or evening twilight. The observer is in darkness, while the exhaust plumes at high altitudes are still in direct sunlight. This luminous apparition is reminiscent of a jellyfish. Sightings of the phenomenon have led to panic, fear of nuclear missile strike, and reports of unidentified flying objects.
List of rocket launches causing space jellyfish
See also
Noctilucent cloud
Exhaust gas
Contrail
Twilight phenomenon
Notes
References
Further reading
External links
Associated Press, , 10 December 2009
News4JAX (WJXT4), , 6 May 2022
UFO-related phenomena
Atmospheric optical phenomena
Rocketry
Smoke | Space jellyfish | [
"Physics",
"Engineering"
] | 162 | [
"Physical phenomena",
"Earth phenomena",
"Optical phenomena",
"Rocketry",
"Aerospace engineering",
"Atmospheric optical phenomena"
] |
53,190,900 | https://en.wikipedia.org/wiki/Minor%20losses%20in%20pipe%20flow | Minor losses in pipe flow are a major part of calculating the flow, pressure, or energy reduction in piping systems. Liquid moving through pipes carries momentum and energy due to the forces acting upon it, such as pressure and gravity. Just as certain aspects of the system can increase the fluid's energy, there are components of the system that act against the fluid and reduce its energy, velocity, or momentum. Friction and minor losses in pipes are major contributing factors.
Friction Losses
Before being able to use the minor head losses in an equation, the losses in the system due to friction must also be calculated.
Equation for friction losses:

$$h_f = \frac{f\,L\,v^2}{2\,g\,R}$$

$h_f$ = Frictional head loss
$v$ = Downstream velocity
$g$ = Gravity of Earth
$R$ = Hydraulic radius
$L$ = Total length of piping
$f$ = Fanning friction factor
Total Head Loss
After both minor losses and friction losses have been calculated, these values can be summed to find the total head loss.
Equation for total head loss, $h_L$, can be simplified and rewritten as:

$$h_L = \frac{v^2}{2g}\left(\frac{f\,L}{R} + \sum K\right)$$

$h_L$ = Total head loss
$v$ = Downstream velocity
$g$ = Gravity of Earth
$R$ = Hydraulic radius
$L$ = Total length of piping
$f$ = Fanning friction factor
$\sum K$ = Sum of all kinetic energy factors in system
Once calculated, the total head loss can be used to solve the Bernoulli equation and find unknown values of the system.
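A minimal sketch of that computation in Python (the pipe dimensions, friction factor, and K values below are illustrative assumptions, not values from the article):

```python
def total_head_loss(v, L, R, f, K_sum, g=9.81):
    """h_L = (v**2 / (2*g)) * (f*L/R + K_sum): friction plus minor losses."""
    return (v**2 / (2 * g)) * (f * L / R + K_sum)

# Example: water at 2 m/s in a 50 m pipe of 0.1 m diameter (R = D/4 for a
# full circular pipe), Fanning friction factor 0.005, with one entrance
# (K = 0.5) and two elbows (K = 0.9 each) as the minor-loss components.
v, L_pipe, D, f = 2.0, 50.0, 0.1, 0.005
R = D / 4
K_sum = 0.5 + 2 * 0.9
print(f"total head loss: {total_head_loss(v, L_pipe, R, f, K_sum):.2f} m")  # ~2.51 m
```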
See also
Hydraulic head
Total dynamic head
Notes
Piping
Fluid dynamics | Minor losses in pipe flow | [
"Chemistry",
"Engineering"
] | 265 | [
"Building engineering",
"Chemical engineering",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
53,192,145 | https://en.wikipedia.org/wiki/Separation%20of%20prescribing%20and%20dispensing | Separation of prescribing and dispensing, also called dispensing separation, is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug.
In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries it is traditional for physicians to also provide drugs.
Contemporary research indicates that separation of prescribing and dispensing lowers expenditure on drugs, which is explained by the fact physician-prescribing gives doctors an incentive to over-prescribe. This is an example of a conflict of interest in the healthcare industry leading to unnecessary health care.
Background
In many Western jurisdictions such as the United States, pharmacists are regulated separately from physicians. These jurisdictions also usually specify that only pharmacists may supply scheduled pharmaceuticals to the public, and that pharmacists cannot form business partnerships with physicians or give them "kickback" payments. In other words, the diagnosing physicians' role is supposed to extend only as far as providing proper prescriptions to patients, who are then entitled to purchase the prescribed drugs at the pharmacies of their choice.
However, the American Medical Association (AMA) Code of Ethics provides that physicians may dispense drugs within their office practices as long as there is no patient exploitation and patients have the right to a written prescription that can be filled elsewhere. 7 to 10 percent of American physicians' practices reportedly dispense drugs on their own.
In some rural areas in the United Kingdom, there are dispensing physicians who are allowed to both prescribe and dispense prescription-only medicines to their patients from within their practices. The law requires that the GP practice be located in a designated rural area and that there is also a specified, minimum distance (currently 1 mile; 1.6 kilometres) between a patient's home and the nearest retail pharmacy. See Dispensing Doctors' Association.
This law also exists in Austria for general physicians if the nearest pharmacy is more than 4 kilometers ( miles) away, or where none is registered in the city. Switzerland also allows dispensing physicians in several cantons.
In other jurisdictions (particularly in Asian countries such as China, Malaysia, and Singapore), doctors are allowed to dispense drugs themselves and the practice of pharmacy is sometimes integrated with that of the physician, particularly in traditional Chinese medicine.
In Canada it is common for a medical clinic and a pharmacy to be located together and for the ownership in both enterprises to be common, but licensed separately.
The reason for the majority rule is the high risk of a conflict of interest and/or the avoidance of absolute powers. Otherwise, the physician has a financial self-interest in "diagnosing" as many conditions as possible, and in exaggerating their seriousness, because he or she can then sell more medications to the patient. Such self-interest directly conflicts with the patient's interest in obtaining cost-effective medication and avoiding the unnecessary use of medication that may have side-effects. This system reflects much similarity to the checks and balances system of the U.S. and many other governments.
A campaign for separation has begun in many countries and has already been successful (as in Korea). As many of the remaining nations move towards separation, resistance and lobbying from dispensing doctors who have pecuniary interests may prove a major stumbling block (e.g. in Malaysia).
Experience in Asian countries
In many Asian countries there is not a traditional separation between physician and pharmacist. In Taiwan, a plan initiated in March 1997 experimented with separating doctors who prescribe from pharmacists who fulfill prescriptions, on the theory that this would reduce unnecessary health care. The plan had mixed results. The South Korean government passed a law in 2000 which separated drug prescribing from dispensing. The passing of the law achieved some of its intentions and also caused problems in unexpected ways. Japan is also experimenting with separation of prescribing and dispensing. In Malaysia, separation of prescribing and dispensing only occurs in government hospitals.
References
Further reading
External links
Medical regulation
Pharmacy
Separation of powers | Separation of prescribing and dispensing | [
"Chemistry"
] | 866 | [
"Pharmacology",
"Pharmacy"
] |
53,193,155 | https://en.wikipedia.org/wiki/Interchange%20instability | The interchange instability, also known as the Kruskal–Schwarzschild instability or flute instability, is a type of plasma instability seen in magnetic fusion energy that is driven by the gradients in the magnetic pressure in areas where the confining magnetic field is curved.
The name of the instability refers to the action of the plasma changing position with the magnetic field lines (i.e. an interchange of the lines of force in space) without significant disturbance to the geometry of the external field. The instability causes flute-like structures to appear on the surface of the plasma, hence it is also referred to as the flute instability. The interchange instability is a key issue in the field of fusion energy, where magnetic fields are used to confine a plasma in a volume surrounded by the field.
The basic concept was first noted in a 1954 paper by Martin David Kruskal and Martin Schwarzschild, who demonstrated that a situation similar to the Rayleigh–Taylor instability in classic fluids existed in magnetically confined plasmas. The problem can occur anywhere where the magnetic field is concave with the plasma on the inside of the curve. Edward Teller gave a talk on the issue at a meeting later that year, pointing out that it appeared to be an issue in most of the fusion devices being studied at that time. He used the analogy of rubber bands on the outside of a blob of jelly; there is a natural tendency for the bands to snap together and eject the jelly from the center.
Most machines of that era suffered from other instabilities that were far more powerful, and whether or not the interchange instability was taking place could not be confirmed. This was finally demonstrated beyond doubt by a Soviet magnetic mirror machine during an international meeting in 1961. When the US delegation stated they were not seeing this problem in their mirrors, it was pointed out they were making an error in the use of their instrumentation. When that was considered, it was clear the US experiments were also being affected by the same problem. This led to a series of new mirror designs, as well as modifications to other designs like the stellarator to add negative curvature. These had cusp-shaped fields so that the plasma was contained within convex fields, the so-called "magnetic well" configuration.
In modern designs, the interchange instability is suppressed by the complex shaping of the fields. In the tokamak design there are still areas of "bad curvature", but particles within the plasma spend only a short time in those areas before being circulated to an area of "good curvature". Modern stellarators use similar configurations, differing from tokamaks largely in how that shaping is created.
Basic concept
Magnetic confinement systems attempt to hold the plasma within a vacuum chamber using magnetic fields. The plasma particles are electrically charged, and thus see a transverse force from the field due to the Lorentz force. When the particle's original linear motion is superimposed on this transverse force, its resulting path through space is a helix, or corkscrew shape. Such a field will thus trap the plasma by forcing it to flow along the lines.
One can produce a linear field using an electromagnet in the form of a solenoid wrapped around a tubular vacuum chamber. In this case, the plasma will orbit the lines running down the center of the chamber and be prevented from moving outward towards the walls. This does not confine the plasma along the length of the tube, and it will rapidly flow out the ends. Designs that prevented this from occurring appeared in the early 1950s and experiments began in earnest in 1953. However, all of these devices proved to leak plasma at rates far higher than expected.
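As a rough illustration of the scales involved in such helical motion, the following sketch evaluates the textbook gyroradius and gyrofrequency formulas, r = m·v⊥/(q·B) and ω = q·B/m, for a deuteron; the field strength and speed are illustrative assumptions, not values from the article:

```python
# Textbook relations: gyroradius r = m*v_perp/(q*B), gyrofrequency w = q*B/m.
q = 1.602e-19      # elementary charge, C
m = 3.344e-27      # deuteron mass, kg
B = 1.0            # magnetic field strength, T (illustrative)
v_perp = 1.0e6     # speed perpendicular to the field, m/s (illustrative)

r = m * v_perp / (q * B)
w = q * B / m
print(f"gyroradius ~ {r * 1e3:.1f} mm, gyrofrequency ~ {w / 1e6:.1f} Mrad/s")
# -> gyroradius ~ 20.9 mm, gyrofrequency ~ 47.9 Mrad/s
```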
In May 1954, Martin David Kruskal and Martin Schwarzschild published a paper demonstrating two effects that meant plasmas in magnetic fields were inherently unstable. One of the two effects, which became known as the kink instability, was already being seen in early z-pinch experiments and occurred slowly enough to be captured on movie film. The topic of stability immediately gained significance in the field.
The other instability noted in the paper considered an infinite sheet of plasma held up against gravity by a magnetic field. It suggested there would be behaviour similar to that in classical physics when one heavy fluid is supported by a lighter one, which leads to the Rayleigh–Taylor instability. Any small vertical disturbance in an initially uniform field would result in the field pulling on the charges laterally and causing the initial disturbance to be further magnified. As large sheets of plasma were not common in existing devices, the outcome of this effect was not immediately obvious. It was not long before a corollary became obvious; the initial disturbance resulted in a curved interface between the plasma and the external field, and this was inherent to any design that had a convex area in the field.
In October 1954 a meeting of the still-secret Project Sherwood researchers was held at Princeton University's Gun Club building. Edward Teller brought up the topic of this instability and noted that two of the major designs being considered, the stellarator and the magnetic mirror, both had large areas of such curvature and thus should be expected to be inherently unstable. He further illustrated it by comparing the situation to jello being held together with rubber bands; while such a setup might be created, any slight disturbance would cause the rubber bands to contract and eject the jello. This exchange of position appeared to be identical to the mirror case in particular, where the plasma naturally wanted to expand while the magnetic fields had an internal tension.
No such behaviour had been seen in experimental devices, but as the situation was considered further, it became clear it would be more obvious in areas of greater curvature, and existing devices used relatively weak magnetic fields with relatively flat field lines. This nevertheless presented a significant problem; a key measure of the attractiveness of a reactor design was its beta, the ratio of the plasma pressure to the magnetic field pressure. Higher beta meant more plasma confined for the same magnet, which was a significant factor in cost, but higher beta also implied more curvature in these devices, which would make them increasingly unstable. This might force reactors to operate at low beta and be doomed to be economically unattractive.
As the magnitude of the problem became clear, the meeting turned to the question of whether or not there was any arrangement that was naturally stable. Jim Tuck was able to provide a solution; the picket fence reactor concept had been developed as a solution to another problem, bremsstrahlung losses, but he pointed out that its field arrangement would be naturally stable under the conditions shown in the Kruskal/Schwarzschild paper. Nevertheless, as Amasa Bishop noted, the correctness of the simplified model was then called into question and led to further study. The answer appeared at a follow-up meeting at Berkeley in February 1955, where Harold Grad of New York University, Conrad Longmire of Los Alamos and Edward A. Frieman of Princeton presented independent developments that all proved the effect to be real and, worse, to be expected at any beta, not just high beta. Further work at Los Alamos demonstrated that the effect should be seen in both the mirror and the stellarator.
The effect is most obvious in the magnetic mirror device. The mirror has a field that runs along the open center of the cylinder and bundles together at the ends. In the center of the chamber the particles follow the lines and flow towards either end of the device. There, the increasing magnetic density causes them to "reflect", reversing direction and flowing back into the center again. Ideally, this will keep the plasma confined indefinitely, but even in theory there is a critical angle between the particle trajectory and the axis of the mirror at which particles can escape. Initial calculations showed that the loss rate through this process would be small enough to not be a concern.
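The size of this critical angle follows from the conservation of energy and of the magnetic moment; for a mirror ratio $R_m = B_{\max}/B_{\min}$, particles whose pitch angle $\theta$ at the midplane satisfies
$$\sin^2\theta < \frac{B_{\min}}{B_{\max}} = \frac{1}{R_m}$$
lie in the "loss cone" and escape out the ends. This is a standard textbook result, stated here for concreteness rather than taken from the calculations referenced above.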
In practice, all mirror machines demonstrated a loss rate far higher than these calculations suggested. The interchange instability was one of the major reasons for these losses. The mirror field has a cigar shape, with increasing curvature at the ends. When the plasma is located in its design location, the electrons and ions are roughly mixed. However, if the plasma is displaced, the non-uniform nature of the field means the ions' larger orbital radius takes them outside the confinement area while the electrons remain inside. It is possible that an ion will hit the wall of the container, removing it from the plasma. If this occurs, the outer edge of the plasma is now net negatively charged, attracting more of the positively charged ions, which then escape as well.
This effect allows even a tiny displacement to drive the entire plasma mass to the walls of the container. The same effect occurs in any reactor design where the plasma is within a field of sufficient curvature, which includes the outside curve of toroidal machines like the tokamak and stellarator. As this process is highly non-linear, it tends to occur in isolated areas, giving rise to the flute-like expansions as opposed to mass movement of the plasma as a whole.
History
In the 1950s, the field of theoretical plasma physics emerged. The confidential research of the war became declassified and allowed the publication and spread of very influential papers. The world rushed to take advantage of the recent revelations on nuclear energy. Although never fully realized, the idea of controlled thermonuclear fusion motivated many to explore and research novel configurations in plasma physics. Instabilities plagued early designs of artificial plasma confinement devices and were quickly studied partly as a means to inhibit the effects. The analytical equations for interchange instabilities were first studied by Kruskal and Schwarzschild in 1954. They investigated several simple systems including the system in which an ideal fluid is supported against gravity by a magnetic field (the initial model described in the last section).
In 1958, Bernstein derived an energy principle that rigorously proved that the change in potential must be greater than zero for a system to be stable. This energy principle has been essential in establishing a stability condition for the possible instabilities of a specific configuration.
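In symbols, and as a standard statement of the principle (the precise expression for the energy change depends on the configuration and is not reproduced here), stability requires
$$\delta W[\boldsymbol{\xi}] > 0 \quad \text{for every admissible plasma displacement } \boldsymbol{\xi},$$
while the existence of any displacement with $\delta W[\boldsymbol{\xi}] < 0$ implies instability.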
In 1959, Thomas Gold attempted to use the concept of interchange motion to explain the circulation of plasma around the Earth, using data from Pioneer III published by James Van Allen. Gold also coined the term “magnetosphere” to describe “the region above the ionosphere in which the magnetic field of the Earth has a dominant control over the motions of gas and fast charged particles.” Marshall Rosenthal and Conrad Longmire described in their 1957 paper how a flux tube in a planetary magnetic field accumulates charge because of the opposing movement of the ions and electrons in the background plasma. Gradient, curvature and centrifugal drifts all send ions in the same direction along the planetary rotation, meaning that there is a positive build-up on one side of the flux tube and a negative build-up on the other. The separation of charges establishes an electric field across the flux tube and therefore adds an E × B motion, sending the flux tube toward the planet. This mechanism supports the interchange instability framework, resulting in the injection of less dense gas radially inward. Since Kruskal and Schwarzschild's papers, a tremendous amount of theoretical work has been done that handles multi-dimensional configurations, varying boundary conditions and complicated geometries.
Studies of planetary magnetospheres with space probes have helped the development of interchange instability theories, especially the comprehensive understanding of interchange motions in Jupiter's and Saturn's magnetospheres.
Instability in a plasma system
The single most important property of a plasma is its stability. MHD and its derived equilibrium equations offer a wide variety of plasma configurations, but the stability of those configurations is not thereby guaranteed. More specifically, the system must satisfy the simple condition
$$\delta W > 0,$$
where $\delta W$ is the change in potential energy for all possible degrees of freedom. Failure to meet this condition indicates that there is a more energetically preferable state. The system will evolve and either shift into a different state or never reach a steady state. These instabilities pose great challenges to those aiming to make stable plasma configurations in the lab. However, they have also granted us an informative tool on the behavior of plasma, especially in the examination of planetary magnetospheres.
This process injects hotter, lower density plasma into a colder, higher density region. It is the MHD analog of the well-known Rayleigh–Taylor instability. The Rayleigh–Taylor instability occurs at an interface in which a lower density liquid pushes against a higher density liquid in a gravitational field. In a similar model with a gravitational field, the interchange instability acts in the same way. However, in planetary magnetospheres co-rotational forces are dominant and change the picture slightly.
Simple models
Let's first consider the simple model of a plasma supported by a magnetic field B in a uniform gravitational field g. To simplify matters, assume that the internal energy of the system is zero, so that static equilibrium is obtained from the balance of the gravitational force and the magnetic field pressure on the boundary of the plasma. The change in the potential energy under an interchange is then $\delta W = \delta W_B + \delta W_g$, the sum of the magnetic and gravitational contributions. If two adjacent flux tubes lying opposite along the boundary (one fluid tube and one magnetic flux tube) are interchanged, the volume element does not change and the field lines are straight. Therefore, the magnetic potential does not change ($\delta W_B = 0$), but the gravitational potential changes, since the fluid tube was moved down along the z axis. Since the change $\delta W_g$ is negative, the total potential is decreasing.
A decreasing potential indicates a more energetically favorable system and consequently an instability. The origin of this instability is in the J × B forces that occur at the boundary between the plasma and the magnetic field. At this boundary there are slight ripple-like perturbations in which the low points must carry a larger current than the high points, since at the low points more plasma is being supported against gravity. The difference in current allows negative and positive charge to build up along the opposite sides of a valley. The charge build-up produces an E field between the hill and the valley. The accompanying E × B drifts are in the same direction as the ripple, amplifying the effect. This is what is physically meant by the “interchange” motion.
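For a rough sense of time scales, the ideal growth rate of this flute mode for a plasma supported against an effective gravity $g$ by a magnetic field is $\gamma = \sqrt{g k}$ for a ripple of wavenumber $k$; the following Python sketch evaluates it (the formula is the standard sharp-boundary result, and the numbers are purely illustrative, not taken from any experiment discussed above):

```python
import math

def flute_growth_rate(g, k):
    """Ideal interchange (flute) growth rate gamma = sqrt(g*k) for a plasma
    supported against an effective gravity g by a magnetic field.
    g: effective gravity (m/s^2); k: ripple wavenumber (1/m)."""
    return math.sqrt(g * k)

# Purely illustrative values: effective gravity 1e6 m/s^2 (e.g. centrifugal)
# and a ripple wavelength of 1 cm.
g_eff = 1.0e6
k = 2.0 * math.pi / 0.01
gamma = flute_growth_rate(g_eff, k)
print(f"growth rate ~ {gamma:.2e} 1/s  (e-folding time ~ {1.0/gamma:.2e} s)")
```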
These interchange motions also occur in plasmas that are in a system with a large centrifugal force. In a cylindrically symmetric plasma device, radial electric fields cause the plasma to rotate rapidly in a column around the axis. Acting opposite to gravity in the simple model, the centrifugal force moves the plasma outward, where the ripple-like perturbations (sometimes called “flute” instabilities) occur on the boundary. This is important for the study of the magnetosphere, in which the co-rotational forces are stronger than the opposing gravity of the planet. Effectively, the less dense “bubbles” are injected radially inward in this configuration.
Without gravity or an inertial force, interchange instabilities can still occur if the plasma is in a curved magnetic field. If the potential energy is assumed to be purely magnetic, then the change in potential energy under an interchange of two flux tubes can be computed; if the fluid is incompressible, the expression simplifies further. Since pressure balance across the boundary must be maintained, the resulting condition shows that the system is unstable if the field lines curve toward the region of higher plasma density. To derive a more rigorous stability condition, the perturbations that cause an instability must be generalized. The momentum equation for resistive MHD is linearized and then manipulated into a linear force operator. For essentially mathematical reasons, the analysis can then be split into two approaches: the normal mode method and the energy method. The normal mode method essentially solves for the eigenmodes and eigenfrequencies and sums the solutions to form the general solution. The energy method is similar to the simpler approach outlined above, where $\delta W$ is found for an arbitrary perturbation in order to test the condition $\delta W > 0$. These two methods are not exclusive and can be used together to establish a reliable diagnosis of stability.
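A common closed form of the resulting criterion, the Rosenbluth–Longmire form, is quoted here as a standard reference result rather than one taken from the sources above. It writes the energy change of interchanging two neighbouring flux tubes as
$$\delta W \simeq \delta p\,\delta V + \gamma p\,\frac{(\delta V)^2}{V}, \qquad V = \oint \frac{dl}{B},$$
so that a sufficient condition for stability is $\delta p\,\delta V > 0$: since the pressure of a confined plasma decreases outward, configurations in which the flux-tube volume $\oint dl/B$ also decreases outward (the "magnetic well" mentioned earlier) are interchange stable.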
Observations in space
The strongest evidence for interchange transport of plasma in any magnetosphere is the observation of injection events. The recording of these events in the magnetospheres of Earth, Jupiter and Saturn is the main tool for the interpretation and analysis of interchange motion.
Earth
Although spacecraft have travelled many times through the inner and outer magnetosphere of Earth since the 1960s, it took a dedicated plasma experiment to first reliably determine the existence of radial injections driven by interchange motions. The analysis revealed that a hot plasma cloud is frequently injected inward during substorms in the outer layers of the magnetosphere. The injections occur predominantly in the night-time hemisphere, being associated with the dipolarization of the neutral sheet configuration in the tail regions of the magnetosphere. These results imply that Earth's magnetotail region is a major mechanism by which the magnetosphere stores and releases energy through the interchange mechanism. The interchange instability has also been found to be a limiting factor on the night-side plasmapause thickness [Wolf et al. 1990]. In that paper, the plasmapause is found to be near the geosynchronous orbit, where the centrifugal and gravitational potentials exactly cancel out. The sharp change in plasma pressure associated with the plasmapause enables this instability. A mathematical treatment comparing the growth rate of the instability with the thickness of the plasmapause boundary revealed that the interchange instability limits the thickness of that boundary.
Jupiter
Interchange instability plays a major role in the radial transport of plasma in the Io plasma torus at Jupiter. The first evidence of this behavior was published by Thorne et al., who discovered “anomalous plasma signatures” in the Io torus of Jupiter's magnetosphere. Using data from the energetic particle detector (EPD) of the Galileo spacecraft, the study looked at one specific event. Thorne et al. concluded that these events had a density differential of at least a factor of 2, a spatial scale of km and an inward velocity of about km/s. These results support the theoretical arguments for interchange transport.
Later, more injection events were discovered and analyzed from Galileo data. Mauk et al. used over 100 Jovian injections to study how these events were dispersed in energy and time. Similar to Earth's injections, the events were often clustered in time. The authors concluded that this indicated the injection events were triggered by solar wind activity against the Jovian magnetosphere. This is very similar to the relationship between magnetic storms and injection events on Earth. However, it was found that Jovian injections can occur at all local time positions and therefore cannot be directly related to the situation in Earth's magnetosphere. Although the Jovian injections are not a direct analog of Earth's injections, the similarities indicate that this process plays a vital role in the storage and release of energy. The difference may lie in the presence of Io in the Jovian system. Io is a large producer of plasma mass because of its volcanic activity, which explains why the bulk of interchange motions are seen in a small radial range near Io.
Saturn
Recent evidence from the Cassini spacecraft has confirmed that the same interchange process is prominent on Saturn. Unlike at Jupiter, the events happen much more frequently and more clearly. The difference lies in the configuration of the magnetosphere. Since Saturn's magnetic field is much weaker, the gradient/curvature drift for a given particle energy and L value is about 25 times faster. Saturn's magnetosphere provides a much better environment for the study of interchange instability under these conditions, even though the process is essential at both Jupiter and Saturn. In a case study of one injection event, the Cassini Plasma Spectrometer (CAPS) produced characteristic radial profiles of plasma densities and temperatures of the plasma particles that also allowed the calculation of the origin of the injection and the radial propagation velocity. The electron density inside the event was lowered by a factor of about 3, the electron temperature was higher by an order of magnitude than the background, and there was a slight increase in the magnetic field. The study also used a model of pitch angle distributions to estimate that the event originated between and had a radial speed of about 260 +60/−70 km/s. These results are similar to the Galileo results discussed earlier, and the similarities imply that the Saturn and Jupiter processes are the same.
See also
Plasma stability
Magnetic mirror
Fusion power
References
Plasma instabilities | Interchange instability | [
"Physics"
] | 4,144 | [
"Plasma phenomena",
"Physical phenomena",
"Plasma instabilities"
] |
53,199,888 | https://en.wikipedia.org/wiki/Intectin | Intectin is a Ly-6 family protein which is anchored to glycosylphosphatidylinositol on intestinal epithelial cells. Intectin has been shown to maintain the integrity of the intestinal wall by inducing apoptosis of intestinal epithelial cells upon exposure to dietary palmitic acid. Mice treated with the prebiotic oligofructose showed improved intestinal homeostasis as indicated by increased intectin.
References
Proteins | Intectin | [
"Chemistry"
] | 100 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
59,468,856 | https://en.wikipedia.org/wiki/Snecked%20masonry | Snecked masonry has a mixture of roughly squared stones of different sizes. It is laid in horizontal courses with rising stones projecting through the courses of smaller stones. Yet smaller fillers called snecks also occur in the courses. The mixture of stone sizes produces a strong bond and an attractive finish. Large amounts of planning for bricklaying process should be considered, as the corners cannot mould perfectly into every size stone. Additional stonecutting and on-the-scene stonecrafting skills may be required.
References
Masonry
Building materials
Stonemasonry
Building stone | Snecked masonry | [
"Physics",
"Engineering"
] | 115 | [
"Masonry",
"Building engineering",
"Construction",
"Stonemasonry",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
59,471,688 | https://en.wikipedia.org/wiki/Thermoelectric%20acclimatization | Thermoelectric acclimatization depends on the possibility of a Peltier cell of absorbing heat on one side and rejecting heat on the other side. Consequently, it is possible to use them for heating on one side and cooling on the other and as a temperature control system.
Peltier cell heat pump
A typical Peltier cell based heat pump can be realized by coupling thermoelectric generators with photovoltaic air-cooled panels, as defined in the PhD thesis of Alexandra Thedeby. The system is coupled with an air plant that provides heating on one side and cooling on the other; by changing the configuration, it allows both winter and summer acclimatization. These elements are expected to be effective for zero-energy buildings if coupled with solar thermal energy and photovoltaics, with particular reference to creating radiant heat pumps on the walls of a building.
It must be remarked that this acclimatization method achieves its best efficiency during summer cooling if coupled with a photovoltaic generator. The air circulation could also be used to cool the PV modules.
The most important engineering requirement is the accurate design of heat sinks to optimize the heat exchange and minimize the fluid-dynamic losses.
Thermodynamic parameters
The ideal efficiency is bounded by the Carnot relation:
$$\varepsilon_{\max} = \frac{T_c}{T_h - T_c},$$
where $T_c$ is the temperature of the cooling surface and $T_h$ is the temperature of the heating surface.
The key energy phenomena, and the reason for using thermoelectric elements as heat pumps, reside in the energy fluxes that these elements realize:
Conductive power: $Q_{cond} = \dfrac{k S}{d}\,(T_h - T_c)$
Heat flux on the cold side: $Q_c = \alpha I T_c - \dfrac{1}{2} R I^2 - \dfrac{k S}{d}\,(T_h - T_c)$
Heat flux on the hot side: $Q_h = \alpha I T_h + \dfrac{1}{2} R I^2 - \dfrac{k S}{d}\,(T_h - T_c)$
Electric power: $W = \alpha I\,(T_h - T_c) + R I^2$
where the following terms are used: $I$, electric current; $\alpha$, Seebeck coefficient; $R$, electric resistance; $S$, surface area; $d$, cell thickness; and $k$, thermal conductivity.
The efficiencies of the system are:
Cooling efficiency: $\varepsilon_c = \dfrac{Q_c}{W}$
Heating efficiency: $\varepsilon_h = \dfrac{Q_h}{W}$
COP can be calculated according to Cannistraro.
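As a numerical illustration of the relations above, the following Python sketch evaluates the heat fluxes and both efficiencies for a hypothetical module; every parameter value is invented for illustration and none is taken from the sources above:

```python
def peltier_performance(alpha, R, k, S, d, I, T_c, T_h):
    """Heat fluxes and efficiencies of an idealized Peltier module.
    alpha: Seebeck coefficient (V/K), R: electrical resistance (ohm),
    k: thermal conductivity (W/m/K), S: surface area (m^2),
    d: cell thickness (m), I: current (A), T_c/T_h: cold/hot side (K)."""
    K = k * S / d                                      # thermal conductance (W/K)
    Q_cond = K * (T_h - T_c)                           # conductive back-flow
    Q_c = alpha * I * T_c - 0.5 * I**2 * R - Q_cond    # absorbed on cold side
    Q_h = alpha * I * T_h + 0.5 * I**2 * R - Q_cond    # rejected on hot side
    W = alpha * I * (T_h - T_c) + I**2 * R             # electrical input
    return Q_c, Q_h, W, Q_c / W, Q_h / W

# Hypothetical module: alpha = 0.05 V/K, R = 1.5 ohm, k = 1.5 W/m/K,
# S = 16 cm^2, d = 4 mm, I = 3 A, T_c = 290 K, T_h = 310 K.
Q_c, Q_h, W, cop_c, cop_h = peltier_performance(
    0.05, 1.5, 1.5, 16e-4, 4e-3, 3.0, 290.0, 310.0)
print(f"Q_c = {Q_c:.1f} W, Q_h = {Q_h:.1f} W, W = {W:.1f} W")
print(f"cooling COP = {cop_c:.2f}, heating COP = {cop_h:.2f}")
```

Note that the energy balance $Q_h - Q_c = W$ holds exactly in this model, which is a quick consistency check on the formulas.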
Final uses
Thermoelectric heat pumps can be used both for local acclimatization, removing local discomfort situations such as those that may appear in the presence of large glazed surfaces, and for small-building acclimatization if coupled with solar systems. For example, thermoelectric ceilings are today at an advanced research stage, with the aim of increasing indoor comfort conditions according to Fanger.
Those systems are of key importance for new zero-emission passive buildings because of their very high COP values and the correspondingly high performance achievable through an accurate exergy optimization of the system.
At the industrial level, thermoelectric acclimatization appliances are currently under development.
References
Heating
Thermodynamics | Thermoelectric acclimatization | [
"Physics",
"Chemistry",
"Mathematics"
] | 554 | [
"Thermodynamics",
"Dynamical systems"
] |
54,494,255 | https://en.wikipedia.org/wiki/Chandrasekhar%27s%20white%20dwarf%20equation | In astrophysics, Chandrasekhar's white dwarf equation is an initial value ordinary differential equation introduced by the Indian American astrophysicist Subrahmanyan Chandrasekhar, in his study of the gravitational potential of completely degenerate white dwarf stars. The equation reads as
with initial conditions
where measures the density of white dwarf, is the non-dimensional radial distance from the center and is a constant which is related to the density of the white dwarf at the center. The boundary of the equation is defined by the condition
such that the range of becomes . This condition is equivalent to saying that the density vanishes at .
Derivation
From the quantum statistics of a completely degenerate electron gas (all the lowest quantum states are occupied), the pressure and the density of a white dwarf are calculated in terms of the maximum electron momentum $p_0$, standardized as $x = p_0/(mc)$, with pressure $P = C f(x)$ and density $\rho = D x^3$, where
$$f(x) = x(2x^2-3)(x^2+1)^{1/2} + 3\sinh^{-1}x,$$
$$C = \frac{\pi m^4 c^5}{3 h^3}, \qquad D = \frac{8\pi \mu H m^3 c^3}{3 h^3},$$
$m$ is the electron mass, $c$ is the speed of light, $H$ is the mass of the hydrogen atom, $\mu$ is the mean molecular weight of the gas, and $h$ is Planck's constant.
When this is substituted into the hydrostatic equilibrium equation
$$\frac{1}{r^2}\frac{d}{dr}\left(\frac{r^2}{\rho}\frac{dP}{dr}\right) = -4\pi G \rho,$$
where $G$ is the gravitational constant and $r$ is the radial distance, we get
$$\frac{1}{r^2}\frac{d}{dr}\left(r^2 \frac{d}{dr}\sqrt{x^2+1}\right) = -\frac{\pi G D^2}{2C}\, x^3,$$
and letting $y^2 = x^2 + 1$, we have
$$\frac{1}{r^2}\frac{d}{dr}\left(r^2 \frac{dy}{dr}\right) = -\frac{\pi G D^2}{2C}\,(y^2-1)^{3/2}.$$
If we denote the density at the origin as $\rho_0 = D x_0^3 = D (y_0^2-1)^{3/2}$, then the non-dimensional scale
$$r = \left(\frac{2C}{\pi G}\right)^{1/2} \frac{\eta}{D y_0}, \qquad y = y_0 \varphi$$
gives
$$\frac{1}{\eta^2}\frac{d}{d\eta}\left(\eta^2\frac{d\varphi}{d\eta}\right) = -\left(\varphi^2 - \frac{1}{y_0^2}\right)^{3/2},$$
where $y_0^2 = x_0^2 + 1$. In other words, once the above equation is solved, the density is given by
$$\rho = D y_0^3 \left(\varphi^2 - \frac{1}{y_0^2}\right)^{3/2}.$$
The mass interior to a specified point can then be calculated as
$$M(\eta) = 4\pi a^3 D y_0^3 \int_0^{\eta} \left(\varphi^2 - \frac{1}{y_0^2}\right)^{3/2} \eta'^2\, d\eta' = -4\pi a^3 D y_0^3\, \eta^2 \frac{d\varphi}{d\eta}, \qquad a = \left(\frac{2C}{\pi G}\right)^{1/2} \frac{1}{D y_0},$$
where the second equality follows from the differential equation itself.
The radius–mass relation of the white dwarf is usually plotted in the mass–radius plane.
Solution near the origin
In the neighborhood of the origin, $\eta \ll 1$, Chandrasekhar provided an asymptotic expansion as
$$\varphi = 1 - \frac{q^3}{6}\,\eta^2 + \frac{q^4}{40}\,\eta^4 - \cdots,$$
where $q^2 = 1 - 1/y_0^2$. He also provided numerical solutions for a range of values of $1/y_0^2$.
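These series values can be checked by direct numerical integration. The following Python sketch (using SciPy, with an arbitrarily chosen value of $y_0$; none of the numbers come from Chandrasekhar's tables) integrates the equation outward from the origin and stops at the surface $\eta_1$, where $\varphi = 1/y_0$:

```python
from scipy.integrate import solve_ivp

def rhs(eta, u, y0):
    """White dwarf equation as a first-order system u = [phi, phi'];
    phi'' = -(phi^2 - 1/y0^2)^(3/2) - (2/eta) * phi'."""
    phi, dphi = u
    f = max(phi**2 - 1.0 / y0**2, 0.0) ** 1.5
    # The regular solution has phi'' = -f/3 at the origin (from the series).
    ddphi = -f / 3.0 if eta == 0.0 else -f - 2.0 * dphi / eta
    return [dphi, ddphi]

def surface(eta, u, y0):
    return u[0] - 1.0 / y0        # density vanishes where phi = 1/y0

surface.terminal = True

y0 = 2.0
sol = solve_ivp(rhs, [0.0, 50.0], [1.0, 0.0], args=(y0,),
                events=surface, rtol=1e-10, atol=1e-12)
print(f"y0 = {y0}: surface at eta_1 = {sol.t_events[0][0]:.4f}")
```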
Equation for small central densities
When the central density is small, the equation can be reduced to a Lane–Emden equation by introducing a suitably rescaled dependent variable and radial coordinate, to obtain, at leading order, the Lane–Emden equation with polytropic index $n = 3/2$, subject to appropriate conditions at the origin. Note that although the equation reduces to the Lane–Emden equation with polytropic index $3/2$, the initial condition is not that of the Lane–Emden equation.
Limiting mass for large central densities
When the central density becomes large, i.e., $y_0 \gg 1$ or equivalently $x_0 \gg 1$, the governing equation reduces to
$$\frac{1}{\eta^2}\frac{d}{d\eta}\left(\eta^2\frac{d\varphi}{d\eta}\right) = -\varphi^3,$$
subjected to the conditions $\varphi(0) = 1$ and $\varphi'(0) = 0$. This is exactly the Lane–Emden equation with polytropic index $3$. Note that in this limit of large densities, the radius
$$R = \left(\frac{2C}{\pi G}\right)^{1/2} \frac{\eta_1}{D y_0}$$
tends to zero. The mass of the white dwarf however tends to a finite limit
$$M \rightarrow \frac{4\pi}{D^2}\left(\frac{2C}{\pi G}\right)^{3/2}\left(-\eta^2\frac{d\varphi}{d\eta}\right)_{\eta=\eta_1}.$$
The Chandrasekhar limit follows from this limit.
See also
Emden–Chandrasekhar equation
Tolman–Oppenheimer–Volkoff equation
References
Equations of astronomy
Equations of physics
Fluid dynamics
Stellar dynamics
White dwarfs
Ordinary differential equations | Chandrasekhar's white dwarf equation | [
"Physics",
"Chemistry",
"Astronomy",
"Mathematics",
"Engineering"
] | 514 | [
"Equations of physics",
"Concepts in astronomy",
"Chemical engineering",
"Mathematical objects",
"Astrophysics",
"Equations",
"Equations of astronomy",
"Piping",
"Fluid dynamics",
"Stellar dynamics"
] |
54,494,563 | https://en.wikipedia.org/wiki/Limit%20and%20colimit%20of%20presheaves | In category theory, a branch of mathematics, a limit or a colimit of presheaves on a category C is a limit or colimit in the functor category .
The category $\widehat{C}$ admits small limits and small colimits. Explicitly, if $f \colon I \to \widehat{C}$ is a functor from a small category $I$ and $U$ is an object in $C$, then the colimit $\varinjlim f$ is computed pointwise:
$$\left(\varinjlim f\right)(U) = \varinjlim \left(f(U)\right),$$
where $f(U)$ denotes the functor $i \mapsto f(i)(U)$.
The same is true for small limits. Concretely this means that, for example, a fiber product exists and is computed pointwise.
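For instance, spelling out the pointwise computation for the fiber product of presheaves $F \to H \leftarrow G$ (a direct special case of the statement above, included for concreteness):
$$(F \times_H G)(U) = F(U) \times_{H(U)} G(U).$$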
When $C$ is small, by the Yoneda lemma one can view $C$ as the full subcategory of $\widehat{C}$. If $\eta \colon C \to D$ is a functor, if $f \colon I \to C$ is a functor from a small category $I$ and if the colimit $\varinjlim f$ in $\widehat{C}$ is representable, i.e., isomorphic to an object in $C$, then, in $D$,
$$\eta\left(\varinjlim f\right) \simeq \varinjlim \left(\eta \circ f\right)$$
(in particular the colimit on the right exists in $D$).
The density theorem states that every presheaf is a colimit of representable presheaves.
Notes
References
Category theory
Sheaf theory | Limit and colimit of presheaves | [
"Mathematics"
] | 215 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Sheaf theory",
"Mathematical relations",
"Category theory",
"Topology"
] |
51,624,530 | https://en.wikipedia.org/wiki/Working%20level | Working level (WL) is a historical unit of concentration of radioactive decay products of radon, applied to uranium mining environment. One working level refers to the concentration of short-lived decay products of radon in equilibrium with 3,700 Bq/m (100 pCi/L) in air. These decay products would emit 1.3 × 10 MeV in complete decay. The Nuclear Regulatory Commission uses this definition.
Working level month (WLM) is a closely related quantity, referring to exposure to one working level for 170 hours per month. This comes from assuming a 40-hour work week.
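Concretely, cumulative exposure is then a simple product; this merely restates the definition above and is not an additional regulatory formula:
$$\text{exposure [WLM]} = \text{concentration [WL]} \times \frac{\text{exposure time [hours]}}{170}.$$
For example, 170 hours spent at a concentration of 0.3 WL accumulates 0.3 WLM.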
In 2002, NRC regulations limited exposure in mines to 0.3 WL, which was comparable with the standards of the International Commission on Radiological Protection at the time.
References
Units of radiation dose
Radon
Radiation protection
Mine safety | Working level | [
"Mathematics"
] | 172 | [
"Quantity",
"Units of radiation dose",
"Units of measurement"
] |
51,626,919 | https://en.wikipedia.org/wiki/IBI%20Group | IBI Group Inc. is a Canadian-based architecture, engineering, planning, and technology firm operating from over 60 offices in 12 countries across the world.
Founded in 1974 in Toronto, Canada, IBI Group has since been ranked as one of the largest architecture or architecture/engineering firms in the world: in 2011 it ranked 4th or 6th (depending on the methodology used); in 2016 it was ranked as the 8th largest architecture firm (with 836 fee-earning architects) by BD Online; and in 2016 its United States operations were ranked by ArchDaily as the 13th largest architecture firm in the USA.
As of 2022, IBI Group has approximately 3,400 employees and more than 60 offices located across six continents. IBI Group's consulting services business is concentrated in three practice areas: Intelligence, Buildings and Infrastructure. By integrating productivity tools, processes and technology innovations developed through IBI's Intelligence practice, the company has been able to drive incremental growth in its traditional Buildings and Infrastructures practices, while generating more efficient results for IBI clients.
On September 27, 2022, it was acquired by Arcadis.
History
The IBI Group was founded in Toronto by nine partners to provide professional planning and design services for urban development and transportation projects.
The firm merged with Robbie/Young + Wright Architects to become Robbie Young + Wright / IBI Group Architects, with noted Toronto architect Rod Robbie as chairman emeritus. In 2004 the firm became a publicly owned entity through the formation of the IBI Income Fund. In 2010 the Fund was converted to a corporation, IBI Group Inc.
The firm's name was derived from the last initials of its two founding principals, Neal Irwin and Phil Beinhaker. The firm has rebranded itself, stating that IBI stands for "Intelligence, Buildings, and Infrastructure."
In September 2022, IBI Group was acquired by Arcadis.
Major acquisitions
Since 2000 the firm has expanded through mergers and acquisitions of consulting firms in multiple locations. Some have been folded into the IBI Group brand and others have maintained a distinct identity. The major acquisitions below are listed in chronological order.
Cumming Cockburn
In 2004, IBI Group acquired the Ontario architecture and consulting firm Cumming Cockburn, as well as its subsidiaries CCL Consultants and Marshall Cumming & Associates.
Vancouver office
The Vancouver office expanded through the 2005 merger of Hancock Bruckner, Eng + Wright; Lawrence Doyle Architects; and Young + Wright Architects.
Grey-Noble & Grey-Noble
In 2005 the Newmarket, Ontario-based architectural firm of Grey-Noble & Grey-Noble was acquired.
Thomas Blurock Architects
In 2006 the Costa Mesa, California-based educational project-focused firm of Thomas Blurock Architects was acquired and incorporated.
Page+Steele
In 2008 the Toronto-based firm of Page+Steele, Architects was acquired and operates as Page+Steele/IBI Group.
Gruzen Samton Architects
In 2009 the New York City based firm of Gruzen Samton Architects, Planners & Interior Designers was acquired. The firm was founded in 1936 and operates as IBI Group.
Group Architects
In 2009 the small Toronto-based firm of Group Architects was acquired. IBI relocated and redistributed its team to a new location, and the firm's separate identity and presence dissolved entirely over the following years.
BFGC Architects Planners
In 2009, BFGC Architects Planners, with offices in Bakersfield, San Luis Obispo and San Jose, California, was acquired.
Nightingale Architects
In 2010, Nightingale Architects, with four offices in the United Kingdom, including in London and Cardiff, was acquired for £13.1 million.
Dull Olson Weekes Architects
In 2010, IBI acquired the Portland, Oregon-based firm of Dull Olson Weekes Architects, a regional specialist in the design of educational facilities with offices in Portland and Seattle, Washington. It has received multiple awards for its work, including the CEFPI/A4LE James D. MacConnell Award for excellence in design and planning, in 2009 for the Rosa Parks School and Community Campus at New Columbia, in 2014 for Trillium Creek Primary School, and in 2020 as a finalist for Mary Lyon Elementary School. The firm operates as IBI Group Architects.
Cardinal Hardy Architectes
In 2011, the Quebec-based firm of Cardinal Hardy merged with Beinhaker Architecte (within the IBI Group), and became known as Cardinal Hardy Beinhaker Architecte. Groupe Cardinal Hardy merged into the IBI Group. Three years later, in late 2014, it was sold to Montreal-based architecture group Lemay.
Carol R. Johnson Associates
In 2011 the Boston based landscape architecture firm Carol R. Johnson Associates was acquired.
Bay Architects
In 2011 the Houston, Texas-based firm of Bay Architects was acquired.
Taylor Young
In 2012, Taylor Young, a United Kingdom-based architectural and master-planning practice headquartered in Cheshire and with offices in Liverpool and London, was acquired.
M-E Companies
In 2012, M-E Companies, an Ohio-based engineering firm with offices in Westerville, Cincinnati and Canton was acquired.
Aspyr
IBI acquired the British Columbia-based Aspyr Engineering on September 3, 2019.
Cole Engineering Group
IBI acquired the Cole Engineering Group on December 1, 2020.
Major projects
Major projects, ordered by type, are:
Masterplans
Benxi New City, Benxi, China
2012 Summer Olympics - Travel demand management program, London
CaféTO, Toronto
Al Bandar Development Master Plan, Muscat
Bhubaneswar Smart City Strategy and Implementation, Bhubaneswar
Government
States of Jersey Police headquarters, Jersey
Fire Station 16 and Calgary Fire Department Headquarter, Calgary
Cultural
Parliament of Canada Visitor Centre Phase 1 (with Moriyama & Teshima Architects), Ottawa
Boca Raton Center for the Arts and Innovation, Boca Raton
Education
41 Cooper Square, New York City
Diamond Ranch High School (executive architect), Pomona, California
École secondaire catholique Père-Philippe-Lamarche, Toronto, Ontario
Franklin High School renovation
Heschel School - Ronald P. Stanton Campus, New York City
Rosa Parks School and Community Campus at New Columbia
Ridgeview High School (Redmond, Oregon)
Sabine Pass K-12 School, Sabine Pass
San Jacinto College Maritime Center, Houston
Sandy High School, Sandy, Oregon
School of One, New York City
Stuyvesant High School, New York City
Trillium Creek Primary School, Portland
Transportation
Evergreen Point Floating Bridge, Lake Washington, Washington
Pioneer Village station, Toronto
Victoria Park station (Toronto) renovation
Confederation Line, Ottawa - station design
Bloomington GO Station, Richmond Hill, Ontario
Line 5 Eglinton, Toronto
Office
Ericsson R&D Complex, Research Triangle Park, North Carolina
Boston Landing, New Balance World Headquarters, Boston
Leisure
Delta Toronto Hotel, Toronto
Oceanside Dolphin Hotel, San Diego
Mixed use
Holt Renfrew, Calgary, Vancouver and Mississauga
Parq Vancouver, Vancouver
Residential
88 Scott Street, Toronto
Atlantis The Royal, Dubai
Healthcare
BC Cancer Research Centre, Vancouver (with Henriquez Partners Architects)
Optegra Eye Hospital, London
Queen Elizabeth University Hospital, Glasgow
Royal Hospital for Children, Glasgow
Products
HotSpot
IBI Group acquired HotSpot in June 2022 for $5.74 million. Founded in 2013, HotSpot allows users to pay for municipal parking from their phones, or pay for and receive real-time updates about bus services, as well as order and pay for taxis.
CurbIQ
CurbIQ is IBI Group's curbside management tool intended to allow municipalities and mobility companies to manage curbside operations by digitizing their regulation. It was created as a result of IBI Group's Curbside Management Strategy created for the City of Toronto for the 2015 Pan American and Parapan American Games.
CurbIQ consists of four modules:
Curb Viewer - map-based visualization tool allows municipalities to visualise their existing curbside regulations.
Curb Manager - simplified GIS platform for municipalities to efficiently manage their curbside by adding, removing, or modifying curbside regulations.
Curb Analyzer - quantifies the designations of curb spaces to provide city planners with trends on their usage
Curb Rules API - to allow transportation network companies, such as ridesharing applications, and commercial vehicle dispatches, to add information about curbside regulations to their own applications.
CurbIQ was used to launch a SENATOR pilot project in Dublin, Ireland that aimed to create a new logistics system to improve the city's transportation network in 2022.
Nspace
Nspace is a desk and conference room booking and visitor management application intended to support flexible work arrangements.
Acquisition by Arcadis
IBI Group announced on July 18, 2022, that it has entered into an agreement with the Dutch design, engineering and management consulting company Arcadis to "acquire all issued and outstanding shares" for $19.50 per share, a thirty percent premium on the day's closing price. The approximately $873 million acquisition was finalised in September 2022 after a shareholder vote.
References
External links
IBI Group corporate structure (see page 7)
2022 mergers and acquisitions
Architecture firms of Canada
Companies based in Toronto
Architecture firms based in Oregon
Engineering consulting firms of Canada
International engineering consulting firms
Companies formerly listed on the Toronto Stock Exchange | IBI Group | [
"Engineering"
] | 1,873 | [
"Engineering consulting firms",
"International engineering consulting firms"
] |
51,630,164 | https://en.wikipedia.org/wiki/NGC%20228 | NGC 228 is a spiral galaxy located in the constellation Andromeda. It was discovered on October 10, 1879 by Édouard Stephan.
References
External links
0228
Barred spiral galaxies
Andromeda (constellation)
Discoveries by Édouard Stephan
002563 | NGC 228 | [
"Astronomy"
] | 50 | [
"Andromeda (constellation)",
"Constellations"
] |
51,631,696 | https://en.wikipedia.org/wiki/Project%20VR-190 | VR-190 (; Vysotnaya Raketa, literally, high-altitude rocket) was the USSR's first rocket project designed to launch a human into suborbital space flight on a ballistic trajectory. The project ran in the 1940s and 1950s and, according to official sources, did not achieve its set goals. However, conspiracy theories surrounding the project claim that although crewed flights officially failed, cosmonauts were successfully sent into space in the 1950s.
History
Origins
On 13 May 1946, according to a secret decree by the Soviet Government, large-scale rocket research was established in the USSR. The official establishment of the rocket industry was preceded by a working group run by Mikhail Tikhonravov and Nikolai Chernyshov at the NII-4 in the Academy of Artillery Science. In the autumn of 1945, the group ran its own stratospheric rocket programme, which culminated in the development of the VR-190, a rocket system for vertical flight for two pilots up to an altitude of 200 km based on captured German V-2 (A-4) rockets.
In February 1946, the project was presented to the Secretary of the USSR Academy of Sciences, N. G. Bruevich, and then to the Academy's president Sergey Vavilov in March. Positively received, the project was introduced to the Minister of the aviation industry, Mikhail Khrunichev, in June.
Testing and development
Testing of uncrewed and crewed flights was conducted at Kapustin Yar in the Astrakhan region. The test flights were reported to have lasted about 20 minutes during which the rocket reached a height of more than 100 km (Kármán line) in the upper atmosphere, with their payloads separating from the warheads. All pilots descended back using parachutes and landed a few kilometers from the launch site.
The VR-190 project was implemented at roughly the same time, with high-altitude rockets with sealed warheads and life-support systems being tested. They also conducted flights with animal passengers in order to assess the combined effects of various factors that could also affect human passengers. Several suborbital flights with dogs were carried out: with R-1B and R-1V rockets (1951) - when dogs Dezik and Roma were the first animals in history to successfully complete sub-orbital spaceflight - with R-1D and R-1E rockets (1954-1957), R-1E rockets (1957-1960) and R-2A and R-5A rockets.
According to official accounts, the project never reached the stage of human flight, and was canceled due to the lack of success in the late 1950s. Work refocused on creating the orbital crewed capsule Vostok.
The project was strictly kept secret, with designers, scientists, and even the dogs operating under pseudonyms. The first public information on the project became available in the 1980s and was of a purely theoretical nature. Its practical implementation and the first flight of the dogs on rockets was officially disclosed in 1991.
See also
Point-to-point sub-orbital spaceflight
Orbital spaceflight
Spaceflight
Spaceport
List of rocket launch sites
Office of Commercial Space Transportation
Canadian Arrow
Supersonic Transport
XCOR Lynx
Rocketplane XP
DH-1 (rocket)
McDonnell Douglas DC-X
Interorbital Systems
Quad (rocket)
Lunar Lander Challenge
Reusable Vehicle Testing program by JAXA
Project Morpheus NASA program to continue developing ALHAT and Quad landers
References
Human spaceflight programs
Space program of the Soviet Union
Suborbital spaceflight | Project VR-190 | [
"Engineering"
] | 725 | [
"Space programs",
"Human spaceflight programs"
] |
68,739,969 | https://en.wikipedia.org/wiki/Orca%20%28carbon%20capture%20plant%29 | The Orca carbon capture plant is a facility that uses direct air capture to remove carbon dioxide from the atmosphere (The name, "Orca" comes from the Icelandic word, "orka" which means "energy". It was constructed by Climeworks and is joint work with Carbfix, an academic-industrial partnership that has developed a novel approach to capture . The plant uses dozens of large fans to pull in air and pass it through a filter. The filter is then released of the it contains through heat. The extracted is later mixed with water and pushed into the ground, using a technology from Carbfix.
The plant started sequestering carbon dioxide in 2021. It is said to have cost between $10 million and $15 million to build. It is the largest facility of its kind on earth, located in Iceland about 50 kilometers outside Reykjavík, next to the Hellisheiði Power Station, which is run by Reykjavík Energy. It was inaugurated on 8 September 2021 in the presence of Katrín Jakobsdóttir, the Prime Minister of Iceland.
Carbon offsetting potential
Climeworks claims that the plant can capture 4,000 tons of CO2 per year, which equates roughly to the emissions from about 870 cars. It counts Microsoft founder Bill Gates and the reinsurance company Swiss Re as current customers.
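The per-car figure implied by these two numbers, obtained by simple arithmetic on the figures quoted above rather than from any additional Climeworks claim, is
$$\frac{4000\ \text{t CO}_2/\text{year}}{870\ \text{cars}} \approx 4.6\ \text{t CO}_2\ \text{per car per year}.$$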
The thousands of tons of carbon dioxide being removed are due to the nearly 20 direct air capture plants currently operating in the world. As the world's climate heads towards 2 degrees Celsius of warming, more such technology is urgently needed to keep the climate from reaching severe temperatures.
References
Carbon capture and storage
Buildings and structures in Iceland | Orca (carbon capture plant) | [
"Engineering"
] | 334 | [
"Geoengineering",
"Carbon capture and storage"
] |
73,078,000 | https://en.wikipedia.org/wiki/Porous%20medium%20equation | The porous medium equation, also called the nonlinear heat equation, is a nonlinear partial differential equation taking the form:where is the Laplace operator. It may also be put into its equivalent divergence form:where may be interpreted as a diffusion coefficient and is the divergence operator.
Solutions
Despite being a nonlinear equation, the porous medium equation may be solved exactly using separation of variables or a similarity solution. However, the separation of variables solution is known to blow up to infinity at a finite time.
Barenblatt-Kompaneets-Zeldovich similarity solution
The similarity approach to solving the porous medium equation was taken by Barenblatt and Kompaneets/Zeldovich, which for $m > 1$ was to find a solution satisfying
$$u(x,t) = t^{-\alpha}\, f\!\left(\frac{x}{t^{\beta}}\right)$$
for some unknown function $f$ and unknown constants $\alpha$, $\beta$. The final solution to the porous medium equation under these scalings is
$$u(x,t) = t^{-\alpha}\left(C - \frac{\alpha(m-1)}{2md}\,\frac{\|x\|^2}{t^{2\beta}}\right)_+^{1/(m-1)},$$
where $\|\cdot\|$ is the Euclidean ($\ell^2$) norm, $(\cdot)_+$ is the positive part, $C$ is a free constant set by the initial mass, and the coefficients are given by
$$\alpha = \frac{d}{d(m-1)+2}, \qquad \beta = \frac{\alpha}{d},$$
with $d$ the number of spatial dimensions.
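The following Python sketch evaluates this similarity solution in one dimension and checks numerically that its spatial integral, the conserved mass, is the same at different times; the values of $m$, $C$, and the grid are chosen arbitrarily for illustration:

```python
import numpy as np

def barenblatt(x, t, m=2.0, d=1, C=1.0):
    """Barenblatt-Kompaneets-Zeldovich solution of u_t = Laplace(u^m) (m > 1),
    with alpha = d/(d*(m-1)+2) and beta = alpha/d."""
    alpha = d / (d * (m - 1.0) + 2.0)
    beta = alpha / d
    k = alpha * (m - 1.0) / (2.0 * m * d)
    profile = C - k * x**2 / t**(2.0 * beta)
    return t**(-alpha) * np.maximum(profile, 0.0) ** (1.0 / (m - 1.0))

# The integral of u over space (the mass) should be independent of t.
x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]
for t in (0.5, 1.0, 4.0):
    mass = barenblatt(x, t).sum() * dx
    print(f"t = {t}: mass ~ {mass:.6f}")
```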
Applications
The porous medium equation has been found to have a number of applications in gas flow, heat transfer, and groundwater flow.
Gas flow
The porous medium equation name originates from its use in describing the flow of an ideal gas in a homogeneous porous medium. We require three equations to completely specify the medium's density $\rho$, flow velocity field $\mathbf{v}$, and pressure $p$: the continuity equation for conservation of mass; Darcy's law for flow in a porous medium; and the ideal gas equation of state. These equations are summarized below:
$$\varepsilon\,\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0 \quad \text{(continuity)},$$
$$\mathbf{v} = -\frac{k}{\mu}\,\nabla p \quad \text{(Darcy's law)},$$
$$p = p_0\left(\frac{\rho}{\rho_0}\right)^{\gamma} \quad \text{(equation of state)},$$
where $\varepsilon$ is the porosity, $k$ is the permeability of the medium, $\mu$ is the dynamic viscosity, and $\gamma$ is the polytropic exponent (equal to the heat capacity ratio for isentropic processes). Assuming constant porosity, permeability, and dynamic viscosity, the partial differential equation for the density is
$$\frac{\partial \rho}{\partial t} = c\, \Delta\left(\rho^m\right),$$
where $m = \gamma + 1$ and $c = \dfrac{\gamma\, k\, p_0}{(\gamma+1)\,\varepsilon\, \mu\, \rho_0^{\gamma}}$.
Heat transfer
Using Fourier's law of heat conduction, the general equation for temperature change in a medium through conduction is
$$\rho c_p\, \frac{\partial T}{\partial t} = \nabla\cdot\left(\lambda\, \nabla T\right),$$
where $\rho$ is the medium's density, $c_p$ is the heat capacity at constant pressure, and $\lambda$ is the thermal conductivity. If the thermal conductivity depends on temperature according to the power law
$$\lambda = \lambda_0\, T^{n},$$
then the heat transfer equation may be written as the porous medium equation
$$\frac{\partial T}{\partial t} = c\, \Delta\left(T^m\right),$$
with $m = n + 1$ and $c = \dfrac{\lambda_0}{(n+1)\,\rho c_p}$. The thermal conductivity of high-temperature plasmas seems to follow a power law.
See also
Diffusion equation
Porous medium
References
External links
The Porous Medium Equation: Mathematical theory
Partial differential equations
Diffusion
Hydrogeology
Heat transfer
Transport phenomena
Exactly solvable models | Porous medium equation | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 506 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Diffusion",
"Hydrology",
"Chemical engineering",
"Thermodynamics",
"Hydrogeology"
] |
73,080,425 | https://en.wikipedia.org/wiki/Leucocoprinus%20inflatus | Leucocoprinus inflatus is a species of mushroom producing fungus in the family Agaricaceae.
Taxonomy
It was described in 1987 by the mycologist Jörg Raithelhuber who classified it as Leucocoprinus inflatus.
Raithelhuber based his classification on Lepiota trombophora Berk. & Broome ss. Johannes Rick in Iheringia: Série Botânica 8 - Basidiomycetes Eubasidii in Rio Grande do Sul - Brasilia; however, this appears to be a typo, as the correct name (and the name used by Rick) is Lepiota thrombophora.
Agaricus (Lepiota) thrombophorus was described in 1871 by the British mycologists Miles Joseph Berkeley and Christopher Edmund Broome and classified as Lepiota thrombophora by Pier Andrea Saccardo in 1887.
Raithelhuber states that the species described by Rick is not the same as that described by Berk. & Broome however and notes that no preserved material from Rick exists for this species so Raithelhuber's classification of Leucocoprinus inflatus is based on Rick's description with the spore size quoted directly from Rick. Raithelhuber notes that Leucocoprinus inflatus may possibly just be a variety of Leucocoprinus bulbipes and the spore size of the two species is very similar.
Rick also notes a similarity to Lepiota alborussa and Lepiota clypeolariae.
Description
Leucocoprinus inflatus is a small dapperling mushroom.
Cap: 2.5–3.5 cm wide with thin, almost membranous flesh and striations across almost the entire cap. The surface is pale with black fibrous scales and a very dark umbo.
Stem: 3.5–4.5 cm long, 3 mm wide at the top and 7 mm at the bulbous base. The surface is white with a powdery coating and the stem ring is white, wide and hanging.
Gills: White but discolouring slightly yellow with age. Somewhat distant and bulging in the middle with denticulate edges.
Spores: Smooth with a visible pore. 10–12 × 7–9 μm.
Lepiota thrombophora was described by Berk and Broome as having a white, conical cap covered with lumpy brown scales, a furfuraceous stem and white gills.
Etymology
The specific epithet inflatus is Latin for swollen up.
Habitat and distribution
The specimens were found growing on the ground in Brazil.
References
inflatus
Fungi described in 1987
Fungi of South America
Fungus species | Leucocoprinus inflatus | [
"Biology"
] | 564 | [
"Fungi",
"Fungus species"
] |
73,081,805 | https://en.wikipedia.org/wiki/Superconcentrated%20electrolytes | Superconcentrated electrolytes, also known as water-in-salt or solvent-in-salt liquids, usually refer to chemical systems, which are liquid near room temperature and consist of a solvent-to-dissoved salt in a molar ratio near or smaller than ca. 4-8, i.e. where all solvent molecules are coordinated to cations, and no free solvent molecules remain. Since ca. 2010 such liquid electrolytes found several applications, primarily for batteries. In the case of lithium metal batteries and lithium-ion batteries most commonly used anions for superconcentrated electrolytes are those, that are large, asymmetric and rotationally-vibrationally flexible, such as bis(trifluoromethanesulfonyl)amide and bis(fluorosulfonyl)amide. Noteworthy, lithium chloride and sodium perchlorate also form water-in-salt solutions.
Advantages
Superconcentrated electrolytes demonstrate the following advantages:
(1) Many show a good oxidative stability. In particular, some can suppress oxidative corrosion of an Al current collector without a source of fluoride ion (such as hexafluorophosphate) and enable the use of 5 V lithium-ion battery cathode materials.
(2) Some are resistant to electrochemical reduction. It is believed that some sulfonimides (e.g., those with S–F and F–(H)C–N fragments) form a solid electrolyte interface similar to that formed by some organic carbonate solvents. Properties #1 and #2 are responsible for the very large (4–5 volt) voltage window, which is useful for advanced batteries.
(3) Related to #2 is the ability of some superconcentrated electrolytes to allow for reversible intercalation of Li+ ions into graphite in the absence of ethylene carbonate solvent, therefore enabling a new class of safer lithium-ion batteries.
(4) Solvent vapor pressure is lower, thermal stability is higher, and flammability is absent, which contributes to a better battery safety.
(5) The concentration of charge-carrying ion is larger, which translates into smaller ion travelling distances.
(6) In some cases, and contrary to expectations, faster rates of electrode reactions are observed, than in conventional low-salt-concentration electrolytes.
(7) Polysulfide dissolution is sometimes suppressed, which enables cycling of such batteries as lithium-sulfur.
(8) Some studies report that the Li+ transference number in such liquids is close to one, which means that a Li+ concentration gradient between anode and cathode does not develop during the battery's charge and discharge.
(9) Electrodeposition of lithium metal from superconcentrated electrolytes is often nodular (without dendrites) and reversible.
Disadvantages
At the same time, highly concentrated electrolytes are not without disadvantages:
(1) Their ionic conductivity is generally lower than that of corresponding dilute (~1 M) electrolytes.
(2) Their viscosity is higher than that of conventional electrolytes.
(3) Their cost is usually higher, because manufacturing of some anions, such as sulfonimides, requires several low-yield synthetic steps.
Origin of the unusual properties
The exact mechanism of high-voltage stability of superconcentrated electrolytes have not been established as of 2023. The two main proposed mechanisms are:
(1) a decrease of the water molecules' thermodynamic activity when all water molecules are coordinated to cations, such as Li+.
(2) decomposition of an anion with the formation of a solid electrolyte interface.
Most recent studies suggest that the anion decomposition mechanism (2) dominates in a majority of cases.
References
Solutions
Electric battery | Superconcentrated electrolytes | [
"Chemistry"
] | 804 | [
"Homogeneous chemical mixtures",
"Solutions"
] |
73,088,800 | https://en.wikipedia.org/wiki/Conjugated%20oligoelectrolytes | Conjugated oligoelectrolytes, or COEs, are a class of synthetic antimicrobials designed to prevent and circumvent antimicrobial resistance via different mechanism of action than traditional antibiotics. COEs insert into cell membranes and can function as electron transporters, but were found to inhibit bacterial growth. They can also be used for tracking the progress of tumor growth.
References
Bactericides | Conjugated oligoelectrolytes | [
"Biology"
] | 88 | [
"Bactericides",
"Biotechnology stubs",
"Biocides"
] |
73,090,850 | https://en.wikipedia.org/wiki/Cadusafos | Cadusafos (2-[butan-2-ylsulfanyl(ethoxy)phosphoryl]sulfanylbutane) is a chemical insecticide and nematicide often used against parasitic nematode populations. The compound acts as a acetylcholinesterase inhibitor. It belongs the chemical class of synthetic organic thiophosphates and it is a volatile and persistent clear liquid. It is used on food crops such as tomatoes, bananas and chickpeas. It is currently not approved by the European Commission
for use in the EU. Exposure can occur through inhalation, ingestion or contact with the skin. The compound is highly toxic to nematodes,
earthworms and birds but poses no carcinogenic risk to humans.
History
A patent application for Cadusafos was first filed in Europe on August 13, 1982 by FMC Corporation, an American chemical company which originated as an insecticide producer. In their patent application, they claimed that the compound should preferably be used to “control nematodes and soil insects, but may also control some insects which feed on the above ground portions of the plant.” The patent is expired, meaning that the compound is commercially available from chemical vendors such as Sigma Aldrich. However, the pesticide is not approved for use in Europe due to the lack of information on consumer exposure and the risk to groundwater.
Structure and reactivity
Cadusafos is a synthetic organic thiophosphate compound which is observed as a volatile and persistent clear liquid. The toxin is an organothiophosphate insecticide. Organothiophosphorus compounds are identified as compounds which contain carbon–phosphorus bonds where the phosphorus atom is also bound to sulphur. Many of these compounds serve as insecticides and cholinergic agents.
Cadusafos contains a phosphorus atom bound to two sulphur atoms which carry sec-butyl substituents. The phosphorus is also connected to oxygen by a double bond and is bound to an ethoxy group.
The exact reactivity of Cadusafos, as well as that of organothiophosphate compounds in general, is as yet unknown. However, the cholinesterase enzyme inhibition mechanism of action of these compounds works similarly to that of other organophosphates. Examples of organophosphates include nerve gases such as sarin and VX as well as pesticides like malathion.
Synthesis
The synthesis of Cadusafos can be performed via the substitution reaction of O-ethyl phosphoric dichloride and two equivalents of 2-butanethiol.
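As a sanity check on the structure described above, one can rebuild the molecule from a SMILES string; the snippet below assumes the RDKit package is available, and the SMILES is my own hand encoding of O-ethyl S,S-di-sec-butyl phosphorodithioate rather than one quoted from the sources:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# Hand-encoded SMILES for cadusafos: a P(=O) center bearing one ethoxy group
# and two sec-butyl thioether (S-CH(CH3)CH2CH3) groups.
smiles = "CCOP(=O)(SC(C)CC)SC(C)CC"
mol = Chem.MolFromSmiles(smiles)
print(rdMolDescriptors.CalcMolFormula(mol))            # expected: C10H23O2PS2
print(f"molecular weight ~ {Descriptors.MolWt(mol):.1f} g/mol")  # ~270.4
```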
Mechanism of action
Cadusafos is an inhibitor of the enzyme acetylcholinesterase. This enzyme binds to acetylcholine and cleaves it into choline and acetate. Acetylcholine is a neurotransmitter which is used in neurons to pass on a neural stimulus. Cadusafos inhibits the function of acetylcholinesterase by occupying the active site of the enzyme, which will no longer be able to function properly, resulting in the accumulation of acetylcholine. This might result in excessive nervous stimulation, respiratory failure and death.
Cadusafos is an organothiophosphate, which is a subclass of organophosphates. Organophosphates can act as inhibitors of acetylcholinesterase in a way for which the mechanism is known. The active site of acetylcholinesterase contains an anionic site and an esteratic site. The esteratic site contains a serine at the 200th position, which usually binds acetylcholine. Organophosphate inhibitors can phosphorylate this serine and thereby inhibit the enzyme.
Metabolism and biotransformation
In a study, 14C radiolabeled Cadusafos was administered orally to rats. The excretion of feces, urine and CO2 was monitored for seven days. This showed that cadusafos is readily absorbed (90-100%) and mainly eliminated via urine (around 75%), followed by elimination via expired air (10-15%) and via feces (5-15%). Over 90% of the administered dose was eliminated within 48 hours after administration. Analysis of tissue and blood samples collected after seven days showed a remaining radioactivity between 1-3%. The majority of this radioactivity was found in fat, liver, kidney and lung tissue and no evidence of accumulation was found.
A different study was performed in order to identify the metabolites formed in rats after receiving either an oral or intravenous dose of Cadusafos. The metabolic products were analyzed using several analysis methods (HPLC, TLC, GC-MS, 1H-NMR and liquid scintillation). This indicated the presence of the parent compound, Cadusafos, as well as 10 other metabolites. The main pathway of metabolism involves the cleavage of the thio-(sec-butyl) group, forming two primary products: sec-butyl mercaptan and O-ethyl-S-(2-butyl) phosphorothioic acid (OSPA). These intermediate compounds are then degraded further into several metabolites. The major metabolites were hydroxysulfones, followed by phosphorothionic acids and sulfonic acids, which then form conjugates.
Toxicity
A study on rats, conducted by the Joint FAO/WHO Meeting on Pesticide Residues (JMPR), investigated the lethal dose of Cadusafos. The researchers found a median lethal dose via the oral pathway of 68.4 mg/kg bodyweight (bw) in male rats and 82.1 mg/kg bw in female rats. The rats died with typical symptoms of acetylcholinesterase inhibition. Via the dermal pathway, lower median lethal doses were found: mg/kg bw in males and 41.8 mg/kg bw in females.
Regarding toxicity in humans, no data are yet available on the median lethal dose for a human. The United States Environmental Protection Agency (EPA) did publish a report on the safety concerns of Cadusafos used as a pesticide on bananas and concluded that “Potential acute and chronic dietary exposures from eating bananas treated with Cadusafos are below the level of concern for the entire U.S. population, including infants and children.”
Effects on animals
Cadusafos has been shown to be toxic to fish, aquatic invertebrates, bees, earthworms and other arthropods. Further research was conducted on terrestrial vertebrates, and it is expected to have toxic effects on mammals. Besides its direct toxicity to multiple species, Cadusafos also has the potential to bioaccumulate, so secondary poisoning of earthworm-eating mammals and birds should also be taken into consideration. Owing to the way Cadusafos is applied, the estimated risk to bees and aquatic organisms is low, even though its toxicity to bees is high. The compound is also estimated to be highly toxic to earthworms and birds. A multigeneration study in rats established a no-observed-adverse-effect level (NOAEL) of 0.03 mg/kg bw per day for the inhibition of cholinesterase activity in plasma and erythrocytes. There is no adequate evidence that Cadusafos is genotoxic. Based on this, and on additional research in mice and rats which found Cadusafos to be non-carcinogenic, it can be concluded that Cadusafos is non-carcinogenic for humans.
Efficacy
Cadusafos proved to be very effective against parasitic nematode populations such as Rotylenchulus reniformis and Meloidogyne incognita. It was shown to be more effective against endoparasitic nematodes than ectoparasitic nematodes, and when compared to other nematicides such as triazophos, methyl bromide, aldicarb, carbofuran and phorate, Cadusafos proved the most efficient. The effectiveness of Cadusafos improves with increasing dosage or exposure time. Efficacy appeared to remain stable for up to four successive cropping seasons; use for more than four consecutive seasons, however, can cause a linear decrease in efficacy.
References
Nematicides
Ethyl esters
Phosphorodithioates
Insecticides
Thioesters
Sec-Butyl compounds | Cadusafos | [
"Chemistry"
] | 1,796 | [
"Thioesters",
"Functional groups",
"Phosphorodithioates"
] |
44,450,362 | https://en.wikipedia.org/wiki/Network%20medicine | Network medicine is the application of network science towards identifying, preventing, and treating diseases. This field focuses on using network topology and network dynamics towards identifying diseases and developing medical drugs. Biological networks, such as protein-protein interactions and metabolic pathways, are utilized by network medicine. Disease networks, which map relationships between diseases and biological factors, also play an important role in the field. Epidemiology is extensively studied using network science as well; social networks and transportation networks are used to model the spreading of disease across populations. Network medicine is a medically focused area of systems biology.
Background
The term "network medicine" was introduced by Albert-László Barabási in an the article "Network Medicine – From Obesity to the 'Diseasome, published in The New England Journal of Medicine, in 2007. Barabási states that biological systems, similarly to social and technological systems, contain many components that are connected in complicated relationships but are organized by simple principles. Relaying on the tools and principles of network theory, the organizing principles can be analyzed by representing systems as complex networks, which are collections of nodes linked together by a particular biological or molecular relationship. For networks pertaining to medicine, nodes represent biological factors (biomolecules, diseases, phenotypes, etc.) and links (edges) represent their relationships (physical interactions, shared metabolic pathway, shared gene, shared trait, etc.).
Barabási suggested that understanding human disease requires us to focus on three key networks: the metabolic network, the disease network, and the social network. Network medicine is based on the idea that understanding the complexity of gene regulation, metabolic reactions, and protein-protein interactions, and representing these as complex networks, will shed light on the causes and mechanisms of diseases. It is possible, for example, to infer a bipartite graph representing the connections of diseases to their associated genes using the OMIM database. The projection onto the diseases, called the human disease network (HDN), is a network of diseases connected to each other if they share a common gene. Using the HDN, diseases can be classified and analyzed through the genetic relationships between them. Network medicine has proven to be a valuable tool in analyzing big biomedical data.
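The HDN projection described above can be sketched in a few lines with the networkx library; the disease–gene associations below are invented placeholders standing in for OMIM records.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical disease-gene associations standing in for OMIM records.
associations = [
    ("disease A", "GENE1"), ("disease A", "GENE2"),
    ("disease B", "GENE2"), ("disease C", "GENE3"),
    ("disease C", "GENE1"),
]

B = nx.Graph()
diseases = {d for d, _ in associations}
B.add_nodes_from(diseases, bipartite=0)
B.add_nodes_from({g for _, g in associations}, bipartite=1)
B.add_edges_from(associations)

# Project onto the disease side: two diseases are linked
# if they share at least one associated gene.
hdn = bipartite.projected_graph(B, diseases)
print(sorted(hdn.edges()))  # [('disease A', 'disease B'), ('disease A', 'disease C')]
```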
Research areas
Interactome
The whole set of molecular interactions in the human cell, also known as the interactome, can be used for disease identification and prevention. These networks have been technically classified as scale-free, disassortative, small-world networks, in which certain nodes display high betweenness centrality.
Protein-protein interactions have been mapped using proteins as nodes and their mutual interactions as links. These maps utilize databases such as BioGRID and the Human Protein Reference Database. The metabolic network encompasses the biochemical reactions in metabolic pathways, connecting two metabolites if they are in the same pathway. Researchers have used databases such as KEGG to map these networks. Other networks include cell signaling networks, gene regulatory networks, and RNA networks.
Using interactome networks, one can discover and classify diseases, as well as develop treatments through knowledge of a disease's associations and their roles in the networks. One observation is that diseases can be classified not by their principal phenotypes (pathophenotypes) but by their disease module, which is a neighborhood or group of components in the interactome that, if disrupted, results in a specific pathophenotype. Disease modules can be used in a variety of ways, such as predicting disease genes that have not been discovered yet. Therefore, network medicine looks to identify the disease module for a specific pathophenotype using clustering algorithms.
Diseasome
Human disease networks, also called the diseasome, are networks in which the nodes are diseases and the links represent the strength of correlation between them. This correlation is commonly quantified based on the cellular components that two diseases share. The first-published human disease network (HDN) looked at genes, finding that many of the disease-associated genes are non-essential genes, as these are the genes that do not completely disrupt the network and can be passed down through generations. Metabolic disease networks (MDN), in which two diseases are connected by a shared metabolite or metabolic pathway, have also been extensively studied and are especially relevant in the case of metabolic disorders.
Three representations of the diseasome are:
Shared gene formalism states that if a gene is linked to two different disease phenotypes, then the two diseases likely have a common genetic origin (genetic disorders).
Shared metabolic pathway formalism states that if a metabolic pathway is linked to two different diseases, then the two diseases likely have a shared metabolic origin (metabolic disorders).
Disease comorbidity formalism uses phenotypic disease networks (PDN), where two diseases are linked if the observed comorbidity between their phenotypes exceeds a predefined threshold. This does not look at the mechanism of action of diseases, but captures disease progression and how highly connected diseases correlate to higher mortality rates.
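As an illustration of the comorbidity formalism, the following sketch links two diseases when their relative risk of co-occurrence in a set of patient records exceeds a threshold. The records, the relative-risk measure and the threshold are illustrative choices, not the specific quantification used in published PDNs.

```python
import itertools

# Hypothetical patient records: each is the set of diagnoses for one patient.
records = [
    {"hypertension", "diabetes"},
    {"hypertension", "diabetes", "obesity"},
    {"asthma"},
    {"hypertension", "obesity"},
]
N = len(records)

def relative_risk(d1, d2):
    """Comorbidity strength as relative risk of co-occurrence."""
    c12 = sum(1 for r in records if d1 in r and d2 in r)
    i1 = sum(1 for r in records if d1 in r)
    i2 = sum(1 for r in records if d2 in r)
    return c12 * N / (i1 * i2) if i1 and i2 else 0.0

diseases = set().union(*records)
threshold = 1.0   # link diseases that co-occur more than expected by chance
pdn_edges = [(d1, d2)
             for d1, d2 in itertools.combinations(sorted(diseases), 2)
             if relative_risk(d1, d2) > threshold]
print(pdn_edges)  # [('diabetes', 'hypertension'), ('hypertension', 'obesity')]
```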
Some disease networks connect diseases to associated factors outside the human cell. Networks of environmental and genetic etiological factors linked with shared diseases, called the "etiome", can also be used to assess the clustering of environmental factors in these networks and to understand the role of the environment on the interactome. The human symptom-disease network (HSDN), published in June 2014, showed that the symptoms of disease and disease-associated cellular components were strongly correlated, and that diseases of the same category tend to form highly connected communities with respect to their symptoms.
Pharmacology
Network pharmacology is a developing field based in systems pharmacology that looks at the effect of drugs on both the interactome and the diseasome. The topology of a biochemical reaction network determines the shape of the drug dose-response curve as well as the type of drug-drug interactions, and thus can help in designing efficient and safe therapeutic strategies. In addition, the drug-target network (DTN) can play an important role in understanding the mechanisms of action of approved and experimental drugs. The network theory view of pharmaceuticals is based on the effect of the drug in the interactome, especially the region that the drug target occupies. Combination therapy for a complex disease (polypharmacology) is suggested in this field since one active pharmaceutical ingredient (API) aimed at one target may not affect the entire disease module. The concept of disease modules can be used to aid in drug discovery, drug design, and the development of biomarkers for disease detection. There are a variety of ways of identifying drugs using network pharmacology; a simple example is the "guilt by association" method, which states that if two diseases are treated by the same drug, a drug that treats one disease may treat the other. Drug repurposing, drug-drug interactions and drug side-effects have also been studied in this field. The next iteration of network pharmacology used an entirely different disease definition: dysfunction in signaling modules derived from protein-protein interaction modules. The latter, as well as the interactome, had many conceptual shortcomings, e.g., each protein appears only once in the interactome, whereas in reality one protein can occur in different contexts and different cellular locations. Such signaling modules are therapeutically best targeted at several sites, which is now the new and clinically applied definition of network pharmacology. To achieve higher than current precision, patients must not be selected solely on descriptive phenotypes but also based on diagnostics that detect the module dysregulation. Moreover, such mechanism-based network pharmacology has the advantage that the drugs used within one module are highly synergistic, which allows the dose of each drug to be reduced, in turn reducing the potential of these drugs acting on other proteins outside the module and hence the chance of unwanted side effects.
Network epidemics
Network epidemics applies network science to existing epidemic models, as many transportation networks and social networks play a role in the spread of disease. Social networks have been used to assess the role of social ties in the spread of obesity in populations. Epidemic models and concepts, such as spreading and contact tracing, have been adapted for use in network analysis. These models can inform public health policies, for example, to implement strategies such as targeted immunization, and have recently been used to model the spread of the Ebola virus epidemic in West Africa across countries and continents.
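A minimal sketch of such an adaptation is a discrete-time SIR (susceptible–infected–recovered) process running on a contact network; the graph model and all parameters below are illustrative, not fitted to any real outbreak.

```python
import random
import networkx as nx

random.seed(0)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.05)  # small-world contact network
beta, gamma = 0.1, 0.05                          # infection / recovery probabilities

state = {node: "S" for node in G}
state[0] = "I"                                   # index case

for step in range(200):
    infected = [n for n, s in state.items() if s == "I"]
    if not infected:
        break
    for n in infected:
        for nb in G.neighbors(n):                # disease spreads along ties
            if state[nb] == "S" and random.random() < beta:
                state[nb] = "I"
        if random.random() < gamma:
            state[n] = "R"

ever = sum(1 for s in state.values() if s in ("I", "R"))
print(f"{ever} of {G.number_of_nodes()} nodes were ever infected")
```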
Drug prescription networks (DPNs)
Recently, some researchers have represented medication use in the form of networks. The nodes in these networks represent medications and the edges represent some sort of relationship between these medications. Cavallo et al. (2013) described the topology of a co-prescription network to demonstrate which drug classes are most co-prescribed. Bazzoni et al. (2015) concluded that the DPNs of co-prescribed medications are dense, highly clustered, modular and assortative. Askar et al. (2021) created a network of severe drug-drug interactions (DDIs), showing that it consisted of many clusters.
Other networks
The development of organs and other biological systems can be modelled as network structures in which the clinical (e.g., radiographic, functional) characteristics are represented as nodes and the relationships between these characteristics are represented as the links among such nodes. Therefore, it is possible to use networks to model how organ systems dynamically interact.
Educational and clinical implementation
The Channing Division of Network Medicine at Brigham and Women's Hospital was created in 2012 to study, reclassify, and develop treatments for complex diseases using network science and systems biology. It currently involves more than 80 Harvard Medical School (HMS) faculty and focuses on three areas:
Chronic Disease Epidemiology uses genomics and metabolomics in large, long-term epidemiology studies, such as the Nurses' Health Study.
Systems Genetics & Genomics focuses on complex respiratory diseases, specifically COPD and asthma, in smaller population studies.
Systems Pathology uses multidisciplinary approaches, including control theory, dynamical systems, and combinatorial optimization, to understand complex diseases and guide biomarker design.
Massachusetts Institute of Technology offers an undergraduate course called "Network Medicine: Using Systems Biology and Signaling Networks to Create Novel Cancer Therapeutics". Also, Harvard Catalyst (The Harvard Clinical and Translational Science Center) offers a three-day course entitled "Introduction to Network Medicine", open to clinical and science professionals with doctorate degrees.
Current worldwide efforts in network medicine are coordinated by the Network Medicine Institute and Global Alliance, representing 33 leading universities and institutions around the world committed to improving global health.
See also
Biological network
Biological network inference
Bioinformatics
Complex network
Glossary of graph theory
Graph theory
Graphical models
Human disease network
Interactome
Metabolic network
Network dynamics
Network science
Network theory
Network topology
Pharmacology
Systems biology
Systems pharmacology
Targeted immunization strategies
References
Network theory | Network medicine | [
"Mathematics"
] | 2,238 | [
"Network theory",
"Mathematical relations",
"Graph theory"
] |
44,450,755 | https://en.wikipedia.org/wiki/Lysophosphatidylinositol | Lysophosphatidylinositol (LPI, lysoPI), or L-α-lysophosphatidylinositol, is an endogenous lysophospholipid and endocannabinoid neurotransmitter. LPI, along with its 2-arachidonoyl- derivative, 2-arachidonoyl lysophosphatidylinositol (2-ALPI), have been proposed as the endogenous ligands of GPR55.
See also
Phosphatidylinositol
Cannabinoid receptor
References
Endocannabinoids
Neurotransmitters
Phospholipids | Lysophosphatidylinositol | [
"Chemistry",
"Biology"
] | 151 | [
"Phospholipids",
"Inositol",
"Biotechnology stubs",
"Neurotransmitters",
"Signal transduction",
"Biochemistry stubs",
"Biochemistry",
"Neurochemistry"
] |
44,455,145 | https://en.wikipedia.org/wiki/Automation%20engineering | Automation engineering is the provision of automated solutions to physical activities and industries.
Automation engineer
Automation engineers are experts who have the knowledge and ability to design, create, develop and manage machines and systems, for example, factory automation, process automation and warehouse automation.
Automation technicians are also involved.
Scope
Automation engineering is the integration of standard engineering fields. It involves the automatic control of various systems and machines in order to reduce human effort and time and to increase accuracy. Automation engineers design and service electromechanical devices and systems for high-speed robotics and programmable logic controllers (PLCs).
Work and career after graduation
Graduates can work for both government and private-sector entities involved in industrial production and for companies that create and use automation systems, for example in the paper industry, automotive industry, metallurgical industry, food and agricultural industry, water treatment, and oil & gas sectors such as refineries, rolling mills and power plants.
Job description
Automation engineers can design, program, simulate and test automated machinery and processes. They are usually employed in industries such as the energy sector, car manufacturing facilities, food processing plants, and robotics. Automation engineers are responsible for creating detailed design specifications and other documents; developing automation solutions based on the specific requirements of the process involved, while conforming to international standards like IEC 61508, local standards, and other process-specific guidelines and specifications; and simulating, testing and commissioning electronic equipment for automation.
See also
Automation
Artificial intelligence
Control engineering
Mechatronics engineering
References
Engineering disciplines
Knowledge economy
Automation software | Automation engineering | [
"Engineering"
] | 305 | [
"Control engineering",
"nan",
"Automation software",
"Automation"
] |
44,455,674 | https://en.wikipedia.org/wiki/Conflict-free%20replicated%20data%20type | In distributed computing, a conflict-free replicated data type (CRDT) is a data structure that is replicated across multiple computers in a network, with the following features:
The application can update any replica independently, concurrently and without coordinating with other replicas.
An algorithm (itself part of the data type) automatically resolves any inconsistencies that might occur.
Although replicas may have different state at any particular point in time, they are guaranteed to eventually converge.
The CRDT concept was formally defined in 2011 by Marc Shapiro, Nuno Preguiça, Carlos Baquero and Marek Zawirski. Development was initially motivated by collaborative text editing and mobile computing. CRDTs have also been used in online chat systems, online gambling, and in the SoundCloud audio distribution platform. The NoSQL distributed databases Redis, Riak and Cosmos DB have CRDT data types.
Background
Concurrent updates to multiple replicas of the same data, without coordination between the computers hosting the replicas, can result in inconsistencies between the replicas, which in the general case may not be resolvable. Restoring consistency and data integrity when there are conflicts between updates may require some or all of the updates to be entirely or partially dropped.
Accordingly, much of distributed computing focuses on the problem of how to prevent concurrent updates to replicated data. But another possible approach is optimistic replication, where all concurrent updates are allowed to go through, with inconsistencies possibly created, and the results are merged or "resolved" later. In this approach, consistency between the replicas is eventually re-established via "merges" of differing replicas. While optimistic replication might not work in the general case, there is a significant and practically useful class of data structures, CRDTs, where it does work — where it is always possible to merge or resolve concurrent updates on different replicas of the data structure without conflicts. This makes CRDTs ideal for optimistic replication.
As an example, a one-way Boolean event flag is a trivial CRDT: one bit, with a value of true or false. True means some particular event has occurred at least once. False means the event has not occurred. Once set to true, the flag cannot be set back to false (an event having occurred cannot un-occur). The resolution method is "true wins": when merging a replica where the flag is true (that replica has observed the event), and another one where the flag is false (that replica hasn't observed the event), the resolved result is true — the event has been observed.
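A minimal Python sketch of this flag (the class and method names are our own):

```python
class EventFlag:
    """One-way Boolean event flag: a trivial state-based CRDT (a sketch)."""
    def __init__(self):
        self.seen = False

    def set(self):
        self.seen = True      # an observed event cannot un-occur

    def merge(self, other):
        # "true wins": logical OR is commutative, associative, and
        # idempotent, so replicas converge regardless of merge order.
        self.seen = self.seen or other.seen
```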
Types of CRDTs
There are two approaches to CRDTs, both of which can provide strong eventual consistency: state-based CRDTs and operation-based CRDTs.
State-based CRDTs
State-based CRDTs (also called convergent replicated data types, or CvRDTs) are defined by two types, a type for local states and a type for actions on the state, together with three functions: a function to produce an initial state, a merge function of states, and a function to apply an action to update a state. State-based CRDTs simply send their full local state to other replicas on every update, where the received new state is then merged into the local state. To ensure eventual convergence the functions should fulfill the following properties:
The merge function should compute the join for any pair of replica states, and should form a semilattice with the initial state as the neutral element. In particular this means that the merge function must be commutative, associative, and idempotent. The intuition behind commutativity, associativity and idempotence is that these properties make the CRDT invariant under message re-ordering and duplication. Furthermore, the update function must be monotone with regard to the partial order defined by said semilattice.
Delta state CRDTs (or simply Delta CRDTs) are optimized state-based CRDTs where only recently applied changes to a state are disseminated instead of the entire state.
Operation-based CRDTs
Operation-based CRDTs (also called commutative replicated data types, or CmRDTs) are defined without a merge function. Instead of transmitting states, the update actions are transmitted directly to replicas and applied. For example, an operation-based CRDT of a single integer might broadcast the operations (+10) or (−20). The application of operations should still be commutative and associative. However, instead of requiring that the application of operations be idempotent, stronger assumptions are placed on the communications infrastructure: all operations must be delivered to the other replicas without duplication.
Pure operation-based CRDTs are a variant of operation-based CRDTs that reduces the metadata size.
Comparison
The two alternatives are theoretically equivalent, as each can emulate the other.
However, there are practical differences.
State-based CRDTs are often simpler to design and to implement; their only requirement from the communication substrate is some kind of gossip protocol.
Their drawback is that the entire state of every CRDT must be transmitted eventually to every other replica, which may be costly.
In contrast, operation-based CRDTs transmit only the update operations, which are typically small.
However, operation-based CRDTs require guarantees from the communication middleware; that the operations are not dropped or duplicated when transmitted to the other replicas, and that they are delivered in causal order.
While operation-based CRDTs place more requirements on the protocol for transmitting operations between replicas, they use less bandwidth than state-based CRDTs when the number of transactions is small in comparison to the size of internal state. However, since the state-based CRDT merge function is associative, merging with the state of some replica yields all previous updates to that replica. Gossip protocols work well for propagating state-based CRDT state to other replicas while reducing network use and handling topology changes.
Some lower bounds on the storage complexity of state-based CRDTs are known.
Known CRDTs
G-Counter (Grow-only Counter)
This state-based CRDT implements a counter for a cluster of n nodes. Each node in the cluster is assigned an ID from 0 to n - 1, which is retrieved with a call to myId(). Thus each node is assigned its own slot in the array P, which it increments locally. Updates are propagated in the background, and merged by taking the max() of every element in P. The compare function is included to illustrate a partial order on the states. The merge function is commutative, associative, and idempotent. The update function monotonically increases the internal state according to the compare function. This is thus a correctly defined state-based CRDT and will provide strong eventual consistency. The operations-based CRDT equivalent broadcasts increment operations as they are received.
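The pseudocode this description refers to (the array P, myId(), compare and merge) is not reproduced in this text; the following is a minimal Python sketch consistent with it:

```python
class GCounter:
    """State-based grow-only counter for a cluster of n nodes (a sketch)."""
    def __init__(self, n, my_id):
        self.P = [0] * n          # one slot per node
        self.my_id = my_id        # this replica's index, 0..n-1 (myId())

    def increment(self):
        self.P[self.my_id] += 1   # each node updates only its own slot

    def value(self):
        return sum(self.P)        # query: total across all slots

    def compare(self, other):
        # partial order on states: self <= other iff every slot is <=
        return all(a <= b for a, b in zip(self.P, other.P))

    def merge(self, other):
        # element-wise max: commutative, associative, and idempotent
        self.P = [max(a, b) for a, b in zip(self.P, other.P)]

# Two replicas incrementing independently and then converging:
a, b = GCounter(2, 0), GCounter(2, 1)
a.increment(); b.increment(); b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3
```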
PN-Counter (Positive-Negative Counter)
A common strategy in CRDT development is to combine multiple CRDTs to make a more complex CRDT. In this case, two G-Counters are combined to create a data type supporting both increment and decrement operations. The "P" G-Counter counts increments; and the "N" G-Counter counts decrements. The value of the PN-Counter is the value of the P counter minus the value of the N counter. Merge is handled by letting the merged P counter be the merge of the two P G-Counters, and similarly for N counters. Note that the CRDT's internal state must increase monotonically, even though its external state as exposed through query can return to previous values.
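A minimal sketch reusing the GCounter class above:

```python
class PNCounter:
    """Two G-Counters: P counts increments, N counts decrements (a sketch)."""
    def __init__(self, n, my_id):
        self.p = GCounter(n, my_id)
        self.n = GCounter(n, my_id)

    def increment(self):
        self.p.increment()

    def decrement(self):
        self.n.increment()    # internal state still grows monotonically

    def value(self):
        # external value may go down even though internal state only grows
        return self.p.value() - self.n.value()

    def merge(self, other):
        self.p.merge(other.p)
        self.n.merge(other.n)
```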
G-Set (Grow-only Set)
The G-Set (grow-only set) is a set which only allows adds. An element, once added, cannot be removed. The merger of two G-Sets is their union.
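A minimal sketch:

```python
class GSet:
    """Grow-only set: adds only; merge is set union (a sketch)."""
    def __init__(self):
        self.items = set()

    def add(self, e):
        self.items.add(e)

    def query(self, e):
        return e in self.items

    def merge(self, other):
        # union is commutative, associative, and idempotent
        self.items |= other.items
```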
2P-Set (Two-Phase Set)
Two G-Sets (grow-only sets) are combined to create the 2P-set. With the addition of a remove set (called the "tombstone" set), elements can be added and also removed. Once removed, an element cannot be re-added; that is, once an element e is in the tombstone set, query will never again return True for that element. The 2P-set uses "remove-wins" semantics, so remove(e) takes precedence over add(e).
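A minimal sketch composed from the GSet class above:

```python
class TwoPSet:
    """Two-phase set: removed elements can never be re-added (a sketch)."""
    def __init__(self):
        self.added = GSet()
        self.removed = GSet()     # the "tombstone" set

    def add(self, e):
        self.added.add(e)

    def remove(self, e):
        if self.added.query(e):   # only observed elements can be removed
            self.removed.add(e)

    def query(self, e):
        # remove-wins: a tombstoned element is gone forever
        return self.added.query(e) and not self.removed.query(e)

    def merge(self, other):
        self.added.merge(other.added)
        self.removed.merge(other.removed)
```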
LWW-Element-Set (Last-Write-Wins-Element-Set)
LWW-Element-Set is similar to 2P-Set in that it consists of an "add set" and a "remove set", with a timestamp for each element. Elements are added to an LWW-Element-Set by inserting the element into the add set, with a timestamp. Elements are removed from the LWW-Element-Set by being added to the remove set, again with a timestamp. An element is a member of the LWW-Element-Set if it is in the add set, and either not in the remove set, or in the remove set but with an earlier timestamp than the latest timestamp in the add set. Merging two replicas of the LWW-Element-Set consists of taking the union of the add sets and the union of the remove sets. When timestamps are equal, the "bias" of the LWW-Element-Set comes into play. A LWW-Element-Set can be biased towards adds or removals. The advantage of LWW-Element-Set over 2P-Set is that, unlike 2P-Set, LWW-Element-Set allows an element to be reinserted after having been removed.
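A minimal sketch in which timestamps are supplied by the caller and the bias is an explicit parameter (a real implementation would also need a timestamping scheme):

```python
class LWWElementSet:
    """Last-write-wins set keeping the latest timestamp per element (a sketch).

    bias="add" means an element is present when its add and remove
    timestamps are equal; bias="remove" gives removals precedence on ties.
    """
    def __init__(self, bias="add"):
        self.add_set = {}      # element -> latest add timestamp
        self.remove_set = {}   # element -> latest remove timestamp
        self.bias = bias

    def add(self, e, ts):
        self.add_set[e] = max(ts, self.add_set.get(e, ts))

    def remove(self, e, ts):
        self.remove_set[e] = max(ts, self.remove_set.get(e, ts))

    def query(self, e):
        if e not in self.add_set:
            return False
        if e not in self.remove_set:
            return True
        if self.add_set[e] == self.remove_set[e]:
            return self.bias == "add"
        return self.add_set[e] > self.remove_set[e]

    def merge(self, other):
        # keep the latest timestamp per element in each set
        for e, ts in other.add_set.items():
            self.add_set[e] = max(ts, self.add_set.get(e, ts))
        for e, ts in other.remove_set.items():
            self.remove_set[e] = max(ts, self.remove_set.get(e, ts))
```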
OR-Set (Observed-Remove Set)
OR-Set resembles LWW-Element-Set, but using unique tags instead of timestamps. For each element in the set, a list of add-tags and a list of remove-tags are maintained. An element is inserted into the OR-Set by having a new unique tag generated and added to the add-tag list for the element. Elements are removed from the OR-Set by having all the tags in the element's add-tag list added to the element's remove-tag (tombstone) list. To merge two OR-Sets, for each element, let its add-tag list be the union of the two add-tag lists, and likewise for the two remove-tag lists. An element is a member of the set if and only if the add-tag list less the remove-tag list is nonempty. An optimization that eliminates the need for maintaining a tombstone set is possible; this avoids the potentially unbounded growth of the tombstone set. The optimization is achieved by maintaining a vector of timestamps for each replica.
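A minimal sketch of the unoptimized, tombstone-keeping variant, using random UUIDs as the unique tags:

```python
import uuid

class ORSet:
    """Observed-remove set using unique tags instead of timestamps (a sketch)."""
    def __init__(self):
        self.add_tags = {}      # element -> set of unique add tags
        self.remove_tags = {}   # element -> set of tombstoned tags

    def add(self, e):
        self.add_tags.setdefault(e, set()).add(uuid.uuid4())

    def remove(self, e):
        # tombstone every add tag observed so far; concurrent adds on
        # other replicas carry fresh tags and therefore survive
        observed = self.add_tags.get(e, set())
        self.remove_tags.setdefault(e, set()).update(observed)

    def query(self, e):
        live = self.add_tags.get(e, set()) - self.remove_tags.get(e, set())
        return len(live) > 0

    def merge(self, other):
        for e, tags in other.add_tags.items():
            self.add_tags.setdefault(e, set()).update(tags)
        for e, tags in other.remove_tags.items():
            self.remove_tags.setdefault(e, set()).update(tags)

s = ORSet()
s.add("x"); s.remove("x")
assert not s.query("x")
s.add("x")                 # re-adding after removal works, unlike 2P-Set
assert s.query("x")
```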
Sequence CRDTs
A sequence, list, or ordered set CRDT can be used to build a collaborative real-time editor, as an alternative to operational transformation (OT).
Some known Sequence CRDTs are Treedoc,
RGA, Woot,
Logoot, and LSEQ.
CRATE is a decentralized real-time editor built on top of LSEQSplit (an extension of LSEQ) and runnable on a network of browsers using WebRTC.
LogootSplit was proposed as an extension of Logoot in order to reduce the metadata for sequence CRDTs. MUTE is an online web-based peer-to-peer real-time collaborative editor relying on the LogootSplit algorithm.
Industrial sequence CRDTs, including open-source ones, are known to outperform academic implementations due to optimizations and a more realistic testing methodology. The most popular example is the Yjs CRDT, a pioneer in using a plain list instead of a tree (à la Kleppmann's Automerge).
Industry use
Fluid Framework is an open-source collaborative platform built by Microsoft that provides both server reference implementations and client-side SDKs for creating modern real-time web applications using CRDTs.
Nimbus Note is a collaborative note-taking application that uses the Yjs CRDT for collaborative editing.
Redis is a distributed, highly available, and scalable in-memory database with a "CRDT-enabled database" feature.
SoundCloud open-sourced Roshi, a LWW-element-set CRDT for the SoundCloud stream implemented on top of Redis.
Riak is a distributed NoSQL key-value data store based on CRDTs. League of Legends uses the Riak CRDT implementation for its in-game chat system, which handles 7.5 million concurrent users and 11,000 messages per second.
Bet365 stores hundreds of megabytes of data in the Riak implementation of OR-Set.
TomTom employs CRDTs to synchronize navigation data between the devices of a user.
Phoenix, a web framework written in Elixir, uses CRDTs to support real-time multi-node information sharing in version 1.2.
Facebook implements CRDTs in their Apollo low-latency "consistency at scale" database.
Facebook uses CRDTs in their FlightTracker system for managing the Facebook graph internally.
Teletype for Atom employs CRDTs to enable developers to share their workspace with team members and collaborate on code in real time.
Apple implements CRDTs in the Notes app for syncing offline edits between multiple devices.
Novell, Inc. introduced a state-based CRDT with "loosely consistent" directory replication (NetWare Directory Services), included in NetWare 4.0 in 1995. The successor product, eDirectory, delivered improvements to the replication process.
See also
Data synchronization
Collaborative real-time editors
Consistency models
Optimistic replication
Operational transformation
Self-stabilizing algorithms
References
External links
A collection of resources and papers on CRDTs
"Strong Eventual Consistency and Conflict-free Replicated Data Types" (A talk on CRDTs) by Marc Shapiro
Readings in conflict-free replicated data types by Christopher Meiklejohn
CAP theorem and CRDTs: CAP 12 years later. How the rules have changed by Eric Brewer
Distributed data structures
Distributed algorithms
Fault-tolerant computer systems | Conflict-free replicated data type | [
"Technology",
"Engineering"
] | 2,993 | [
"Fault-tolerant computer systems",
"Reliability engineering",
"Computer systems"
] |
44,456,093 | https://en.wikipedia.org/wiki/Cohomological%20descent | In algebraic geometry, a cohomological descent is, roughly, a "derived" version of a fully faithful descent in the classical descent theory. This point is made precise by the below: the following are equivalent: in an appropriate setting, given a map a from a simplicial space X to a space S,
is fully faithful.
The natural transformation is an isomorphism.
The map a is then said to be a morphism of cohomological descent.
The treatment in SGA uses a lot of topos theory. Conrad's notes give a more down-to-earth exposition.
See also
hypercovering, of which a cohomological descent is a generalization
References
SGA4 Vbis
P. Deligne, Théorie des Hodge III, Publ. Math. IHÉS 44 (1975), pp. 6–77.
External links
http://ncatlab.org/nlab/show/cohomological+descent
Algebraic geometry | Cohomological descent | [
"Mathematics"
] | 202 | [
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Algebraic geometry"
] |
44,457,563 | https://en.wikipedia.org/wiki/O-GlcNAc | O-GlcNAc (short for O-linked GlcNAc or O-linked β-N-acetylglucosamine) is a reversible enzymatic post-translational modification that is found on serine and threonine residues of nucleocytoplasmic proteins. The modification is characterized by a β-glycosidic bond between the hydroxyl group of serine or threonine side chains and N-acetylglucosamine (GlcNAc). O-GlcNAc differs from other forms of protein glycosylation: (i) O-GlcNAc is not elongated or modified to form more complex glycan structures, (ii) O-GlcNAc is almost exclusively found on nuclear and cytoplasmic proteins rather than membrane proteins and secretory proteins, and (iii) O-GlcNAc is a highly dynamic modification that turns over more rapidly than the proteins which it modifies. O-GlcNAc is conserved across metazoans.
Due to the dynamic nature of O-GlcNAc and its presence on serine and threonine residues, O-GlcNAcylation is similar to protein phosphorylation in some respects. While there are roughly 500 kinases and 150 phosphatases that regulate protein phosphorylation in humans, there are only 2 enzymes that regulate the cycling of O-GlcNAc: O-GlcNAc transferase (OGT) and O-GlcNAcase (OGA) catalyze the addition and removal of O-GlcNAc, respectively. OGT utilizes UDP-GlcNAc as the donor sugar for sugar transfer.
First reported in 1984, this post-translational modification has since been identified on over 9,000 proteins in H. sapiens. Numerous functional roles for O-GlcNAcylation have been reported including crosstalking with serine/threonine phosphorylation, regulating protein-protein interactions, altering protein structure or enzyme activity, changing protein subcellular localization, and modulating protein stability and degradation. Numerous components of the cell's transcription machinery have been identified as being modified by O-GlcNAc, and many studies have reported links between O-GlcNAc, transcription, and epigenetics. Many other cellular processes are influenced by O-GlcNAc such as apoptosis, the cell cycle, and stress responses. As UDP-GlcNAc is the final product of the hexosamine biosynthetic pathway, which integrates amino acid, carbohydrate, fatty acid, and nucleotide metabolism, it has been suggested that O-GlcNAc acts as a "nutrient sensor" and responds to the cell's metabolic status. Dysregulation of O-GlcNAc has been implicated in many pathologies including Alzheimer's disease, cancer, diabetes, and neurodegenerative disorders.
Discovery
In 1984, the Hart lab was probing for terminal GlcNAc residues on the surfaces of thymocytes and lymphocytes. Bovine milk β-1,4-galactosyltransferase, which reacts with terminal GlcNAc residues, was used to perform radiolabeling with UDP-[3H]galactose. β-elimination of serine and threonine residues demonstrated that most of the [3H]galactose was attached to proteins O-glycosidically; chromatography revealed that the major β-elimination product was Galβ1-4GlcNAcitol. Insensitivity to peptide N-glycosidase treatment provided additional evidence for O-linked GlcNAc. Permeabilizing cells with detergent prior to radiolabeling greatly increased the amount of [3H]galactose incorporated into Galβ1-4GlcNAcitol, leading the authors to conclude that most of the O-linked GlcNAc monosaccharide residues were intracellular.
Mechanism
O-GlcNAc is generally a dynamic modification that can be cycled on and off various proteins. Some residues are thought to be constitutively modified by O-GlcNAc. The O-GlcNAc modification is installed by OGT in a sequential bi-bi mechanism where the donor sugar, UDP-GlcNAc, binds to OGT first followed by the substrate protein. The O-GlcNAc modification is removed by OGA in a hydrolysis mechanism involving anchimeric assistance (substrate-assisted catalysis) to yield the unmodified protein and GlcNAc. While crystal structures have been reported for both OGT and OGA, the exact mechanisms by which OGT and OGA recognize substrates have not been completely elucidated. Unlike N-linked glycosylation, for which glycosylation occurs in a specific consensus sequence (Asn-X-Ser/Thr, where X is any amino acid except Pro), no definitive consensus sequence has been identified for O-GlcNAc. Consequently, predicting sites of O-GlcNAc modification is challenging, and identifying modification sites generally requires mass spectrometry methods. For OGT, studies have shown that substrate recognition is regulated by a number of factors including aspartate and asparagine ladder motifs in the lumen of the superhelical TPR domain, active site residues, and adaptor proteins. As crystal structures have shown that OGT requires its substrate to be in an extended conformation, it has been proposed that OGT has a preference for flexible substrates. In in vitro kinetic experiments measuring OGT and OGA activity on a panel of protein substrates, kinetic parameters for OGT were shown to be variable between various proteins while kinetic parameters for OGA were relatively constant between various proteins. This result suggested that OGT is the "senior partner" in regulating O-GlcNAc and OGA primarily recognizes substrates via the presence of O-GlcNAc rather than the identity of the modified protein.
Detection and characterization
Several methods exist to detect the presence of O-GlcNAc and characterize the specific residues modified.
Lectins
Wheat germ agglutinin, a plant lectin, is able to recognize terminal GlcNAc residues and is thus often used for detection of O-GlcNAc. This lectin has been applied in lectin affinity chromatography for the enrichment and detection of O-GlcNAc.
Antibodies
Pan-O-GlcNAc antibodies that recognize the O-GlcNAc modification largely irrespective of the modified protein's identity are commonly used. These include RL2, an IgG antibody raised against O-GlcNAcylated nuclear pore complex proteins, and CTD110.6, an IgM antibody raised against an immunogenic peptide with a single serine O-GlcNAc modification. Other O-GlcNAc-specific antibodies have been reported and demonstrated to have some dependence on the identity of the modified protein.
Metabolic labeling
Many metabolic chemical reporters have been developed to identify O-GlcNAc. Metabolic chemical reporters are generally sugar analogues that bear an additional chemical moiety allowing for additional reactivity. For example, peracetylated GlcNAz (Ac4GlcNAz) is a cell-permeable azido sugar that is de-esterified intracellularly by esterases to GlcNAz and converted to UDP-GlcNAz in the hexosamine salvage pathway. UDP-GlcNAz can be utilized as a sugar donor by OGT to yield the O-GlcNAz modification. The presence of the azido sugar can then be visualized via alkyne-containing bioorthogonal chemical probes in an azide-alkyne cycloaddition reaction. These probes can incorporate easily identifiable tags such as the FLAG peptide, biotin, and dye molecules. Mass tags based on polyethylene glycol (PEG) have also been used to measure O-GlcNAc stoichiometry. Conjugation of 5 kDa PEG molecules leads to a mass shift for modified proteins: more heavily O-GlcNAcylated proteins will have multiple PEG molecules and thus migrate more slowly in gel electrophoresis. Other metabolic chemical reporters bearing azides or alkynes (generally at the 2 or 6 positions) have been reported. Instead of GlcNAc analogues, GalNAc analogues may also be used, as UDP-GalNAc is in equilibrium with UDP-GlcNAc in cells due to the action of UDP-galactose-4'-epimerase (GALE). Ac4GalNAz shows enhanced labeling of O-GlcNAc versus Ac4GlcNAz, possibly due to a bottleneck in UDP-GlcNAc pyrophosphorylase processing of GlcNAz-1-P to UDP-GlcNAz. Ac3GlcN-β-Ala-NBD-α-1-P(Ac-SATE)2, a metabolic chemical reporter that is processed intracellularly to a fluorophore-labeled UDP-GlcNAc analogue, has been shown to achieve one-step fluorescent labeling of O-GlcNAc in live cells. Metabolic labeling may also be used to identify binding partners of O-GlcNAcylated proteins. The N-acetyl group may be elongated to incorporate a diazirine moiety. Treatment of cells with peracetylated, phosphate-protected Ac3GlcNDAz-1-P(Ac-SATE)2 leads to modification of proteins with O-GlcNDAz. UV irradiation then induces photocrosslinking between proteins bearing the O-GlcNDAz modification and interacting proteins.
Some issues have been identified with various metabolic chemical reporters, e.g., their use may inhibit the hexosamine biosynthetic pathway, they may not be recognized by OGA and are therefore not able to capture O-GlcNAc cycling, or they may be incorporated into glycosylation modifications besides O-GlcNAc, as seen in secreted proteins. Metabolic chemical reporters with chemical handles at the N-acetyl position may also label acetylated proteins, as the acetyl group may be hydrolyzed into acetate analogues that can be utilized for protein acetylation. Additionally, per-O-acetylated monosaccharides have been found to react with cysteines, leading to artificial S-glycosylation via an elimination-addition mechanism. Next-generation metabolic chemical reporters have been developed to overcome this off-target reactivity.
Chemoenzymatic labeling
Chemoenzymatic labeling provides an alternative strategy to incorporate handles for click chemistry. The Click-IT O-GlcNAc Enzymatic Labeling System, developed by the Hsieh-Wilson group and subsequently commercialized by Invitrogen, utilizes a mutant GalT Y289L enzyme that is able to transfer azidogalactose (GalNAz) onto O-GlcNAc. The presence of GalNAz (and therefore also O-GlcNAc) can be detected with various alkyne-containing probes with identifiable tags such as biotin, dye molecules, and PEG.
Förster resonance energy transfer biosensor
An engineered protein biosensor has been developed that can detect changes in O-GlcNAc levels using Förster resonance energy transfer. This sensor consists of four components linked together in the following order: cyan fluorescent protein (CFP), an O-GlcNAc binding domain (based on GafD, a lectin sensitive to terminal β-O-GlcNAc), a CKII peptide that is a known OGT substrate, and yellow fluorescent protein (YFP). Upon O-GlcNAcylation of the CKII peptide, the GafD domain binds the O-GlcNAc moiety, bringing the CFP and YFP domains into close proximity and generating a FRET signal. Generation of this signal is reversible and can be used to monitor O-GlcNAc dynamics in response to various treatments. This sensor may be genetically encoded and used in cells. Addition of a localization sequence allows for targeting of this O-GlcNAc sensor to the nucleus, cytoplasm, or plasma membrane.
Mass spectrometry
Biochemical approaches such as Western blotting may provide supporting evidence that a protein is modified by O-GlcNAc; mass spectrometry (MS) is able to provide definitive evidence as to the presence of O-GlcNAc. Glycoproteomic studies applying MS have contributed to the identification of proteins modified by O-GlcNAc.
As O-GlcNAc is substoichiometric and ion suppression occurs in the presence of unmodified peptides, an enrichment step is usually performed prior to mass spectrometry analysis. This may be accomplished using lectins, antibodies, or chemical tagging. The O-GlcNAc modification is labile under collision-induced fragmentation methods such as collision-induced dissociation (CID) and higher-energy collisional dissociation (HCD), so these methods in isolation are not readily applicable for O-GlcNAc site mapping. HCD generates fragment ions characteristic of N-acetylhexosamines that can be used to determine O-GlcNAcylation status. In order to facilitate site mapping with HCD, β-elimination followed by Michael addition with dithiothreitol (BEMAD) may be used to convert the labile O-GlcNAc modification into a more stable mass tag. For BEMAD mapping of O-GlcNAc, the sample must be treated with phosphatase, otherwise other serine/threonine post-translational modifications such as phosphorylation may be detected. Electron-transfer dissociation (ETD) is used for site mapping as ETD causes peptide backbone cleavage while leaving post-translational modifications such as O-GlcNAc intact.
Traditional proteomic studies perform tandem MS on the most abundant species in the full-scan mass spectra, prohibiting full characterization of lower-abundance species. One modern strategy for targeted proteomics uses isotopic labels, e.g., dibromide, to tag O-GlcNAcylated proteins. This method allows for algorithmic detection of low-abundance species, which are then sequenced by tandem MS. Directed tandem MS and targeted glycopeptide assignment allow for identification of O-GlcNAcylated peptide sequences. One example probe consists of a biotin affinity tag, an acid-cleavable silane, an isotopic recoding motif, and an alkyne. Unambiguous site mapping is possible for peptides with only one serine/threonine residue.
The general procedure for this isotope-targeted glycoproteomics (IsoTaG) method is the following:
Metabolically label O-GlcNAc to install O-GlcNAz onto proteins
Use click chemistry to link IsoTaG probe to O-GlcNAz
Use streptavidin beads to enrich for tagged proteins
Treat beads with trypsin to release non-modified peptides
Cleave isotopically recoded glycopeptides from beads using mild acid
Obtain a full-scan mass spectrum from isotopically recoded glycopeptides
Apply algorithm to detect unique isotope signature from probe
Perform tandem MS on the isotopically recoded species to obtain glycopeptide amino acid sequences
Search protein database for identified sequences
Other methodologies have been developed for quantitative profiling of O-GlcNAc using differential isotopic labeling. Example probes generally consist of a biotin affinity tag, a cleavable linker (acid- or photo-cleavable), a heavy or light isotopic tag, and an alkyne.
O-GlcNAc modification has also been recently reported on tyrosine residues, though these represent roughly 5% of all O-GlcNAc modifications.
Strategies for manipulating O-GlcNAc
Various chemical and genetic strategies have been developed to manipulate O-GlcNAc, both on a proteome-wide basis and on specific proteins.
Chemical methods
Small molecule inhibitors have been reported for both OGT and OGA that function in cells or in vivo. OGT inhibitors result in a global decrease of O-GlcNAc while OGA inhibitors result in a global increase of O-GlcNAc; these inhibitors are not able to modulate O-GlcNAc on specific proteins.
Inhibition of the hexosamine biosynthetic pathway is also able to decrease O-GlcNAc levels. For instance, glutamine analogues azaserine and 6-diazo-5-oxo-L-norleucine (DON) can inhibit GFAT, though these molecules may also non-specifically affect other pathways.
Protein synthesis
Expressed protein ligation has been used to prepare O-GlcNAc-modified proteins in a site-specific manner. Methods also exist for incorporating GlcNAc-modified serine, threonine, or cysteine during solid-phase peptide synthesis.
Genetic methods
Site-directed mutagenesis
Site-directed mutagenesis of O-GlcNAc-modified serine or threonine residues to alanine may be used to evaluate the function of O-GlcNAc at specific residues. As alanine's side chain is a methyl group and is thus not able to act as an O-GlcNAc site, this mutation effectively permanently removes O-GlcNAc at a specific residue. While serine/threonine phosphorylation may be modeled by mutagenesis to aspartate or glutamate, which have negatively charged carboxylate side chains, none of the 20 canonical amino acids sufficiently recapitulate the properties of O-GlcNAc. Mutagenesis to tryptophan has been used to mimic the steric bulk of O-GlcNAc, though tryptophan is much more hydrophobic than O-GlcNAc. Mutagenesis may also perturb other post-translational modifications, e.g., if a serine is alternatively phosphorylated or O-GlcNAcylated, alanine mutagenesis permanently eliminates the possibilities of both phosphorylation and O-GlcNAcylation.
S-GlcNAc
Mass spectrometry identified S-GlcNAc as a post-translational modification found on cysteine residues. In vitro experiments demonstrated that OGT could catalyze the formation of S-GlcNAc and that OGA is incapable of hydrolyzing S-GlcNAc. Though a previous report suggested that OGA is capable of hydrolyzing thioglycosides, this was only demonstrated on the aryl thioglycoside para-nitrophenol-S-GlcNAc; para-nitrothiophenol is a more activated leaving group than a cysteine residue. Recent studies have supported the use of S-GlcNAc as an enzymatically stable structural model of O-GlcNAc that can be incorporated through solid-phase peptide synthesis or site-directed mutagenesis.
Engineered OGT
Fusion constructs of a nanobody and TPR-truncated OGT allow for proximity-induced protein-specific O-GlcNAcylation in cells. The nanobody may be directed towards protein tags, e.g., GFP, that are fused to the target protein, or the nanobody may be directed towards endogenous proteins. For example, a nanobody recognizing a C-terminal EPEA sequence can direct OGT enzymatic activity to α-synuclein.
Functions of O-GlcNAc
Apoptosis
Apoptosis, a form of controlled cell death, has been suggested to be regulated by O-GlcNAc. In various cancers, elevated O-GlcNAc levels have been reported to suppress apoptosis. Caspase-3, caspase-8, and caspase-9 have been reported to be modified by O-GlcNAc. Caspase-8 is modified near its cleavage/activation sites; O-GlcNAc modification may block caspase-8 cleavage and activation by steric hindrance. Pharmacological lowering of O-GlcNAc with 5S-GlcNAc accelerated caspase activation while pharmacological raising of O-GlcNAc with thiamet-G inhibited caspase activation.
Epigenetics
Writers and Erasers
The proteins that regulate genetics are often categorized as writers, readers, and erasers, i.e., enzymes that install epigenetic modifications, proteins that recognize these modifications, and enzymes that remove these modifications. To date, O-GlcNAc has been identified on writer and eraser enzymes. O-GlcNAc is found in multiple locations on EZH2, the catalytic methyltransferase subunit of PRC2, and is thought to stabilize EZH2 prior to PRC2 complex formation and regulate di- and tri-methyltransferase activity. All three members of the ten-eleven translocation (TET) family of dioxygenases (TET1, TET2, and TET3) are known to be modified by O-GlcNAc. O-GlcNAc has been suggested to cause nuclear export of TET3, reducing its enzymatic activity by depleting it from the nucleus. O-GlcNAcylation of HDAC1 is associated with elevated activating phosphorylation of HDAC1.
Histone O-GlcNAcylation
Histone proteins, the primary protein component of chromatin, have been reported to be modified by O-GlcNAc, though other studies have not been able to detect histone O-GlcNAc. The presence of O-GlcNAc on histones has been suggested to affect gene transcription as well as other histone marks such as acetylation and monoubiquitination. TET2 has been reported to interact with the TPR domain of OGT and facilitate recruitment of OGT to histones. Phosphorylation of OGT T444 via AMPK has been found to inhibit OGT-chromatin association and downregulate H2B S112 O-GlcNAc.
Nutrient sensing
The hexosamine biosynthetic pathway's product, UDP-GlcNAc, is utilized by OGT to catalyze the addition of O-GlcNAc. This pathway integrates information about the concentrations of various metabolites including amino acids, carbohydrates, fatty acids, and nucleotides. Consequently, UDP-GlcNAc levels are sensitive to cellular metabolite levels. OGT activity is in part regulated by UDP-GlcNAc concentration, making a link between cellular nutrient status and O-GlcNAc.
Glucose deprivation causes a decline in UDP-GlcNAc levels and an initial decline in O-GlcNAc, but counterintuitively, O-GlcNAc is later significantly upregulated. This later increase has been shown to be dependent on AMPK and p38 MAPK activation, and this effect is partially due to increases in OGT mRNA and protein levels. It has also been suggested that this effect is dependent on calcium and CaMKII. Activated p38 is able to recruit OGT to specific protein targets, including neurofilament H; O-GlcNAc modification of neurofilament H enhances its solubility. During glucose deprivation, glycogen synthase is modified by O-GlcNAc, which inhibits its activity.
Oxidative stress
NRF2, a transcription factor associated with the cellular response to oxidative stress, has been found to be indirectly regulated by O-GlcNAc. KEAP1, an adaptor protein for the cullin 3-dependent E3 ubiquitin ligase complex, mediates the degradation of NRF2; oxidative stress leads to conformational changes in KEAP1 that repress degradation of NRF2. O-GlcNAc modification of KEAP1 at S104 is required for efficient ubiquitination and subsequent degradation of NRF2, linking O-GlcNAc to oxidative stress. Glucose deprivation leads to a reduction in O-GlcNAc and reduces NRF2 degradation. Cells expressing a KEAP1 S104A mutant are resistant to erastin-induced ferroptosis, consistent with higher NRF2 levels upon removal of S104 O-GlcNAc.
Elevated O-GlcNAc levels have been associated with diminished synthesis of hepatic glutathione, an important cellular antioxidant. Acetaminophen overdose leads to accumulation of the strongly oxidizing metabolite NAPQI in the liver, which is detoxified by glutathione. In mice, OGT knockout has a protective effect against acetaminophen-induced liver injury, while OGA inhibition with thiamet-G exacerbates acetaminophen-induced liver injury.
Protein aggregation
O-GlcNAc has been found to slow protein aggregation, though the generality of this phenomenon is unknown.
Solid-phase peptide synthesis was used to prepare full-length α-synuclein with an O-GlcNAc modification at T72. Thioflavin T aggregation assays and transmission electron microscopy demonstrated that this modified α-synuclein does not readily form aggregates.
Treatment of JNPL3 tau transgenic mice with an OGA inhibitor was shown to increase microtubule-associated protein tau O-GlcNAcylation. Immunohistochemistry analysis of the brainstem revealed decreased formation of neurofibrillary tangles. Recombinant O-GlcNAcylated tau was shown to aggregate slower than unmodified tau in an in vitro thioflavin S aggregation assay. Similar results were obtained for a recombinantly prepared O-GlcNAcylated TAB1 construct versus its unmodified form.
Protein phosphorylation
Crosstalk
Many known phosphorylation sites and O-GlcNAcylation sites are nearby each other or overlapping. As protein O-GlcNAcylation and phosphorylation both occur on serine and threonine residues, these post-translational modifications can regulate each other. For example, in CKIIα, S347 O-GlcNAc has been shown to antagonize T344 phosphorylation. Reciprocal inhibition, i.e., phosphorylation inhibition of O-GlcNAcylation and O-GlcNAcylation of phosphorylation, has been observed on other proteins including murine estrogen receptor β, RNA Pol II, tau, p53, CaMKIV, p65, β-catenin, and α-synuclein. Positive cooperativity has also been observed between these two post-translational modifications, i.e., phosphorylation induces O-GlcNAcylation or O-GlcNAcylation induces phosphorylation. This has been demonstrated on MeCP2 and HDAC1. In other proteins, e.g., cofilin, phosphorylation and O-GlcNAcylation appear to occur independently of each other.
In some cases, therapeutic strategies are under investigation to modulate O-GlcNAcylation to have a downstream effect on phosphorylation. For instance, elevating tau O-GlcNAcylation may offer therapeutic benefit by inhibiting pathological tau hyperphosphorylation.
Besides phosphorylation, O-GlcNAc has been found to influence other post-translational modifications such as lysine acetylation and monoubiquitination.
Kinases
Protein kinases are the enzymes responsible for phosphorylation of serine and threonine residues. O-GlcNAc has been identified on over 100 kinases (~20% of the human kinome), and this modification is often associated with alterations in kinase activity or kinase substrate scope. O-GlcNAc may have diverse functional consequences on kinases such as interfering with ATP binding, altering substrate recognition, or regulating other PTMs on kinases. Complex cross-talk relations can also exist where OGT and a kinase, e.g., AMPK, modify each other.
Phosphatases
Protein phosphatase 1 subunits PP1β and PP1γ have been shown to form functional complexes with OGT. A synthetic phosphopeptide was able to be dephosphorylated and O-GlcNAcylated by an OGT immunoprecipitate. This complex has been referred to as a "yin-yang complex" as it replaces a phosphate modification with an O-GlcNAc modification. PP1γ also exists in a heterotrimer with OGT and URI under high glucose conditions.
MYPT1 is another protein phosphatase subunit that forms complexes with OGT and is itself O-GlcNAcylated. MYPT1 appears to have a role in directing OGT towards specific substrates.
Protein-protein interactions
O-GlcNAcylation of a protein can alter its interactome. As O-GlcNAc is highly hydrophilic, its presence may disrupt hydrophobic protein-protein interactions. For example, O-GlcNAc disrupts Sp1 interaction with TAFII110, and O-GlcNAc disrupts CREB interaction with TAFII130 and CRTC.
Some studies have also identified instances where protein-protein interactions are induced by O-GlcNAc. Metabolic labeling with the diazirine-containing O-GlcNDAz has been applied to identify protein-protein interactions induced by O-GlcNAc. Using a bait glycopeptide based roughly on a consensus sequence for O-GlcNAc, α-enolase, EBP1, and 14-3-3 were identified as potential O-GlcNAc readers. X-ray crystallography showed that 14-3-3 recognized O-GlcNAc through an amphipathic groove that also binds phosphorylated ligands. Hsp70 has also been proposed to act as a lectin to recognize O-GlcNAc. It has been suggested that O-GlcNAc plays a role in the interaction of α-catenin and β-catenin.
Protein stability and degradation
Co-translational O-GlcNAc has been identified on Sp1 and Nup62. This modification suppresses co-translational ubiquitination and thus protects nascent polypeptides from proteasomal degradation. Similar protective effects of O-GlcNAc on full-length Sp1 have been observed. It is unknown whether this pattern is universal or applies only to specific proteins.
Protein phosphorylation is often used as a mark for subsequent degradation. Tumor suppressor protein p53 is targeted for proteasomal degradation via COP9 signalosome-mediated phosphorylation of T155. O-GlcNAcylation of p53 S149 has been associated with decreased T155 phosphorylation and protection of p53 from degradation. β-catenin O-GlcNAcylation competes with T41 phosphorylation, which signals β-catenin for degradation, stabilizing the protein.
O-GlcNAcylation of the Rpt2 ATPase subunit of the 26S proteasome has been shown to inhibit proteasome activity. Testing various peptide sequences revealed that this modification slows proteasomal degradation of hydrophobic peptides, while degradation of hydrophilic peptides does not appear to be affected. This modification has been shown to suppress other pathways that activate the proteasome, such as Rpt6 phosphorylation by cAMP-dependent protein kinase.
OGA-S localizes to lipid droplets and has been proposed to locally activate the proteasome to promote remodeling of lipid droplet surface proteins.
Stress response
Various cellular stress stimuli have been associated with changes in O-GlcNAc. Treatment with hydrogen peroxide, cobalt(II) chloride, UVB light, ethanol, sodium chloride, heat shock, or sodium arsenite results in elevated O-GlcNAc. Knockout of OGT sensitizes cells to thermal stress. Elevated O-GlcNAc has been associated with expression of Hsp40 and Hsp70.
Therapeutic relevance
Neurodegeneration
Pathological protein aggregation is a major hallmark of multiple neurodegenerative diseases. O-GlcNAc on various proteins has been found to play roles in suppressing protein aggregation, motivating clinical efforts to inhibit OGA and elevate cellular O-GlcNAc levels. This strategy is being evaluated by companies for Alzheimer's disease, Parkinson's disease, progressive supranuclear palsy, and amyotrophic lateral sclerosis (ALS). Multiple companies have advanced OGA inhibitors into the clinic including Alectos Therapeutics, Asceneuron, Biogen, Eli Lilly, and Merck.
Alzheimer's disease
Numerous studies have identified aberrant phosphorylation of tau as a hallmark of Alzheimer's disease. O-GlcNAcylation of bovine tau was first characterized in 1996. A subsequent report in 2004 demonstrated that human brain tau is also modified by O-GlcNAc. O-GlcNAcylation of tau was demonstrated to regulate tau phosphorylation, with hyperphosphorylation of tau, which has been associated with the formation of neurofibrillary tangles, observed in the brains of mice lacking OGT. Analysis of brain samples showed that protein O-GlcNAcylation is compromised in Alzheimer's disease and that paired helical filament-tau was not recognized by traditional O-GlcNAc detection methods, suggesting that pathological tau has impaired O-GlcNAcylation relative to tau isolated from control brain samples. Elevating tau O-GlcNAcylation was proposed as a therapeutic strategy for reducing tau phosphorylation.
To test this therapeutic hypothesis, a selective and blood-brain barrier-permeable OGA inhibitor, thiamet-G, was developed. Thiamet-G treatment was able to increase tau O-GlcNAcylation and suppress tau phosphorylation in cell culture and in vivo in healthy Sprague-Dawley rats. A subsequent study showed that thiamet-G treatment also increased tau O-GlcNAcylation in a JNPL3 tau transgenic mouse model. In this model, tau phosphorylation was not significantly affected by thiamet-G treatment, though decreased numbers of neurofibrillary tangles and slower motor neuron loss were observed. Additionally, O-GlcNAcylation of tau was noted to slow tau aggregation in vitro.
OGA inhibition with MK-8719 is being investigated in clinical trials as a potential treatment strategy for Alzheimer's disease and other tauopathies including progressive supranuclear palsy.
Parkinson's disease
Parkinson's disease is associated with aggregation of α-synuclein. As O-GlcNAc modification of α-synuclein has been found to inhibit its aggregation, elevating α-synuclein O-GlcNAc is being explored as a therapeutic strategy to treat Parkinson's disease.
Cancer
Dysregulation of O-GlcNAc is associated with cancer cell proliferation and tumor growth.
O-GlcNAcylation of the glycolytic enzyme PFK1 at S529 has been found to inhibit PFK1 enzymatic activity, reducing glycolytic flux and redirecting glucose towards the pentose phosphate pathway. Structural modeling and biochemical experiments suggested that O-GlcNAc at S529 would inhibit PFK1 allosteric activation by fructose 2,6-bisphosphate and oligomerization into active forms. In a mouse model, mice injected with cells expressing a PFK1 S529A mutant showed lower tumor growth than mice injected with cells expressing wild-type PFK1. Additionally, OGT overexpression enhanced tumor growth in the latter system but had no significant effect on the system with mutant PFK1. Hypoxia induces PFK1 S529 O-GlcNAc and increases flux through the pentose phosphate pathway to generate more NADPH, which maintains glutathione levels and detoxifies reactive oxygen species, imparting a growth advantage to cancer cells. PFK1 was found to be glycosylated in human breast and lung tumor tissues.
OGT has also been reported to positively regulate HIF-1α. HIF-1α is normally degraded under normoxic conditions by prolyl hydroxylases that utilize α-ketoglutarate as a co-substrate. OGT suppresses α-ketoglutarate levels, protecting HIF-1α from proteasomal degradation by pVHL and promoting aerobic glycolysis. In contrast with the previous study on PFK1, this study found that elevating OGT or O-GlcNAc upregulated PFK1, though the two studies are consistent in finding that O-GlcNAc levels are positively associated with flux through the pentose phosphate pathway. This study also found that decreasing O-GlcNAc selectively killed cancer cells via ER stress-induced apoptosis.
Human pancreatic ductal adenocarcinoma (PDAC) cell lines have higher O-GlcNAc levels than human pancreatic duct epithelial (HPDE) cells. PDAC cells have some dependency upon O-GlcNAc for survival as OGT knockdown selectively inhibited PDAC cell proliferation (OGT knockdown did not significantly affect HPDE cell proliferation), and inhibition of OGT with 5S-GlcNAc showed the same result. Hyper-O-GlcNAcylation in PDAC cells appeared to be anti-apoptotic, inhibiting cleavage and activation of caspase-3 and caspase-9. Numerous sites on the p65 subunit of NF-κB were found to be modified by O-GlcNAc in a dynamic manner; O-GlcNAc at p65 T305 and S319 in turn positively regulate other modifications associated with NF-κB activation such as p300-mediated K310 acetylation and IKK-mediated S536 phosphorylation. These results suggested that NF-κB is constitutively activated by O-GlcNAc in pancreatic cancer.
OGT stabilization of EZH2 in various breast cancer cell lines has been found to inhibit expression of tumor suppressor genes. In hepatocellular carcinoma models, O-GlcNAc is associated with activating phosphorylation of HDAC1, which in turn regulates expression of the cell cycle regulator p21Waf1/Cip1 and cell motility regulator E-cadherin.
OGT has been found to stabilize SREBP-1 and activate lipogenesis in breast cancer cell lines. This stabilization was dependent on the proteasome and AMPK. OGT knockdown resulted in decreased nuclear SREBP-1, but proteasomal inhibition with MG132 blocked this effect. OGT knockdown also increased the interaction between SREBP-1 and the E3 ubiquitin ligase FBW7. AMPK is activated by T172 phosphorylation upon OGT knockdown, and AMPK phosphorylates SREBP-1 S372 to inhibit its cleavage and maturation. OGT knockdown had a diminished effect on SREBP-1 levels in AMPK-null cell lines. In a mouse model, OGT knockdown inhibited tumor growth, but SREBP-1 overexpression partly rescued this effect. These results contrast with those of a previous study, which found that OGT knockdown/inhibition inhibited AMPK T172 phosphorylation and increased lipogenesis.
In breast and prostate cancer cell lines, high levels of OGT and O-GlcNAc have been linked, both in vitro and in vivo, to processes associated with disease progression, e.g., angiogenesis, invasion, and metastasis. OGT knockdown or inhibition was found to downregulate the transcription factor FoxM1 and upregulate the cell-cycle inhibitor p27Kip1 (which is regulated by FoxM1-dependent expression of the E3 ubiquitin ligase component Skp2), causing G1 cell cycle arrest. This appeared to be dependent on proteasomal degradation of FoxM1, as expression of a FoxM1 mutant lacking a degron rescued the effects of OGT knockdown. FoxM1 was found not to be directly modified by O-GlcNAc, suggesting that hyper-O-GlcNAcylation of FoxM1 regulators impairs FoxM1 degradation. Targeting OGT also lowered levels of FoxM1-regulated proteins associated with cancer invasion and metastasis (MMP-2 and MMP-9) and angiogenesis (VEGF). O-GlcNAc modification of cofilin S108 has also been reported to be important for breast cancer cell invasion by regulating cofilin subcellular localization in invadopodia.
Diabetes
Dysregulation of O-GlcNAc has been associated with diabetes and associated diabetic complications. In general, elevated O-GlcNAc is associated with an insulin resistance phenotype.
Pancreatic β cells synthesize and secrete insulin to regulate blood glucose levels. One study found that inhibition of OGA with streptozotocin followed by glucosamine treatment resulted in O-GlcNAc accumulation and apoptosis in β cells; a subsequent study showed that a galactose-based analogue of streptozotocin was unable to inhibit OGA but still resulted in apoptosis, suggesting that the apoptotic effects of streptozotocin are not directly due to OGA inhibition.
O-GlcNAc has been suggested to attenuate insulin signaling. In 3T3-L1 adipocytes, OGA inhibition with PUGNAc inhibited insulin-mediated glucose uptake. PUGNAc treatment also inhibited insulin-stimulated Akt T308 phosphorylation and downstream GSK3β S9 phosphorylation. In a later study, insulin stimulation of COS-7 cells caused OGT to localize to the plasma membrane. Inhibition of PI3K with wortmannin reversed this effect, suggesting dependence on phosphatidylinositol (3,4,5)-trisphosphate. Increasing O-GlcNAc levels by subjecting cells to high glucose conditions or PUGNAc treatment inhibited insulin-stimulated phosphorylation of Akt T308 and Akt activity. IRS1 phosphorylation at S307 and S632/S635, which is associated with attenuated insulin signaling, was enhanced. Subsequent experiments in mice with adenoviral delivery of OGT showed that OGT overexpression negatively regulated insulin signaling in vivo. Many components of the insulin signaling pathway, including β-catenin, IR-β, IRS1, Akt, PDK1, and the p110α subunit of PI3K, were found to be directly modified by O-GlcNAc. Insulin signaling has also been reported to lead to OGT tyrosine phosphorylation and OGT activation, resulting in increased O-GlcNAc levels.
As PUGNAc also inhibits lysosomal β-hexosaminidases, the OGA-selective inhibitor NButGT was developed to further probe the relationship between O-GlcNAc and insulin signaling in 3T3-L1 adipocytes. This study also found that PUGNAc resulted in impaired insulin signaling, but NButGT did not, as measured by changes in phosphorylation of Akt T308, suggesting that the effects observed with PUGNAc may be due to off-target effects besides OGA inhibition.
Infectious disease
Bacterial
Treatment of macrophages with lipopolysaccharide (LPS), a major component of the Gram-negative bacteria outer membrane, results in elevated O-GlcNAc in cellular and mouse models. During infection, cytosolic OGT was de-S-nitrosylated and activated. Suppressing O-GlcNAc with DON inhibited the O-GlcNAcylation and nuclear translocation of NF-κB, as well as downstream induction of inducible nitric oxide synthase and IL-1β production. DON treatment also improved cell survival during LPS treatment.
Viral
O-GlcNAc has been implicated in influenza A virus (IAV)-induced cytokine storm. Specifically, O-GlcNAcylation of S430 on interferon regulatory factor-5 (IRF5) has been shown to promote its interaction with TNF receptor-associated factor 6 (TRAF6) in cellular and mouse models. TRAF6 mediates K63-linked ubiquitination of IRF5 which is necessary for IRF5 activity and subsequent cytokine production. Analysis of clinical samples showed that blood glucose levels were elevated in IAV-infected patients compared to healthy individuals. In IAV-infected patients, blood glucose levels positively correlated with IL-6 and IL-8 levels. O-GlcNAcylation of IRF5 was also relatively higher in peripheral blood mononuclear cells of IAV-infected patients.
Other applications
Peptide therapeutics are attractive for their high specificity and potency, but they often have poor pharmacokinetic profiles due to their degradation by serum proteases. Though O-GlcNAc is generally associated with intracellular proteins, it has been found that engineered peptide therapeutics modified by O-GlcNAc have enhanced serum stability in a mouse model and have similar structure and activity compared to the respective unmodified peptides. This method has been applied to engineer GLP-1 and PTH peptides.
See also
O-GlcNAc transferase (OGT)
O-GlcNAcase (OGA)
O-linked glycosylation
References
Further reading
Zachara, Natasha; Akimoto, Yoshihiro; Hart, Gerald W. (2015), Varki, Ajit; Cummings, Richard D.; Esko, Jeffrey D.; Stanley, Pamela (eds.), "The O-GlcNAc Modification", Essentials of Glycobiology (3rd ed.), Cold Spring Harbor Laboratory Press, PMID 28876858.
External links
Post-translational modification
Carbohydrates
Biochemistry
Cell signaling
Cell biology
Signal transduction | O-GlcNAc | [
"Chemistry",
"Biology"
] | 9,957 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Cell biology",
"Gene expression",
"Biochemical reactions",
"Signal transduction",
"Organic compounds",
"Post-translational modification",
"Carbohydrate chemistry",
"nan",
"Biochemistry",
"Neurochemistry"
] |
44,459,119 | https://en.wikipedia.org/wiki/Detekt | Detekt is a discontinued free tool by Amnesty International, Digitale Gesellschaft, EFF, and Privacy International to scan for surveillance software on Microsoft Windows.
It was intended for use by activists and journalists to scan for known spyware.
The tool
Detekt was available for free download.
The tool did not guarantee detection of all spyware, nor was it meant to give a false sense of security, and was meant to be used with other methods to combat malware and spyware.
In 2014, the Coalition Against Unlawful Surveillance Exports estimated that the global trade in surveillance technologies was worth more than 3 billion GBP annually.
Detekt was available in Amharic, Arabic, English, German, Italian, and Spanish.
Technical
The tool required no installation, and was designed to scan for surveillance software on Windows PCs, from XP to Windows 8.1.
The tool scanned for surveillance software known at the time and, after scanning, displayed a summary indicating whether any spyware had been found. It also generated a log file containing the details.
The tool did not guarantee absolute protection from surveillance software, as it scanned for spyware known at the time of release, which could be modified to circumvent detection, and new spyware could appear later. Therefore, a clean bill of health did not necessarily mean that the PC was free of surveillance software.
The website instructed the user to disconnect from the internet and close all applications before running the tool, and not to reconnect if any spyware was found.
Detekt was released under the GPLv3 free license.
Detekt was developed by Claudio Guarnieri with the help of Bill Marczak, Morgan Marquis-Boire, Eva Galperin, Tanya O'Carroll, Andre Meister, Jillian York, Michael Ligh, Endalkachew Chala.
It was provided with patterns for the following malware: DarkComet RAT, XtremeRAT, BlackShades RAT, njRAT, FinFisher FinSpy, HackingTeam RCS, ShadowTech RAT, Gh0st RAT.
See also
Computer and network surveillance
Computer surveillance in the workplace
Internet censorship
Internet privacy
Freedom of information
Tor (anonymity network)
2013 mass surveillance disclosures
References
External links
Computer forensics
Computer surveillance
Internet security | Detekt | [
"Engineering"
] | 473 | [
"Cybersecurity engineering",
"Computer forensics"
] |
44,460,166 | https://en.wikipedia.org/wiki/Staggered%20tuning | Staggered tuning is a technique used in the design of multi-stage tuned amplifiers whereby each stage is tuned to a slightly different frequency. In comparison to synchronous tuning (where each stage is tuned identically) it produces a wider bandwidth at the expense of reduced gain. It also produces a sharper transition from the passband to the stopband. Both staggered tuning and synchronous tuning circuits are easier to tune and manufacture than many other filter types.
The function of stagger-tuned circuits can be expressed as a rational function and hence they can be designed to any of the major filter responses such as Butterworth and Chebyshev. The poles of the circuit are easy to manipulate to achieve the desired response because of the amplifier buffering between stages.
Applications include television IF amplifiers (mostly 20th century receivers) and wireless LAN.
Rationale
Staggered tuning improves the bandwidth of a multi-stage tuned amplifier at the expense of the overall gain. Staggered tuning also increases the steepness of passband skirts and hence improves selectivity.
The value of staggered tuning is best explained by first looking at the shortcomings of tuning every stage identically. This method is called synchronous tuning. Each stage of the amplifier will reduce the bandwidth. In an amplifier with multiple identical stages, the 3 dB points of the response after the first stage become the 6 dB points after the second stage. Each successive stage adds a further 3 dB of attenuation at what was the band edge of the first stage. Thus the bandwidth becomes progressively narrower with each additional stage.
As an example, a four-stage amplifier will have its 3 dB points at the 0.75 dB points of an individual stage. The fractional bandwidth of an LC circuit is given by,
B = Δω/ω0 = √(m − 1)/Q
where m is the power ratio of the power at resonance to that at the band edge frequency (equal to 2 for the 3 dB point and 1.19 for the 0.75 dB point) and Q is the quality factor.
The bandwidth is thus reduced by a factor of √(m − 1). In terms of the number of stages n, this factor is √(2^(1/n) − 1). Thus, the four-stage synchronously tuned amplifier will have a bandwidth of only about 44% of a single stage. Even in a two-stage amplifier the bandwidth is reduced to about 64% of the original. Staggered tuning allows the bandwidth to be widened at the expense of overall gain. The overall gain is reduced because when any one stage is at resonance (and thus maximum gain) the others are not, unlike synchronous tuning where all stages are at maximum gain at the same frequency. A two-stage stagger-tuned amplifier will have a gain 3 dB less than a synchronously tuned amplifier.
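The shrinkage factor is easy to tabulate; a minimal numerical sketch (the stage counts are arbitrary):

```python
# Bandwidth of n identical synchronously tuned stages relative to one stage:
# each stage may only be 2**(1/n) down in power at the overall band edge.
def bandwidth_shrinkage(n: int) -> float:
    return (2 ** (1 / n) - 1) ** 0.5

for n in (1, 2, 3, 4):
    print(n, round(bandwidth_shrinkage(n), 3))
# -> 1 1.0, 2 0.644, 3 0.51, 4 0.435
```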
Even in a design that is intended to be synchronously tuned, some staggered tuning effect is inevitable because of the practical impossibility of keeping all tuned circuits perfectly in step and because of feedback effects. This can be a problem in very narrow band applications where essentially only one spot frequency is of interest, such as a local oscillator feed or a wave trap. The overall gain of a synchronously tuned amplifier will always be less than the theoretical maximum because of this.
Both synchronously tuned and stagger-tuned schemes have a number of advantages over schemes that place all the tuning components in a single aggregated filter circuit separate from the amplifier such as ladder networks or coupled resonators. One advantage is that they are easy to tune. Each resonator is buffered from the others by the amplifier stages so have little effect on each other. The resonators in aggregated circuits, on the other hand, will all interact with each other, particularly their nearest neighbours. Another advantage is that the components need not be close to ideal. Every LC resonator is directly working into a resistor which lowers the Q anyway so any losses in the L and C components can be absorbed into this resistor in the design. Aggregated designs usually require high Q resonators. Also, stagger-tuned circuits have resonator components with values that are quite close to each other and in synchronously tuned circuits they can be identical. The spread of component values is thus less in stagger-tuned circuits than in aggregated circuits.
Design
Tuned amplifiers such as the one described above can be depicted more generically as a chain of transconductance amplifiers, each loaded with a tuned circuit.
where for each stage (omitting the suffixes)
gm is the amplifier transconductance
C is the tuned circuit capacitance
L is the tuned circuit inductance
G is the sum of the amplifier output conductance and the input conductance of the next amplifier.
Stage gain
The gain A(s) of one stage of this amplifier is given by
A(s) = −(gm/C)s / (s² + (G/C)s + 1/LC)
where s is the complex frequency operator.
This can be written in a more generic form, that is, not assuming that the resonators are the LC type, with the following substitutions,
ω0 = 1/√(LC) (the resonant frequency)
A0 = gm/G (the gain at resonance)
Q0 = ω0C/G (the stage quality factor)
Resulting in,
A(s) = −A0(ω0/Q0)s / (s² + (ω0/Q0)s + ω0²)
Stage bandwidth
The gain expression can be given as a function of (angular) frequency by making the substitution s = iω, where i is the imaginary unit and ω is the angular frequency.
The frequencies at the band edges, ωc, can be found from this expression by equating the magnitude of the gain at the band edge to the resonant gain divided by √m,
|A(iωc)| = A0/√m
where m is defined as above and equal to two if the 3 dB points are desired.
Solving this for ωc and taking the difference between the two positive solutions finds the bandwidth Δω,
Δω = ω0√(m − 1)/Q0
and the fractional bandwidth B,
B = Δω/ω0 = √(m − 1)/Q0
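The band-edge algebra can be checked symbolically; a minimal sketch (assuming the sympy library), verifying that the two positive roots satisfy the band-edge condition and are separated by the stated fractional bandwidth:

```python
import sympy as sp

w0, Q, m = sp.symbols('omega_0 Q_0 m', positive=True)
d = sp.sqrt(m - 1) / Q            # band-edge detuning, sqrt(m - 1)/Q0
S = sp.sqrt(d**2 + 4)
w_hi = w0 * (d + S) / 2           # positive root of w/w0 - w0/w = +d
w_lo = w0 * (S - d) / 2           # positive root of w/w0 - w0/w = -d
# Both edges satisfy the band-edge condition 1 + Q0**2*(w/w0 - w0/w)**2 = m:
assert sp.simplify(1 + Q**2 * (w_hi/w0 - w0/w_hi)**2 - m) == 0
print(sp.simplify((w_hi - w_lo) / w0))   # -> sqrt(m - 1)/Q_0, the fractional bandwidth
```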
Overall response
The overall response of the amplifier is given by the product of the individual stages,
AT(s) = A1(s) A2(s) ··· An(s)
It is desirable to be able to design the filter from a standard low-pass prototype filter of the required specification. Frequently, a smooth Butterworth response will be chosen but other polynomial functions can be used that allow ripple in the response. A popular choice for a polynomial with ripple is the Chebyshev response for its steep skirt. For the purpose of transformation, the stage gain expression can be rewritten in the more suggestive form,
A(s) = A0 / (1 + Q0(s/ω0 + ω0/s))
This can be transformed into a low-pass prototype filter with the transform
s'/ω'c = Q0(s/ω0 + ω0/s)
where s' is the complex frequency variable of the prototype and ω'c is the cutoff frequency of the low-pass prototype.
This can be done straightforwardly for the complete filter in the case of synchronously tuned amplifiers where every stage has the same ω0 but for a stagger-tuned amplifier there is no simple analytical solution to the transform. Stagger-tuned designs can be approached instead by calculating the poles of a low-pass prototype of the desired form (e.g. Butterworth) and then transforming those poles to a band-pass response. The poles so calculated can then be used to define the tuned circuits of the individual stages.
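A minimal sketch of that procedure, assuming a Butterworth prototype (the centre frequency, overall Q, and stage count are arbitrary example values; the pole mapping used is the one derived in the Poles section below):

```python
import numpy as np

def stagger_tuned_stages(f0: float, q_eff: float, n: int):
    """Map an n-th order Butterworth low-pass prototype onto n stagger-tuned
    band-pass stages with centre frequency f0 (Hz) and overall effective Q.
    Returns one (stage centre frequency in Hz, stage Q) pair per stage."""
    w0B = 2 * np.pi * f0
    wc = w0B / q_eff                 # prototype cutoff set to overall bandwidth
    stages = []
    for k in range(n):
        # k-th left-half-plane Butterworth pole, scaled to cutoff wc:
        qk = wc * np.exp(1j * np.pi * (2 * k + n + 1) / (2 * n))
        # Band-pass poles solve s**2 - qk*s + w0B**2 = 0 (the simplified
        # low-pass to band-pass mapping); keep the upper-half-plane root,
        # its complex conjugate being implied.
        pk = [r for r in np.roots([1, -qk, w0B**2]) if r.imag > 0][0]
        stages.append((abs(pk) / (2 * np.pi), abs(pk) / (-2 * pk.real)))
    return stages

# Example with arbitrary values: 10.7 MHz centre, overall Q of 50, 3 stages.
for f, q in stagger_tuned_stages(10.7e6, 50, 3):
    print(f"stage: f0 = {f / 1e6:.4f} MHz, Q = {q:.1f}")
```

Each resulting (frequency, Q) pair then fixes the L, C, and loading of one stage through the relations given under Stage gain.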
Poles
The stage gain can be rewritten in terms of the poles by factorising the denominator;
A(s) = as / ((s − p)(s − p*))
where p, p* are a complex conjugate pair of poles,
and the overall response is,
AT(s) = ∏k aks / ((s − pk)(s − pk*))
where the ak = A0kω0k/Q0k.
From the band-pass to low-pass transform given above, an expression can be found for the poles in terms of the poles of the low-pass prototype, qk,
pk = qkω0B/(2Qeffω'c) ± √((qkω0B/(2Qeffω'c))² − ω0B²)
where ω0B is the desired band-pass centre frequency and Qeff is the effective Q of the overall circuit.
Each pole in the prototype transforms to a complex conjugate pair of poles in the band-pass and corresponds to one stage of the amplifier. This expression is greatly simplified if the cutoff frequency of the prototype, ω'c, is set to the final filter bandwidth ω0B/Qeff, giving
pk = qk/2 ± √((qk/2)² − ω0B²)
In the case of a narrowband design, where ω0B is much larger than |qk|, a further simplification can be made with the approximation,
pk ≈ qk/2 + iω0B
These poles can be inserted into the stage gain expression in terms of poles. By comparing with the stage gain expression in terms of component values, those component values can then be calculated.
Applications
Staggered tuning is of most benefit in wideband applications. It was formerly commonly used in television receiver IF amplifiers. However, SAW filters are more likely to be used in that role nowadays. Staggered tuning has advantages in VLSI for radio applications such as wireless LAN. The low spread of component values makes it much easier to implement in integrated circuits than traditional ladder networks.
See also
Double-tuned amplifier
References
Bibliography
Chattopadhyay, D., Electronics: Fundamentals and Applications, New Age International, 2006 .
Gulati, R. R., Modern Television Practice Principles, Technology and Servicing, New Age International, 2002 .
Iniewski, Krzysztof, CMOS Nanoelectronics: Analog and RF VLSI Circuits, McGraw Hill Professional, 2011 .
Maheswari, L. K.; Anand, M. M. S., Analog Electronics, PHI Learning, 2009 .
Moxon, L. A., Recent Advances in Radio Receivers, Cambridge University Press, 1949 .
Pederson, Donald O.; Mayaram, Kartikeya, Analog Integrated Circuits for Communication, Springer, 2007 .
Sedha, R. S., A Textbook of Electronic Circuits, S. Chand, 2008 .
Wiser, Robert, Tunable Bandpass RF Filters for CMOS Wireless Transmitters'', ProQuest, 2008 .
Electronic amplifiers
Signal processing filter | Staggered tuning | [
"Chemistry",
"Technology"
] | 1,888 | [
"Amplifiers",
"Electronic amplifiers",
"Filters",
"Signal processing filter"
] |
74,509,192 | https://en.wikipedia.org/wiki/Chirality-induced%20spin%20selectivity | Chirality-induced spin selectivity (CISS) refers to multiple phenomena where handedness of a chiral chemical compound influences the spin of transmitted or emitted electrons. This effect was discovered by Prof. Ron Naaman and co-workers.
Experiments have demonstrated the effect in the form of polarization of electrons scattered from chiral molecules, spin-dependent transmission probabilities through layers of chiral molecules, spin selectivity of electron transport in a chiral medium, and enantioselectivity in chemical reactions induced by spin-polarized electrons.
Theoretical models can qualitatively explain the effect using spin-orbit coupling (SOC), but the predicted magnitude has consistently been orders of magnitude smaller than what is measured in experiments. The mechanism underlying CISS is not completely understood.
References
Stereochemistry | Chirality-induced spin selectivity | [
"Physics",
"Chemistry"
] | 171 | [
" and optical physics stubs",
"Stereochemistry",
"Space",
" molecular",
"nan",
"Atomic",
"Spacetime",
"Physical chemistry stubs",
" and optical physics"
] |
74,514,095 | https://en.wikipedia.org/wiki/Robert%20Ramage%20%28chemist%29 | Robert 'Bob' Ramage FRS (4 October 1935 — 16 October 2019) was an organic chemist, born in Glasgow, who specialised in the synthesis and biosynthesis of natural products, peptides, and proteins.
Following his undergraduate degree in chemistry at the University of Glasgow, he stayed on for a PhD in organic chemistry. After his time at Glasgow, he followed his interest in natural products synthesis to Harvard and then Basel, before taking up a lectureship in organic chemistry at the University of Liverpool, where his attention was drawn to peptides.
His peptide synthesis research continued at the University of Manchester Institute of Science and Technology (UMIST), where he also served as head of department. He returned to Scotland in 1984, taking up the Forbes chair of organic chemistry at the University of Edinburgh, where he remained until retirement in 2000.
Outside of academia, in 1994 he founded the company Albachem, which utilised his work with peptides.
He was elected Fellow of the Royal Society of Chemistry (1977), Royal Society of Edinburgh (1986), and the Royal Society (1992).
References
1935 births
2019 deaths
Scottish chemists
Organic chemists
Scientists from Glasgow
Alumni of the University of Glasgow | Robert Ramage (chemist) | [
"Chemistry"
] | 243 | [
"Organic chemists"
] |
74,514,146 | https://en.wikipedia.org/wiki/UP%20Diliman%20Department%20of%20Chemical%20Engineering | The Department of Chemical Engineering (DChE) is an academic department operating under the College of Engineering of the University of the Philippines Diliman.
The department was established in 1956 and has an overall 90% passing rate in the licensure examinations held in the Philippines. It also contributes about 10% to 60% of the total number of new chemical engineers in the Philippines every year.
Course offerings
The department offers undergraduate and graduate programs leading to the degree of chemical engineering:
Bachelor of Science in Chemical Engineering (BS ChE) — five-year program covering transport processes, chemical engineering thermodynamics, and their applications to unit operations design and reaction kinetics.
Master of Science in Chemical Engineering (MS ChE) — 24-unit coursework that includes core and elective courses related to chemical engineering and six units of master's thesis.
Doctor of Philosophy in Chemical Engineering (PhD ChE)
Research laboratories
The department consists of thirteen (13) research laboratories in different fields of chemical engineering and allied fields, and also hosts the Chemical Engineering Analytical Laboratory (CEAL), which offers analytical services to the university and industry.
CEAL houses a scanning electron microscope (SEM), a Fourier-transform infrared (FTIR) spectrometer, a universal testing machine (UTM), gas chromatographs (FID, TCD, MS), ion chromatographs, and a high-performance liquid chromatograph (HPLC). The department also has a Kjeldahl apparatus, a Karl Fischer apparatus, and an atomic absorption spectrophotometer (AAS), as well as a real-time PCR system, digital gradient electrophoresis equipment, and shaking and refrigerated incubators for biological studies.
The thirteen (13) research laboratories are the following:
Advanced Materials and Organic Synthesis Laboratory
Bioprocess Engineering Laboratory
Catalysis Research Laboratory
Chemical Engineering Intelligence Learning Laboratory
Environmental Process Engineering Laboratory
Fuels, Energy and Thermal Systems Laboratory
Green Materials Laboratory
Inorganic Synthesis Laboratory
Laboratory of Electrochemical Engineering
Molecular Modelling Laboratory
Nanotechnology Research Laboratory
Process Systems Engineering Laboratory
Sustainable Production & Responsible Consumption Laboratory
References
External links
Official website
Facebook page
UP Diliman College of Engineering
Chemical engineering organizations | UP Diliman Department of Chemical Engineering | [
"Chemistry",
"Engineering"
] | 455 | [
"Chemical engineering",
"Chemical engineering organizations"
] |
60,885,357 | https://en.wikipedia.org/wiki/Mutation%20accumulation%20theory | The mutation accumulation theory of aging was first proposed by Peter Medawar in 1952 as an evolutionary explanation for biological aging and the associated decline in fitness that accompanies it. Medawar used the term 'senescence' to refer to this process. The theory explains that, in the case where harmful mutations are only expressed later in life, when reproduction has ceased and future survival is increasingly unlikely, then these mutations are likely to be unknowingly passed on to future generations. In this situation the force of natural selection will be weak, and so insufficient to consistently eliminate these mutations. Medawar posited that over time these mutations would accumulate due to genetic drift and lead to the evolution of what is now referred to as aging.
Background and history
Despite Charles Darwin's completion of his theory of biological evolution in the 19th century, the modern logical framework for evolutionary theories of aging would not emerge until almost a century later. Though August Weismann did propose his theory of programmed death, it was met with criticism and never gained mainstream attention. It was not until 1930 that Ronald Fisher first noted the conceptual insight that prompted the development of modern aging theories. This concept, namely that the force of natural selection on an individual decreases with age, was analysed further by J. B. S. Haldane, who suggested it as an explanation for the relatively high prevalence of Huntington's disease despite the autosomal dominant nature of the mutation. Specifically, as Huntington's only presents after the age of 30, the force of natural selection against it would have been relatively low in pre-modern societies. It was based on the ideas of Fisher and Haldane that Peter Medawar was able to work out the first complete model explaining why aging occurs, which he presented in a lecture in 1951 and then published in 1952.
Mechanism of action
Amongst almost all populations, the likelihood that an individual will reproduce is related directly to their age. Starting at 0 at birth, the probability increases to its maximum in young adulthood once sexual maturity has been reached, before gradually decreasing with age. This decrease is caused by the increasing likelihood of death due to external pressures such as predation or illness, as well as the internal pressures inherent to organisms that experience senescence. In such cases deleterious mutations which are expressed early on are strongly selected against due to their major impact on the number of offspring produced by that individual. Mutations that present later in life, by contrast, are relatively unaffected by selective pressure, as their carriers have already passed on their genes, assuming they survive long enough for the mutation to be expressed at all. The result, as predicted by Medawar, is that deleterious late-life mutations will accumulate and result in the evolution of aging as it is known colloquially. Medawar portrayed this concept graphically through a "selection shadow", a shadowed span of time during which selective pressure has no effect. Mutations that are expressed within this selection shadow will remain as long as reproductive probability within that age range remains low.
Evidence supporting the mutation accumulation theory
Predation and Delayed Senescence
In populations where extrinsic mortality is low, the drop in reproductive probability after maturity is less severe than in other cases. The mutation accumulation theory therefore predicts that such populations would evolve delayed senescence. One such example of this scenario can be seen when comparing birds to organisms of equivalent size. It has been suggested that their ability to fly, and therefore lower relative risk of predation, is the cause of their longer than expected life span. The implication that flight, and therefore lower predation, increases lifespan is further borne out by the fact that bats live on average 3 times longer than similarly sized mammals with comparable metabolic rates. Providing further evidence, insect populations are known to experience very high rates of extrinsic mortality, and as such would be expected to experience rapid senescence and short life spans. The exception to this rule, however, is found in the longevity of eusocial insect queens. As expected when applying the mutation accumulation theory, established queens are at almost no risk of predation or other forms of extrinsic mortality, and consequently age far more slowly than others of their species.
Age-specific reproductive success of Drosophila melanogaster
In the interest of finding specific evidence for the mutation accumulation theory, separate from that which also supports the similar antagonistic pleiotropy hypothesis, an experiment was conducted involving the breeding of successive generations of Drosophila melanogaster. Genetic models predict that, in the case of mutation accumulation, elements of fitness, such as reproductive success and survival, will show age-related increases in dominance, homozygous genetic variance and additive variance. Inbreeding depression will also increase with age. This is because these variables are proportional to the equilibrium frequencies of deleterious alleles, which are expected to increase with age under mutation accumulation but not under the antagonistic pleiotropy hypothesis. This was tested experimentally by measuring age-specific reproductive success in 100 different genotypes of Drosophila melanogaster, with findings ultimately supporting the mutation accumulation theory of aging.
Criticisms of the mutation accumulation theory
Under most assumptions, the mutation accumulation theory predicts that mortality rates will reach close to 100% shortly after reaching post-reproductive age. Experimental populations of Drosophila melanogaster, and other organisms, however, exhibit age-specific mortality rates that plateau well before reaching 100%, making mutation accumulation alone an insufficient explanation. It is suggested instead that mutation accumulation is only one factor among many, which together form the cause of aging. In particular, the mutation accumulation theory, the antagonistic pleiotropy hypothesis and the disposable soma theory of aging are all believed to contribute in some way to senescence.
References
Senescence
Evolutionary biology
Genetics | Mutation accumulation theory | [
"Chemistry",
"Biology"
] | 1,192 | [
"Evolutionary biology",
"Genetics",
"Senescence",
"Cellular processes",
"Metabolism"
] |
60,891,797 | https://en.wikipedia.org/wiki/NGC%203928 | NGC 3928, also known as the Miniature Spiral, is a lenticular galaxy, sometimes classified as a dwarf spiral galaxy, in the constellation Ursa Major. It was discovered by William Herschel on March 9, 1788.
References
External links
Ursa Major
Lenticular galaxies
3928
037136
Markarian galaxies | NGC 3928 | [
"Astronomy"
] | 66 | [
"Ursa Major",
"Constellations"
] |
63,018,939 | https://en.wikipedia.org/wiki/Quasicrystals%20and%20Geometry | Quasicrystals and Geometry is a book on quasicrystals and aperiodic tiling by Marjorie Senechal, published in 1995 by Cambridge University Press ().
One of the main themes of the book is to understand how the mathematical properties of aperiodic tilings such as the Penrose tiling, and in particular the existence of arbitrarily large patches of five-way rotational symmetry throughout these tilings, correspond to the properties of quasicrystals including the five-way symmetry of their Bragg peaks. Neither kind of symmetry is possible for a traditional periodic tiling or periodic crystal structure, and the interplay between these topics led from the 1960s into the 1990s to new developments and new fundamental definitions in both mathematics and crystallography.
Topics
The book is divided into two parts. The first part covers the history of crystallography, the use of X-ray diffraction to study crystal structures through the Bragg peaks formed on their diffraction patterns, and the discovery in the early 1980s of quasicrystals, materials that form Bragg peaks in patterns with five-way symmetry, impossible for a repeating crystal structure. It models the arrangement of atoms in a substance by a Delone set, a set of points in the plane or in Euclidean space that are neither too closely spaced nor too far apart, and it discusses the mathematical and computational issues in X-ray diffraction and the construction of the diffraction spectrum from a Delone set.
Finally, it discusses a method for constructing Delone sets that have Bragg peaks by projecting bounded subsets of higher-dimensional lattices into lower-dimensional spaces.
This material also has strong connections to spectral theory and ergodic theory, deep topics in pure mathematics, but these were omitted in order to make the book accessible to non-specialists in those topics.
Another method for the construction of Delone sets that have Bragg peaks is to choose as points the vertices of certain aperiodic tilings such as the Penrose tiling. (There also exist other aperiodic tilings, such as the pinwheel tiling, for which the existence of discrete peaks in the diffraction pattern is less clear.) The second part of the book discusses methods for generating these tilings, including projections of higher-dimensional lattices as well as recursive constructions with hierarchical structure, and it discusses the long-range patterns that can be shown to exist in tilings constructed in these ways.
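The projection method can be illustrated concretely in one dimension; the following minimal sketch (not taken from the book; the lattice range and acceptance window are arbitrary choices) projects a strip of the square lattice onto a line of slope 1/φ and recovers the two tile lengths of the Fibonacci quasicrystal, whose ratio is the golden ratio:

```python
import numpy as np

# Cut-and-project sketch: take the points of the square lattice Z^2 lying in
# a strip around a line of irrational slope and project them onto that line.
phi = (1 + np.sqrt(5)) / 2
theta = np.arctan(1 / phi)
e_par = np.array([np.cos(theta), np.sin(theta)])     # "physical" direction
e_perp = np.array([-np.sin(theta), np.cos(theta)])   # "internal" direction

window = abs(e_perp[0]) + abs(e_perp[1])   # shadow of a unit square on e_perp
points = []
for i in range(-40, 41):
    for j in range(-40, 41):
        v = np.array([i, j])
        if abs(v @ e_perp) <= window / 2:  # inside the acceptance strip
            points.append(v @ e_par)

gaps = np.diff(np.sort(points))
print(sorted(set(np.round(gaps, 6))))  # two tile lengths, ratio ~ the golden ratio
```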
Included in the book are software for generating diffraction patterns and Penrose tilings, and a "pictorial atlas" of the diffraction patterns of known aperiodic tilings.
Audience
Although the discovery of quasicrystals immediately set off a rush for applications in materials capable of withstanding high temperature, providing non-stick surfaces, or having other useful material properties, this book is more abstract and mathematical, and concerns mathematical models of quasicrystals rather than physical materials. Nevertheless, chemist István Hargittai writes that it can be read with interest by "students and researchers in mathematics, physics, materials science, and crystallography".
References
External links
Quasicrystals and Geometry on the Internet Archive
Aperiodic tilings
Mathematics books
1995 non-fiction books
Quasicrystals | Quasicrystals and Geometry | [
"Physics",
"Chemistry",
"Materials_science"
] | 663 | [
"Tessellation",
"Crystallography",
"Aperiodic tilings",
"Quasicrystals",
"Symmetry"
] |
63,019,363 | https://en.wikipedia.org/wiki/Ziresovir | Ziresovir (RO-0529, AK0529) is an antiviral drug which was developed as a treatment for respiratory syncytial virus. It acts as a fusion inhibitor, and has shown good results in Phase II and III clinical trials.
See also
Palivizumab
Presatovir
Lumicitabine
References
Anti–RNA virus drugs
Antiviral drugs
Sulfones
Oxetanes
Benzothiazepines
Quinazolines
Amines | Ziresovir | [
"Chemistry",
"Biology"
] | 99 | [
"Antiviral drugs",
"Functional groups",
"Sulfones",
"Amines",
"Biocides",
"Bases (chemistry)"
] |
63,019,482 | https://en.wikipedia.org/wiki/Absorption%20rate%20constant | The absorption rate constant Ka is a value used in pharmacokinetics to describe the rate at which a drug enters into the system. It is expressed in units of time−1. The Ka is related to the absorption half-life (t1/2a) per the following equation: Ka = ln(2) / t1/2a.
Ka values can typically only be found in research articles. This is in contrast to parameters like bioavailability and elimination half-life, which can often be found in drug and pharmacology handbooks.
References
Pharmacokinetic metrics | Absorption rate constant | [
"Chemistry"
] | 125 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
63,027,196 | https://en.wikipedia.org/wiki/Binakael | Binakael (binakel, binakol, binakul) (transliterated, "to do a sphere") is a type of weaving pattern traditional in the Philippines. Patterns consisting entirely of straight lines are woven so as to create the illusion of curves and volumes. A sense of motion is also sought. Designs are geometric, but often representational. The techniques create illusionistic designs similar to op art patterns and were popular by the late 19th century, when the United States colonized the Philippines and American museums collected many traditional Philippine textiles.
Binakael patterns may use a two-block rep weave, making them double-sided, but with colour reversal.
In culture
Mara Coson's novel "Aliasing" was inspired by binakael weave.
Cebu Pacific introduced QR flight codes patterned after the nature-inspired designs of Ilocos Norte's traditional binakol weaving to promote local tourism.
See also
Op art
Inabel
T'nalak
References
Weaving
Optical illusions
Philippine handicrafts | Binakael | [
"Physics"
] | 222 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
63,028,251 | https://en.wikipedia.org/wiki/Chinese%20Society%20for%20Rock%20Mechanics%20%26%20Engineering | The Chinese Society for Rock Mechanics & Engineering (; abbreviated CSRME) is a professional body and learned society in the field of rock mechanics in China with a focus on water conservation and hydropower, geology and mining, railway transport, national defense engineering, disaster control, environmental protection. As of 2018, it has 6 subordinate working committees, 13 specialized committees, 12 branches, 19 local societies, and 12,674 individual members. It is a constituent of the China Association for Science and Technology (CAST) and a member of the International Society for Rock Mechanics (ISRM).
History
The Chinese Society for Rock Mechanics & Engineering started in 1978 as the Chinese national group (NG China) of the International Society for Rock Mechanics (ISRM). The preparatory committee was founded in 1981 and the society was officially established in June 1985.
Scientific publishing
Chinese Journal of Rock Mechanics and Engineering
References
External links
Geotechnical organizations
Rock mechanics
Scientific organizations established in 1985
Organizations based in Beijing
1985 establishments in China
1985 in Beijing | Chinese Society for Rock Mechanics & Engineering | [
"Engineering"
] | 199 | [
"Geotechnical organizations",
"Civil engineering organizations"
] |
63,029,619 | https://en.wikipedia.org/wiki/Pokhozhaev%27s%20identity | Pokhozhaev's identity is an integral relation satisfied by stationary localized solutions to a nonlinear Schrödinger equation or nonlinear Klein–Gordon equation. It was obtained by S.I. Pokhozhaev and is similar to the virial theorem. This relation is also known as G.H. Derrick's theorem. Similar identities can be derived for other equations of mathematical physics.
The Pokhozhaev identity for the stationary nonlinear Schrödinger equation
Here is a general form due to H. Berestycki and P.-L. Lions.
Let g be continuous and real-valued, with g(0) = 0.
Denote G(s) = ∫0^s g(t) dt.
Let u ∈ L∞loc(Rn), with ∇u ∈ L2(Rn), G(u(·)) ∈ L1(Rn), n ∈ N,
be a solution to the equation
−∇²u = g(u), x ∈ Rn,
in the sense of distributions.
Then u satisfies the relation
((n − 2)/2) ∫Rn |∇u(x)|² dx = n ∫Rn G(u(x)) dx.
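The identity can be motivated by a Derrick-type scaling argument; the following is a formal sketch (not part of the original statement), assuming enough regularity to differentiate under the integral sign:

```latex
% Formal scaling argument for -\nabla^2 u = g(u) on R^n
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Solutions are critical points of the action
\[
  S(u) = \frac{1}{2}\int_{\mathbb{R}^n} |\nabla u|^2 \,dx
       - \int_{\mathbb{R}^n} G(u) \,dx .
\]
Under the dilation $u_\lambda(x) = u(x/\lambda)$ the two terms scale differently:
\[
  S(u_\lambda) = \frac{\lambda^{n-2}}{2}\int_{\mathbb{R}^n} |\nabla u|^2 \,dx
               - \lambda^{n}\int_{\mathbb{R}^n} G(u) \,dx .
\]
Criticality at $\lambda = 1$, i.e.\ $\left.\tfrac{d}{d\lambda} S(u_\lambda)\right|_{\lambda=1} = 0$, yields
\[
  \frac{n-2}{2}\int_{\mathbb{R}^n} |\nabla u|^2 \,dx
  = n \int_{\mathbb{R}^n} G(u) \,dx .
\]
\end{document}
```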
The Pokhozhaev identity for the stationary nonlinear Dirac equation
There is a form of the virial identity for the stationary nonlinear Dirac equation in three spatial dimensions (and also the Maxwell-Dirac equations) and in arbitrary spatial dimension.
Let n ≥ 1 and N ∈ N,
and let αi, 1 ≤ i ≤ n, and β be the self-adjoint Dirac matrices of size N × N:
αiαj + αjαi = 2δij IN, βαi + αiβ = 0, β² = IN, 1 ≤ i, j ≤ n.
Let D0 = −iα·∇ = −i Σi αi ∂/∂xi be the massless Dirac operator.
Let g be continuous and real-valued, with g(0) = 0.
Denote G(s) = ∫0^s g(t) dt.
Let φ: Rn → CN be a spinor-valued solution that satisfies the stationary form of the nonlinear Dirac equation,
ωφ = D0φ + g(φ*βφ)βφ,
in the sense of distributions,
with some ω ∈ R.
Assume that
φ ∈ L∞loc(Rn, CN), with φ*D0φ, φ*φ, and G(φ*βφ) all in L1(Rn).
Then φ satisfies the relation
(n − 1) ∫Rn φ*(x)D0φ(x) dx = n ∫Rn (ω φ*(x)φ(x) − G(φ*(x)βφ(x))) dx.
See also
Virial theorem
Derrick's theorem
References
Mathematical identities
Theorems in mathematical physics
Physics theorems | Pokhozhaev's identity | [
"Physics",
"Mathematics"
] | 300 | [
"Mathematical theorems",
"Equations of physics",
"Theorems in mathematical physics",
"Mathematical identities",
"Mathematical problems",
"Algebra",
"Physics theorems"
] |
63,029,820 | https://en.wikipedia.org/wiki/Photoactivated%20adenylyl%20cyclase | Photoactivated adenylyl cyclase (PAC) is a protein consisting of an adenylyl cyclase enzyme domain directly linked to a BLUF (blue light receptor using FAD) type light sensor domain. When illuminated with blue light, the enzyme domain becomes active and converts ATP to cAMP, an important second messenger in many cells. In the unicellular flagellate Euglena gracilis, PACα and PACβ (euPACs) serve as a photoreceptor complex that senses light for photophobic responses and phototaxis. Small but potent PACs were identified in the genome of the bacteria Beggiatoa (bPAC) and Oscillatoria acuminata (OaPAC). While natural bPAC has some enzymatic activity in the absence of light, variants with no dark activity have been engineered (PACmn).
Use of PACs as optogenetic tools
As PACs consist of a light sensor and an enzyme in a single protein, they can be expressed in other species and cell types to manipulate cAMP levels with light. When bPAC is expressed in mouse sperm, blue light illumination speeds up the swimming of transgenic sperm cells and aids fertilization. When expressed in neurons, illumination changes the branching pattern of growing axons. PAC has been used in mice to clarify the function of neurons in the hypothalamus, which use cAMP signaling to control mating behavior. Expression of PAC together with K+-specific cyclic-nucleotide-gated ion channels (CNGs) has been used to hyperpolarize neurons at very low light levels, which prevents them from firing action potentials.
Rhodopsin guanylyl cyclases
Photoactivated guanylyl cyclases have been discovered in the aquatic fungi Blastocladiella emersonii and Catenaria anguillulae. Unlike PACs, these light-activated cyclases use retinal as their light sensor and are therefore rhodopsin guanylyl cyclases (RhGC). When expressed in Xenopus oocytes or mammalian neurons, RhGCs generate cGMP in response to green light. Therefore, they are considered useful optogenetic tools to investigate cGMP signaling.
References
Protein families
EC 4.6.1
Cell biology
Neurochemistry | Photoactivated adenylyl cyclase | [
"Chemistry",
"Biology"
] | 494 | [
"Cell biology",
"Protein classification",
"Biochemistry",
"Protein families",
"Neurochemistry"
] |
67,349,603 | https://en.wikipedia.org/wiki/Mass%20spectrometry%20at%20Swansea | Swansea University has had a long established history of development and innovation in mass spectrometry and chromatography.
Mass Spectrometry Research Unit
In 1975, John H. Beynon was appointed the Royal Society Research Professor and established the Mass Spectrometry Research Unit at Swansea University (at that time known as the University College of Swansea). In 1986, Dai Games moved from Cardiff University to become the Unit's new director.
In 1984, the first observation of the helium dimer dication He₂²⁺ was made at the unit. The ion is isoelectronic with molecular hydrogen but carries far more energy, about 3310 kJ per mole.
National Mass Spectrometry Service
A grant of £670,000 was awarded in 1985 by the then Science and Engineering Research Council (SERC) to establish a national mass spectrometry centre at Swansea University to provide an analytical service to British universities. It was officially opened in April 1987 by Lord Callaghan. In 2002, the centre was enlarged and the new laboratories were opened by Lord Morgan. Following a successful £3,000,000 contract renewal, Edwina Hart, the Minister for Economy, Science and Transport, officially re-opened the EPSRC National Research Facility after refurbishment in 2015.
Biomolecular Analysis Mass Spectrometry
A Biomolecular Analysis Mass Spectrometry (BAMS) facility was officially opened in 2003, headed by Professor Newton and Dr Dudley. It was a collaborative entity between the Department of Biological Sciences and the Medical School. It focused on the study of nucleosides, nucleotides and cyclic nucleotides.
Stable isotope mass spectrometry
Stable isotope mass spectrometry is conducted in the Department of Geography, and was recently used by the Landmark Trust to date the timber of Llwyn Celyn farmhouse precisely to the year 1420.
References
External links
National Mass Spectrometry Service
EPSRC National Research Facilities
Mass spectrometry
Chromatography
Swansea University | Mass spectrometry at Swansea | [
"Physics",
"Chemistry"
] | 407 | [
"Chromatography",
"Spectrum (physical sciences)",
"Separation processes",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
67,352,502 | https://en.wikipedia.org/wiki/Nano%20tape | Nano tape, also called gecko tape is a synthetic adhesive tape consisting of arrays of carbon nanotubes transferred onto a backing material of flexible polymer tape. These arrays are called synthetic setae and mimic the nanostructures found on the toes of a gecko; this is an example of biomimicry. The adhesion is achieved not with chemical adhesives, but via van der Waals forces, which are weak electric forces generated between two atoms or molecules that are very close to each other.
Explanation
Geckos show a remarkable ability to climb smooth vertical surfaces at high speed, exhibiting both strong attachment and easy, rapid removal of their feet, a combination known as shear adhesion.
On a gecko's foot, micrometer-sized elastic hairs called setae are split into nanometer-sized structures called spatulas. The shear adhesion is achieved by forming and breaking van der Waals forces between these microscopic structures and the substrate.
Nano tapes mimic these structures with carbon nanotube bundles, which simulate setae and individual nanotubes, which simulate spatulas, to achieve macroscopic shear adhesion and to translate the weak van der Waals interactions into high shear forces. The shear adhesion allows the tape to be easily peeled off in the manner a gecko lifts its foot. Since the carbon nanotube arrays leave no residue on the substrate, the tape can be reused many times.
History
Nano tape is one of the first developments of synthetic setae, which arose from a collaboration between the Manchester Centre for Mesoscience and Nanotechnology, and the Institute for Microelectronics Technology in Russia. Work started in 2001 and two years later results were published in Nature Materials.
The group prepared flexible fibres of polyimide as the synthetic setae structures on the surface of a 5 μm thick film of the same material, using electron beam lithography and dry etching in an oxygen plasma. The fibres were 2 μm long, with a diameter of around 500 nm and a periodicity of 1.6 μm, and covered an area of roughly 1 cm2. Initially, the team used a silicon wafer as a substrate, but found that the tape's adhesive power increased by almost 1,000 times if they used a soft bonding substrate such as Scotch tape. This is because the flexible substrate yields a much higher ratio of the number of setae in contact with the surface over the total number of setae.
The result of this "gecko tape" was tested by attaching a sample to the hand of a 15 cm high plastic Spider-Man figure weighing 40 g, which enabled it to stick to a glass ceiling. A small patch of the tape in contact with the glass was sufficient to carry the full weight of the figure. However, the adhesion coefficient was only 0.06, which is low compared with real geckos (8–16).
Commercial use
Commercial nano tape is usually sold as double-sided tape that is useful for hanging lightweight items, such as pictures and decorative items on smooth walls. Using superaligned carbon nanotubes, some nano tapes can stay sticky in extreme temperatures.
References
Adhesive tape
Biophysics
Biomimetics
Nanotechnology
Carbon nanotubes | Nano tape | [
"Physics",
"Materials_science",
"Engineering",
"Biology"
] | 679 | [
"Biological engineering",
"Applied and interdisciplinary physics",
"Bionics",
"Materials science",
"Bioinformatics",
"Biophysics",
"Nanotechnology",
"Biomimetics"
] |
67,359,130 | https://en.wikipedia.org/wiki/Manganese%20arsenide | Manganese arsenide (MnAs) is an intermetallic compound, an arsenide of manganese. It forms ferromagnetic crystals with hexagonal (NiAs-type) crystal structure, which convert to the paramagnetic orthorhombic β-phase upon heating to . MnAs has potential applications in spintronics, for electrical spin injection into GaAs and Si based devices.
References
Manganese compounds
Arsenides
Nickel arsenide structure type
Ferromagnetic materials | Manganese arsenide | [
"Physics"
] | 108 | [
"Materials",
"Ferromagnetic materials",
"Matter"
] |
71,682,162 | https://en.wikipedia.org/wiki/Katharina%20Lodders | Katharina Lodders is a German-American planetary scientist and cosmochemist who works as a research professor in the Department of Earth and Planetary Sciences at Washington University in St. Louis, where she co-directs the Planetary Chemistry Laboratory. Her research concerns the chemical composition of solar and stellar environments, including the atmospheres of planets, exoplanets, and brown dwarfs, and the study of the temperatures at which elements condense in stellar environments.
Education and career
Lodders completed her doctorate in 1991 at the University of Mainz, with research on the cosmochemistry of trace elements performed at the Max Planck Institute for Chemistry. She joined Washington University in St. Louis as a postdoctoral researcher in 1992 before continuing there as a research professor.
She served as a program director for galactic astronomy at the National Science Foundation from 2010 to 2013.
Books
Lodders is the coauthor of books including:
The Planetary Scientist's Companion (with Bruce Fegley, Jr., Oxford University Press, 1998)
Chemistry of the Solar System (with Bruce Fegley, Jr., Royal Society of Chemistry, 2010)
Recognition
Lodders won the 2021 Leonard Medal of The Meteoritical Society, its highest award, "for her work on the condensation of presolar grains in stellar atmospheres and her compilation of the Solar System Abundances of the Elements and the condensation temperatures of the elements".
References
External links
Year of birth missing (living people)
Living people
Planetary scientists
Women planetary scientists
Astrochemists
German chemists
German women chemists
American chemists
American women chemists
Johannes Gutenberg University Mainz alumni
Washington University in St. Louis faculty | Katharina Lodders | [
"Chemistry"
] | 335 | [
"Astrochemists"
] |
56,064,069 | https://en.wikipedia.org/wiki/Purging%20%28gas%29 | In fire and explosion prevention engineering, purging refers to the introduction of an inert (i.e. non-combustible) purge gas into a closed system (e.g. a container or a process vessel) to prevent the formation of an ignitable atmosphere. Purging relies on the principle that a combustible (or flammable) gas is able to undergo combustion (explode) only if mixed with air in the right proportions. The flammability limits of the gas define those proportions, i.e. the ignitable range.
Purge into service
Assume a closed system (e.g. a container or process vessel), initially containing air, which shall be prepared for safe introduction of a flammable gas, for instance as part of a start-up procedure. The system can be flushed with an inert gas to reduce the concentration of oxygen so that when the flammable gas is admitted, an ignitable mixture cannot form. In NFPA 56, this is known as purge-into-service. In combustion engineering terms, the admission of inert gas dilutes the oxygen below the limiting oxygen concentration.
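The quantity of purge gas required can be estimated with the usual well-mixed dilution model, in which the concentration of the displaced component decays exponentially with the number of vessel volumes of purge gas admitted. A minimal sketch follows; the vessel volume, flow rate and target oxygen concentration are illustrative assumptions, not values from NFPA 56 or any other standard:

    import math

    # Well-mixed dilution model: c(t) = c0 * exp(-Q*t/V) for an inert purge with
    # no oxygen in the inlet stream. All numbers are illustrative assumptions.
    V, Q = 10.0, 2.0          # vessel volume [m^3], purge gas flow [m^3/min]
    c0, c_target = 20.9, 5.0  # initial and target oxygen concentration [vol%]

    n = math.log(c0 / c_target)   # required number of vessel volume changes
    print(f"{n:.2f} volume changes = {n * V:.1f} m^3 of purge gas, {n * V / Q:.1f} min")

In practice a safety margin below the limiting oxygen concentration is applied, and imperfect mixing increases the quantity of purge gas beyond this idealized estimate.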
Purge out of service
Assume a closed system containing a flammable gas, which shall be prepared for safe ingress of air, for instance as part of a shut-down procedure. The system can be flushed with an inert gas to reduce the concentration of the flammable gas so that when air is introduced, an ignitable mixture cannot form. In NFPA 56 this is known as purge-out-of-service.
Benefits of having two purging terms
It is useful to have two separate terms for purging because purge-out-of-service requires much larger quantities of inert agent than purge-into-service. The terminology of German standards refers to purge-into-service as partial inerting and purge-out-of-service as total inerting, clearly indicating the difference between the two purging practices, although the choice of the term inerting, rather than purging, can be confusing (see below).
Comparison with other explosion prevention practices
Prevention of accidental fires and explosions can also be achieved by controlling sources of ignition. Purging with an inert gas provides a higher degree of safety however, because the practice ensures that an ignitable mixture never forms. Purging can therefore be said to rely on primary prevention, reducing the possibility of an explosion, whereas control of sources of ignition relies on secondary prevention, reducing the probability of an explosion. Primary prevention is also known as inherent safety.
Confusion with inerting
The purge gas is inert, i.e. by definition non-combustible, or more precisely, non-reactive. The most common purge gases commercially available in large quantities are nitrogen and carbon dioxide. Other inert gases, e.g. argon or helium may be used. Nitrogen and carbon dioxide are unsuitable purge gases in some applications, as these gases may undergo chemical reaction with fine dusts of certain light metals.
Because an inert purge gas is used, the purge procedure may (erroneously) be referred to as inerting in everyday language. This confusion may lead to dangerous situations. Carbon dioxide is a safe inert gas for purging, but it is an unsafe inert gas for inerting a system that already contains an ignitable atmosphere, as its discharge can generate electrostatic charges capable of igniting the vapors and resulting in an explosion.
See also
ATEX
Flammability limits
Limiting oxygen concentration
Inerting (gas)
External links
Fighting Smoldering Fires in Silos – A Cautionary Note on Using Carbon Dioxide. Guest post at www.mydustexplosionresearch.com blog, Nov 27, 2017
References
Explosion protection
Fire
Safety | Purging (gas) | [
"Chemistry",
"Engineering"
] | 769 | [
"Explosion protection",
"Combustion engineering",
"Combustion",
"Explosions",
"Fire"
] |
56,066,481 | https://en.wikipedia.org/wiki/Sara%20Rankin | Sara Margaret Rankin is a professor of Leukocyte and Stem Cell Biology at Imperial College London. She is known for her work in stimulating endogenous bone marrow stem cells to repair the body. Rankin identifies as being neurodiverse.
Personal life
After visiting the Bristol Radiotherapy Centre as a teenager, Rankin knew she wanted a career in research. Since 2011, Rankin has identified as being neurodiverse, with characteristics of dyslexia and dyspraxia.
Education
Rankin received first class honours for a BSc in pharmacology at King's College London in 1985. Rankin continued at the same institution for her PhD, completing in 1989.
Research
After her PhD, Rankin moved to the University of California, San Diego as a postdoctoral research fellow. She joined Imperial as a postdoctoral researcher in 1992.
Rankin is based in the Faculty of Medicine at the National Heart and Lung Institute (NHLI), where she was appointed Professor in 2010. Rankin and her team are trying to navigate the mesenchymal stem cells found in bone marrow to injured sites around the body, where they can promote regeneration in nearby tissue and dampen the immune system. The regulated movement of stem cells from bone marrow to sites of tissue damage could treat broken bones or heart disease.
She is the lead for biology and therapeutics at the Blast Injury Centre at Imperial College London, where she studies heterotopic ossification. She is a leader of the London Stem Cell Network. Rankin holds research grants from the Wellcome Trust, European Commission and British Legion.
Public engagement
Rankin is the NHLI division lead for Outreach and engagement. She is the co-founder of The Curious Act, a science public engagement initiative that runs creative science-based activities for the public. In 2011, she collaborated with Gina Czarnecki, acting as the lead scientist in "Wasted". In 2012 Czarnecki and Rankin created Palaces, a crystal resin sculpture embedded with milk teeth donated by children across the UK.
Rankin and The Curious Act have hosted a number of science-themed pop-up shops. The Heart and Lung Repair Shop, a two-week pop-up science shop in Hammersmith's Kings Mall, opened in July 2014. The Heart and Lung Convenience Store opened in Hammersmith in 2015.
In 2017 Rankin launched 2eMpowerUK, which runs STEM workshops for neurodiverse teenagers.
Honours and awards
Her awards and honours include:
2016 Imperial College London Collaboration Award for Societal Engagement
2011 Wellcome Trust Senior Investigator Award
2010 Imperial College London Rector's Award for Excellence in Pastoral Care
2019 Imperial College Julia Higgins Award for contribution to gender equality
References
21st-century British biologists
21st-century British women scientists
Academics of Imperial College London
Alumni of King's College London
British women biologists
Fellows of the Royal Society of Biology
Living people
Stem cell researchers
Year of birth missing (living people)
Scientists with dyslexia
British scientists with disabilities | Sara Rankin | [
"Biology"
] | 594 | [
"Stem cell researchers",
"Stem cell research"
] |
56,067,306 | https://en.wikipedia.org/wiki/SDS-PAGE | SDS-PAGE (sodium dodecyl sulfate–polyacrylamide gel electrophoresis) is a discontinuous electrophoretic system developed by Ulrich K. Laemmli which is commonly used as a method to separate proteins with molecular masses between 5 and 250 kDa. The combined use of sodium dodecyl sulfate (SDS, also known as sodium lauryl sulfate) and polyacrylamide gel eliminates the influence of structure and charge, and proteins are separated by differences in their size. At least up to 2012, the publication describing it was the most frequently cited paper by a single author, and the second most cited overall.
Properties
SDS-PAGE is an electrophoresis method that allows protein separation by mass. The medium (also referred to as ′matrix′) is a polyacrylamide-based discontinuous gel. The polyacrylamide gel is typically sandwiched between two glass plates in a slab gel. Although tube gels (in glass cylinders) were used historically, they were rapidly made obsolete with the invention of the more convenient slab gels. In addition, SDS (sodium dodecyl sulfate) is used. About 1.4 grams of SDS bind to a gram of protein, corresponding to about one SDS molecule per two amino acid residues. SDS acts as a surfactant, masking the protein's intrinsic charge and conferring very similar charge-to-mass ratios on all proteins. The intrinsic charges of the proteins are negligible in comparison to the SDS loading, and the positive charges are also greatly reduced in the basic pH range of a separating gel. Upon application of a constant electric field, the proteins migrate towards the anode, each with a different speed depending on its mass. This simple procedure allows precise protein separation by mass.
SDS tends to form spherical micelles in aqueous solutions above a certain concentration called the critical micellar concentration (CMC). Above the critical micellar concentration of 7 to 10 millimolar, SDS occurs in solution simultaneously as single molecules (monomers) and as micelles; below the CMC, SDS occurs only as monomers in aqueous solutions. At the critical micellar concentration, a micelle consists of about 62 SDS molecules. However, only SDS monomers bind to proteins via hydrophobic interactions, whereas the SDS micelles are anionic on the outside and do not adsorb any protein. SDS is amphipathic in nature, which allows it to unfold both polar and nonpolar sections of protein structure. In SDS concentrations above 0.1 millimolar, the unfolding of proteins begins, and above 1 millimolar, most proteins are denatured. Due to the strong denaturing effect of SDS and the subsequent dissociation of protein complexes, quaternary structures can generally not be determined with SDS. Exceptions are proteins that are stabilised by covalent cross-linking (e.g. -S-S- linkages) and the SDS-resistant protein complexes, which are stable even in the presence of SDS (the latter, however, only at room temperature). To denature the SDS-resistant complexes a high activation energy is required, which is achieved by heating. SDS resistance is based on a metastability of the protein fold. Although the native, fully folded, SDS-resistant protein does not have sufficient stability in the presence of SDS, the chemical equilibrium of denaturation at room temperature occurs slowly. Stable protein complexes are characterised not only by SDS resistance but also by stability against proteases and an increased biological half-life.
Alternatively, polyacrylamide gel electrophoresis can also be performed with the cationic surfactants CTAB in a CTAB-PAGE, or 16-BAC in a BAC-PAGE.
Procedure
The SDS-PAGE method is composed of gel preparation, sample preparation, electrophoresis, protein staining or western blotting and analysis of the generated banding pattern.
Gel production
When using different buffers in the gel (discontinuous gel electrophoresis), the gels are made up to one day prior to electrophoresis, so that the diffusion does not lead to a mixing of the buffers. The gel is produced by free radical polymerization in a mold consisting of two sealed glass plates with spacers between the glass plates. In a typical mini-gel setting, the spacers have a thickness of 0.75 mm or 1.5 mm, which determines the loading capacity of the gel. For pouring the gel solution, the plates are usually clamped in a stand which temporarily seals the otherwise open underside of the glass plates with the two spacers. For the gel solution, acrylamide is mixed as gel-former (usually 4% V/V in the stacking gel and 10-12 % in the separating gel), methylenebisacrylamide as a cross-linker, stacking or separating gel buffer, water and SDS. By adding the catalyst TEMED and the radical initiator ammonium persulfate (APS) the polymerisation is started. The solution is then poured between the glass plates without creating bubbles. Depending on the amount of catalyst and radical starter and depending on the temperature, the polymerisation lasts between a quarter of an hour and several hours. The lower gel (separating gel) is poured first and covered with a few drops of a barely water-soluble alcohol (usually buffer-saturated butanol or isopropanol), which eliminates bubbles from the meniscus and protects the gel solution of the radical scavenger oxygen. After the polymerisation of the separating gel, the alcohol is discarded and the residual alcohol is removed with filter paper. After addition of APS and TEMED to the stacking gel solution, it is poured on top of the solid separation gel. Afterwards, a suitable sample comb is inserted between the glass plates without creating bubbles. The sample comb is carefully pulled out after polymerisation, leaving pockets for the sample application. For later use of proteins for protein sequencing, the gels are often prepared the day before electrophoresis to reduce reactions of unpolymerised acrylamide with cysteines in proteins.
By using a gradient mixer, gradient gels with a gradient of acrylamide (usually from 4 to 12%) can be cast, which have a larger separation range of the molecular masses. Commercial gel systems (so-called pre-cast gels) usually use the buffer substance Bis-tris methane with a pH value between 6.4 and 7.2 both in the stacking gel and in the separating gel. These gels are delivered cast and ready-to-use. Since they use only one buffer (continuous gel electrophoresis) and have a nearly neutral pH, they can be stored for several weeks. The more neutral pH slows the hydrolysis and thus the decomposition of the polyacrylamide. Furthermore, there are fewer acrylamide-modified cysteines in the proteins. Due to the constant pH in collecting and separating gel there is no stacking effect. Proteins in BisTris gels can not be stained with ruthenium complexes. This gel system has a comparatively large separation range, which can be varied by using MES or MOPS in the running buffer.
Sample preparation
During sample preparation, the sample buffer, and thus SDS, is added in excess to the proteins, and the sample is then heated to 95 °C for five minutes, or alternatively 70 °C for ten minutes. Heating disrupts the secondary and tertiary structures of the protein by disrupting hydrogen bonds and stretching the molecules. Optionally, disulfide bridges can be cleaved by reduction. For this purpose, reducing thiols such as β-mercaptoethanol (β-ME, 5% by volume), dithiothreitol (DTT, 10–100 millimolar), dithioerythritol (DTE, 10 millimolar), tris(2-carboxyethyl)phosphine or tributylphosphine are added to the sample buffer. After cooling to room temperature, each sample is pipetted into its own well in the gel, which was previously immersed in electrophoresis buffer in the electrophoresis apparatus.
In addition to the samples, a molecular-weight size marker is usually loaded onto the gel. This consists of proteins of known sizes and thereby allows the estimation (with an error of ± 10%) of the sizes of the proteins in the actual samples, which migrate in parallel in different tracks of the gel. The size marker is often pipetted into the first or last pocket of a gel.
Electrophoresis
For separation, the denatured samples are loaded onto a gel of polyacrylamide, which is placed in an electrophoresis buffer with suitable electrolytes. Thereafter, a voltage (usually around 100 V, 10-20 V per cm gel length) is applied, which causes a migration of negatively charged molecules through the gel in the direction of the positively charged anode. The gel acts like a sieve. Small proteins migrate relatively easily through the mesh of the gel, while larger proteins are more likely to be retained and thereby migrate more slowly through the gel, thereby allowing proteins to be separated by molecular size. The electrophoresis lasts between half an hour to several hours depending on the voltage and length of gel used.
The fastest-migrating proteins (with a molecular weight of less than 5 kDa) form the buffer front together with the anionic components of the electrophoresis buffer, which also migrate through the gel. The area of the buffer front is made visible by adding the comparatively small, anionic dye bromophenol blue to the sample buffer. Due to the relatively small molecule size of bromophenol blue, it migrates faster than proteins. By optical control of the migrating colored band, the electrophoresis can be stopped before the dye and also the samples have completely migrated through the gel and leave it.
The most commonly used method is the discontinuous SDS-PAGE. In this method, the proteins migrate first into a collecting gel with neutral pH, in which they are concentrated and then they migrate into a separating gel with basic pH, in which the actual separation takes place. Stacking and separating gels differ by different pore size (4-6 % T and 10-20 % T), ionic strength and pH values (pH 6.8 or pH 8.8). The electrolyte most frequently used is an SDS-containing Tris-glycine-chloride buffer system. At neutral pH, glycine predominantly forms the zwitterionic form, at high pH the glycines lose positive charges and become predominantly anionic. In the collection gel, the smaller, negatively charged chloride ions migrate in front of the proteins (as leading ions) and the slightly larger, negatively and partially positively charged glycinate ions migrate behind the proteins (as initial trailing ions), whereas in the comparatively basic separating gel both ions migrate in front of the proteins. The pH gradient between the stacking and separation gel buffers leads to a stacking effect at the border of the stacking gel to the separation gel, since the glycinate partially loses its slowing positive charges as the pH increases and then, as the former trailing ion, overtakes the proteins and becomes a leading ion, which causes the bands of the different proteins (visible after a staining) to become narrower and sharper - the stacking effect. For the separation of smaller proteins and peptides, the TRIS-Tricine buffer system of Schägger and von Jagow is used due to the higher spread of the proteins in the range of 0.5 to 50 kDa.
Gel staining
At the end of the electrophoretic separation, all proteins are sorted by size and can then be analyzed by other methods, e. g. protein staining such as Coomassie staining (most common and easy to use), silver staining (highest sensitivity), stains all staining, Amido black 10B staining, Fast green FCF staining, fluorescent stains such as epicocconone stain and SYPRO orange stain, and immunological detection such as the Western Blot. The fluorescent dyes have a comparatively higher linearity between protein quantity and color intensity of about three orders of magnitude above the detection limit (the quantity of protein that can be estimated by color intensity). When using the fluorescent protein dye trichloroethanol, a subsequent protein staining is omitted if it was added to the gel solution and the gel was irradiated with UV light after electrophoresis.
In Coomassie staining, the gel is fixed in a 50% ethanol, 10% glacial acetic acid solution for 1 hour. The solution is then exchanged for a fresh one, and after 1 to 12 hours the gel is transferred to a staining solution (50% methanol, 10% glacial acetic acid, 0.1% Coomassie brilliant blue), followed by destaining, with the destaining solution (40% methanol, 10% glacial acetic acid) changed several times.
Analysis
Protein staining in the gel creates a documentable banding pattern of the various proteins.
Glycoproteins have differential levels of glycosylation and adsorb SDS more unevenly at the glycosylation sites, resulting in broader and blurred bands.
Membrane proteins, because of their transmembrane domain, are often composed of the more hydrophobic amino acids, have lower solubility in aqueous solutions, tend to bind lipids, and tend to precipitate in aqueous solutions due to hydrophobic effects when sufficient amounts of detergent are not present. This precipitation manifests itself for membrane proteins in a SDS-PAGE in "tailing" above the band of the transmembrane protein. In this case, more SDS can be used (by using more or more concentrated sample buffer) and the amount of protein in the sample application can be reduced.
An overloading of the gel with a soluble protein creates a semicircular band of this protein (e. g. in the marker lane of the image at 66 kDa), allowing other proteins with similar molecular weights to be covered.
A low contrast (as in the marker lane of the image) between bands within a lane indicates either the presence of many proteins (low purity) or, if using purified proteins and a low contrast occurs only below one band, it indicates a proteolytic degradation of the protein, which first causes degradation bands, and after further degradation produces a homogeneous color ("smear") below a band.
The documentation of the banding pattern is usually done by photographing or scanning. For a subsequent recovery of the molecules in individual bands, a gel extraction can be performed.
Archiving
After protein staining and documentation of the banding pattern, the polyacrylamide gel can be dried for archival storage. Proteins can be extracted from it at a later date. The gel is either placed in a drying frame (with or without the use of heat) or in a vacuum dryer. The drying frame consists of two parts, one of which serves as a base for a wet cellophane film to which the gel and a one percent glycerol solution are added. Then a second wet cellophane film is applied bubble-free, the second frame part is put on top and the frame is sealed with clips. The removal of the air bubbles avoids a fragmentation of the gel during drying. The water evaporates through the cellophane film. In contrast to the drying frame, a vacuum dryer generates a vacuum and heats the gel to about 50 °C.
Molecular mass determination
For a more accurate determination of the molecular weight, the relative migration distances of the individual protein bands are measured in the separating gel. The measurements are usually performed in triplicate for increased accuracy. The relative mobility (called Rf value or Rm value) is defined as the distance migrated by the protein band divided by the distance migrated by the buffer front. The distances are each measured from the beginning of the separation gel. The migration of the buffer front roughly corresponds to the migration of the dye contained in the sample buffer. The Rf's of the size marker are plotted semi-logarithmically against their known molecular weights. By comparison with the linear part of the generated graph or by a regression analysis, the molecular weight of an unknown protein can be determined by its relative mobility.
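As a minimal illustration of this calibration procedure, the following sketch fits the semi-logarithmic standard curve by linear regression; the marker masses and Rf values are hypothetical numbers for demonstration only:

    import numpy as np

    # Hypothetical size marker: known masses [kDa] and measured relative mobilities Rf
    marker_kda = np.array([250, 130, 70, 35, 15])
    marker_rf  = np.array([0.10, 0.25, 0.42, 0.63, 0.85])

    # Fit log10(mass) as a linear function of Rf (the linear part of the curve)
    slope, intercept = np.polyfit(marker_rf, np.log10(marker_kda), 1)

    # Estimate the mass of an unknown protein from its measured Rf
    rf_unknown = 0.50
    mass = 10 ** (slope * rf_unknown + intercept)
    print(f"estimated molecular mass: {mass:.1f} kDa")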
Bands of proteins with glycosylations can be blurred, as glycosylation is often heterogenous. Proteins with many basic amino acids (e. g. histones) can lead to an overestimation of the molecular weight or even not migrate into the gel at all, because they move slower in the electrophoresis due to the positive charges or even to the opposite direction. On the other hand, many acidic amino acids can lead to accelerated migration of a protein and an underestimation of its molecular mass.
Applications
The SDS-PAGE in combination with a protein stain is widely used in biochemistry for the quick and exact separation and subsequent analysis of proteins. It has comparatively low instrument and reagent costs and is an easy-to-use method. Because of its low scalability, it is mostly used for analytical purposes and less for preparative purposes, especially when larger amounts of a protein are to be isolated.
Additionally, SDS-PAGE is used in combination with the western blot for the determination of the presence of a specific protein in a mixture of proteins - or for the analysis of post-translational modifications. Post-translational modifications of proteins can lead to a different relative mobility (i.e. a band shift) or to a change in the binding of a detection antibody used in the western blot (i.e. a band disappears or appears).
In mass spectrometry of proteins, SDS-PAGE is a widely used method for sample preparation prior to spectrometry, mostly using in-gel digestion. In regards to determining the molecular mass of a protein, the SDS-PAGE is a bit more exact than an analytical ultracentrifugation, but less exact than a mass spectrometry or - ignoring post-translational modifications - a calculation of the protein molecular mass from the DNA sequence.
In medical diagnostics, SDS-PAGE is used as part of the HIV test and to evaluate proteinuria. In the HIV test, HIV proteins are separated by SDS-PAGE and subsequently detected by Western Blot with HIV-specific antibodies of the patient, if they are present in his blood serum. SDS-PAGE for proteinuria evaluates the levels of various serum proteins in the urine, e.g. Albumin, Alpha-2-macroglobulin and IgG.
Variants
SDS-PAGE is the most widely used method for gel electrophoretic separation of proteins. Two-dimensional gel electrophoresis sequentially combines isoelectric focusing or BAC-PAGE with a SDS-PAGE. Native PAGE is used if native protein folding is to be maintained. For separation of membrane proteins, BAC-PAGE or CTAB-PAGE may be used as an alternative to SDS-PAGE. For electrophoretic separation of larger protein complexes, agarose gel electrophoresis can be used, e.g. the SDD-AGE. Some enzymes can be detected via their enzyme activity by zymography.
Alternatives
While being one of the more precise and low-cost protein separation and analysis methods, the SDS-PAGE denatures proteins. Where non-denaturing conditions are necessary, proteins are separated by a native PAGE or different chromatographic methods with subsequent photometric quantification, for example affinity chromatography (or even tandem affinity purification), size exclusion chromatography, ion exchange chromatography. Proteins can also be separated by size in a tangential flow filtration or an ultrafiltration. Single proteins can be isolated from a mixture by affinity chromatography or by a pull-down assay. Some historically early and cost effective but crude separation methods usually based upon a series of extractions and precipitations using kosmotropic molecules, for example the ammonium sulfate precipitation and the polyethyleneglycol precipitation.
History
In 1948, Arne Tiselius was awarded the Nobel Prize in Chemistry for the discovery of the principle of electrophoresis as the migration of charged and dissolved atoms or molecules in an electric field. The use of a solid matrix (initially paper discs) in a zone electrophoresis improved the separation. The discontinuous electrophoresis of 1964 by L. Ornstein and B. J. Davis made it possible to improve the separation by the stacking effect. The use of cross-linked polyacrylamide hydrogels, in contrast to the previously used paper discs or starch gels, provided a higher stability of the gel and no microbial decomposition. The denaturing effect of SDS in continuous polyacrylamide gels and the consequent improvement in resolution was first described in 1965 by David F. Summers in the working group of James E. Darnell to separate poliovirus proteins. The current variant of the SDS-PAGE was described in 1970 by Ulrich K. Laemmli and initially used to characterise the proteins in the head of bacteriophage T4.
References
External links
Protocol for BisTris SDS-PAGE at OpenWetWare.org
Electrophoresis | SDS-PAGE | [
"Chemistry",
"Biology"
] | 4,519 | [
"Instrumental analysis",
"Molecular biology techniques",
"Electrophoresis",
"Biochemical separation processes"
] |
56,070,044 | https://en.wikipedia.org/wiki/Russell%20Varian%20Prize | The Russell Varian Prize was an international scientific prize awarded for a single, high-impact and innovative contribution in the field of nuclear magnetic resonance (NMR), that laid the foundation for the development of new technologies in the field. It honored the memory of Russell Varian, the pioneer behind the creation of the first commercial NMR spectrometer and the co-founder, in 1948, of Varian Associates, one of the first high-tech companies in Silicon Valley. The prize carried a monetary award of €15,000 and it was awarded annually between the years 2002 and 2015 (except for 2003) by a committee of experts in the field. The award ceremony alternated between the European Magnetic Resonance (EUROMAR) Conference and the International Council on Magnetic Resonance in Biological Systems (ICMRBS) Conference. Originally, the prize was sponsored by Varian, Inc. and later by Agilent Technologies, after the latter acquired Varian, Inc. in 2010. The prize was discontinued in 2016 after Agilent Technologies closed its NMR division.
Russell Varian Prize Awardees
2002 Jean Jeener. Contribution: Multi-dimensional Fourier NMR spectroscopy.
2004 Erwin L. Hahn. Contribution: Spin echo phenomena and experiments.
2005 Nicolaas Bloembergen. Contribution: Nuclear magnetic relaxation.
2006 John S. Waugh. Contribution: Average Hamiltonian theory.
2007 Alfred G. Redfield. Contribution: Relaxation Theory.
2008 Alexander Pines. Contribution: Cross-polarization method for NMR in solids.
2009 Albert W. Overhauser. Contribution: Nuclear Overhauser effect (NOE).
2010 Martin Karplus. Contribution: Karplus equation.
2011 Gareth A. Morris. Contribution: INEPT pulse sequence.
2012 Ray Freeman and Weston A. Anderson. Contribution: Double resonance.
2013 Lucio Frydman. Contribution: Ultrafast NMR.
2014 Ad Bax. Contribution: Homonuclear broad band decoupled absorption spectra.
2015 Malcolm Levitt. Contribution: Composite pulses.
See also
List of physics awards
References
Science and technology awards
Physics awards
Nuclear magnetic resonance
Awards established in 2002
Awards disestablished in 2016 | Russell Varian Prize | [
"Physics",
"Chemistry",
"Technology"
] | 434 | [
"Science and technology awards",
"Nuclear magnetic resonance",
"Physics awards",
"Nuclear physics"
] |
56,072,400 | https://en.wikipedia.org/wiki/Spin%20squeezing | Spin squeezing is a quantum process that decreases the variance of one of the angular momentum components in an ensemble of particles with a spin. The quantum states obtained are called spin squeezed states. Such states have been proposed for quantum metrology, to allow a better precision for estimating a rotation angle than classical interferometers can achieve. However, a wide body of work contradicts this analysis. In particular, these works show that the estimation precision obtainable for any quantum state can be expressed solely in terms of the state's response to the signal. As squeezing does not increase the state response to the signal, it cannot fundamentally improve the measurement precision.
Mathematical definition
Spin squeezed states for an ensemble of spins have been defined analogously to squeezed states of a bosonic mode. For any quantum state (not necessarily a pure state), let the $z$-axis point along the direction of its mean spin, so that $\langle J_x \rangle = \langle J_y \rangle = 0$. By the Heisenberg uncertainty relation, $(\Delta J_x)^2 (\Delta J_y)^2 \geq \frac{1}{4} \left| \langle J_z \rangle \right|^2$, where $J_l = \sum_{k=1}^{N} j_l^{(k)}$ for $l = x, y, z$ are the collective angular momentum components and $j_l^{(k)}$ are the single particle angular momentum components.
We say that the state is spin-squeezed in the $x$-direction, if the variance of the $x$-component is smaller than the square root of the right-hand side of the inequality above, $(\Delta J_x)^2 < \frac{1}{2} \left| \langle J_z \rangle \right|$. A different definition was based on using states with a reduced spin variance for metrology.
Relations to quantum entanglement
Spin squeezed states can be proven to be entangled based on measuring the spin length and the variance of the spin in an orthogonal direction. Let us define the spin squeezing parameter
$\xi_{\mathrm{s}}^2 = N \frac{(\Delta J_z)^2}{\langle J_x \rangle^2 + \langle J_y \rangle^2}$,
where $N$ is the number of the spin-$\tfrac{1}{2}$ particles in the ensemble (here the axes are labeled so that the mean spin lies in the $x$–$y$ plane and the squeezed direction is $z$). Then, if $\xi_{\mathrm{s}}^2$ is smaller than $1$, the state is entangled. It has also been shown that a higher and higher level of multipartite entanglement is needed to achieve a larger and larger degree of spin squeezing.
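As a numerical illustration of this criterion, the following minimal sketch builds the collective spin operators in the maximal-spin Dicke subspace and evaluates the squeezing parameter for a coherent spin state and for a state squeezed by one-axis twisting (Kitagawa–Ueda). The particle number and twisting strength are arbitrary illustrative choices, and the minimal perpendicular variance is found by a simple scan, which matches the parameter above up to a relabeling of the axes:

    import numpy as np

    def collective_spin_ops(N):
        """Collective operators J_x, J_y, J_z for N spin-1/2 particles,
        restricted to the maximal-spin (j = N/2) Dicke subspace."""
        j = N / 2
        m = np.arange(j, -j - 1, -1)          # m = j, j-1, ..., -j
        jz = np.diag(m).astype(complex)
        # <j, m+1 | J_+ | j, m> = sqrt(j(j+1) - m(m+1))
        jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1).astype(complex)
        jx = (jp + jp.conj().T) / 2
        jy = (jp - jp.conj().T) / (2 * 1j)
        return jx, jy, jz

    def xi_squared(N, state, jx, jy, jz):
        """N * (minimal variance perpendicular to the mean spin) / <J_x>^2,
        for states whose mean spin points along x."""
        ev = lambda op: np.real(np.vdot(state, op @ state))
        best = min(
            ev(jperp @ jperp) - ev(jperp) ** 2
            for t in np.linspace(0, np.pi, 721)
            for jperp in [np.cos(t) * jy + np.sin(t) * jz]
        )
        return N * best / ev(jx) ** 2

    N = 40
    jx, jy, jz = collective_spin_ops(N)

    # Coherent spin state along +x: eigenvector of J_x with the largest eigenvalue.
    css = np.linalg.eigh(jx)[1][:, -1]
    print("coherent state: xi^2 =", round(xi_squared(N, css, jx, jy, jz), 3))   # ~1

    # One-axis twisting exp(-i mu Jz^2); mu is a small angle chosen for illustration.
    mu = 0.03
    twisted = np.diag(np.exp(-1j * mu * np.real(np.diag(jz)) ** 2)) @ css
    print("twisted state:  xi^2 =", round(xi_squared(N, twisted, jx, jy, jz), 3))  # < 1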
Experiments with atomic ensembles
Experiments have been carried out with cold or even room temperature atomic ensembles. In this case, the atoms do not interact with each other. Hence, in order to entangle them, they make them interact with light which is then measured. A 20 dB (100 times) spin squeezing has been obtained in such a system. Simultaneous spin squeezing of two ensembles, which interact with the same light field, has been used to entangle the two ensembles. Spin squeezing can be enhanced by using cavities.
Cold gas experiments have also been carried out with Bose-Einstein Condensates (BEC). In this case, the spin squeezing is due to the interaction between the atoms.
Most experiments have been carried out using only two internal states of the particles, hence, effectively with spin- particles. There are also experiments aiming at spin squeezing with particles of a higher spin. Nuclear-electron spin squeezing within the atoms, rather than interatomic spin squeezing, has also been created in room temperature gases.
Creating large spin squeezing
Experiments with atomic ensembles are usually implemented in free-space with Gaussian laser beams. To enhance the spin squeezing effect towards generating non-Gaussian states, which are metrologically useful, the free-space apparatuses are not enough. Cavities and nanophotonic waveguides have been used to enhance the squeezing effect with less atoms.
For the waveguide systems, the atom-light coupling and the squeezing effect can be enhanced using the evanescent field near the waveguides, and the type of atom-light interaction can be controlled by choosing a proper polarization state of the guided input light, the internal state subspace of the atoms and the geometry of the trapping shape. Spin squeezing protocols using nanophotonic waveguides based on the birefringence effect and the Faraday effect have been proposed. By optimizing the optical depth or cooperativity through controlling the geometric factors mentioned above, the Faraday protocol demonstrates that, to enhance the squeezing effect, one needs to find a geometry that generates a weaker local electric field at the atom positions. This is counterintuitive, because usually a strong local field is required to enhance atom-light coupling. But it opens the door to performing very precise measurements with little disruption to the quantum system, two goals that cannot be simultaneously satisfied with a strong field.
Generalized spin squeezing
In entanglement theory, generalized spin squeezing also refers to any criterion that is given with the first and second moments of angular momentum coordinates, and detects entanglement in a quantum state. For a large ensemble of spin-1/2 particles a complete set of such relations have been found, which have been generalized to particles with an arbitrary spin. Apart from detecting entanglement in general, there are relations that detect multipartite entanglement. Some of the generalized spin-squeezing entanglement criteria have also a relation to quantum metrological tasks. For instance, planar squeezed states can be used to measure an unknown rotation angle optimally.
References
Quantum information science
Quantum optics | Spin squeezing | [
"Physics"
] | 978 | [
"Quantum optics",
"Quantum mechanics"
] |
56,073,671 | https://en.wikipedia.org/wiki/Solovay%E2%80%93Kitaev%20theorem | In quantum information and computation, the Solovay–Kitaev theorem says that if a set of single-qubit quantum gates generates a dense subgroup of SU(2), then that set can be used to approximate any desired quantum gate with a short sequence of gates that can also be found efficiently. This theorem is considered one of the most significant results in the field of quantum computation and was first announced by Robert M. Solovay in 1995 and independently proven by Alexei Kitaev in 1997. Michael Nielsen and Christopher M. Dawson have noted its importance in the field.
A consequence of this theorem is that a quantum circuit of $m$ constant-qubit gates can be approximated to error $\varepsilon$ (in operator norm) by a quantum circuit of $O(m \log^c(m/\varepsilon))$ gates from a desired finite universal gate set (where $c$ is a constant). By comparison, just knowing that a gate set is universal only implies that constant-qubit gates can be approximated by a finite circuit from the gate set, with no bound on its length. So, the Solovay–Kitaev theorem shows that this approximation can be made surprisingly efficient, thereby justifying that quantum computers need only implement a finite number of gates to gain the full power of quantum computation.
Statement
Let $\mathcal{G}$ be a finite set of elements in SU(2) containing its own inverses (so $g \in \mathcal{G}$ implies $g^{-1} \in \mathcal{G}$) and such that the group $\langle \mathcal{G} \rangle$ they generate is dense in SU(2). Consider some $\varepsilon > 0$. Then there is a constant $c$ such that for any $U \in \mathrm{SU}(2)$, there is a sequence $S$ of gates from $\mathcal{G}$ of length $O(\log^c(1/\varepsilon))$ such that $\|S - U\| \leq \varepsilon$. That is, $S$ approximates $U$ to operator norm error $\varepsilon$. Furthermore, there is an efficient algorithm to find such a sequence. More generally, the theorem also holds in SU(d) for any fixed $d$.
This theorem also holds without the assumption that $\mathcal{G}$ contains its own inverses, although presently with a larger value of $c$ that also increases with the dimension $d$.
Quantitative bounds
The constant $c$ can be made to be $3 + \delta$ for any fixed $\delta > 0$. However, there exist particular gate sets for which we can take $c = 1$, which makes the length of the gate sequence optimal up to a constant factor.
Proof idea
Every known proof of the fully general Solovay–Kitaev theorem proceeds by recursively constructing a gate sequence giving increasingly good approximations to $U \in \mathrm{SU}(2)$. Suppose we have an approximation $U_{n-1}$ such that $\|U - U_{n-1}\| \leq \varepsilon_{n-1}$. Our goal is to find a sequence of gates approximating $U U_{n-1}^{-1}$ to error $\varepsilon_n$, for $\varepsilon_n < \varepsilon_{n-1}$. By concatenating this sequence of gates with $U_{n-1}$, we get a sequence of gates $U_n$ such that $\|U - U_n\| \leq \varepsilon_n$.
The main idea in the original argument of Solovay and Kitaev is that commutators of elements close to the identity can be approximated "better-than-expected". Specifically, for $S, T \in \mathrm{SU}(2)$ satisfying $\|S - I\| \leq \varepsilon$ and $\|T - I\| \leq \varepsilon$, and approximations $S', T'$ satisfying $\|S' - S\| \leq \varepsilon^2$ and $\|T' - T\| \leq \varepsilon^2$, then
$\|S T S^{-1} T^{-1} - S' T' S'^{-1} T'^{-1}\| = O(\varepsilon^3),$
where the big O notation hides higher-order terms. One can naively bound the above expression to be $O(\varepsilon^2)$, but the group commutator structure creates substantial error cancellation.
We can use this observation to approximate $U U_{n-1}^{-1}$ as a group commutator $S T S^{-1} T^{-1}$. This can be done such that both $S$ and $T$ are close to the identity (since $\|U U_{n-1}^{-1} - I\| \leq \varepsilon_{n-1}$). So, if we recursively compute gate sequences approximating $S$ and $T$ to error $\varepsilon_{n-1}$, we get a gate sequence approximating $U U_{n-1}^{-1}$ to the desired better precision $\varepsilon_n$ with $\varepsilon_n = O(\varepsilon_{n-1}^{3/2})$. We can get a base case approximation with constant $\varepsilon_0$ with an exhaustive search of bounded-length gate sequences.
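This cancellation can be checked numerically. The sketch below draws random SU(2) elements at distance of order ε from the identity, perturbs them at order ε², and verifies that the commutator error scales as ε³ (the construction of the random elements is an illustrative choice, not part of the proof):

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def su2_near_identity(dist, rng):
        # Rotation by angle ~dist about a random axis: operator-norm distance O(dist) from I.
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)
        H = n[0] * X + n[1] * Y + n[2] * Z
        w, v = np.linalg.eigh(H)
        return v @ np.diag(np.exp(-1j * dist / 2 * w)) @ v.conj().T

    def comm(a, b):
        return a @ b @ a.conj().T @ b.conj().T

    rng = np.random.default_rng(0)
    for eps in (0.1, 0.05, 0.025):
        S, T = su2_near_identity(eps, rng), su2_near_identity(eps, rng)
        # Perturb S and T at order eps^2 to obtain the approximations S', T'.
        Sp = su2_near_identity(eps**2, rng) @ S
        Tp = su2_near_identity(eps**2, rng) @ T
        err = np.linalg.norm(comm(S, T) - comm(Sp, Tp), 2)
        # A roughly constant ratio err/eps^3 across scales indicates O(eps^3) behavior.
        print(f"eps = {eps:5.3f}: commutator error = {err:.2e}, error/eps^3 = {err / eps**3:.2f}")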
Proof of Solovay-Kitaev Theorem
Let us choose the initial accuracy $\varepsilon_0$ small enough that $C^2 \varepsilon_0 < 1$, where $C$ is the constant from the iterated "shrinking" lemma; this is needed both to be able to apply the lemma and to make sure that $\varepsilon_k$ decreases as we increase $k$.
Since $\langle \mathcal{G} \rangle$ is dense in SU(2), we can choose $\ell_0$ large enough so that the set of words of length $\ell_0$ over $\mathcal{G}$ is an $\varepsilon_0$-net for SU(2) (and hence for the neighborhood of the identity, as well), no matter how small $\varepsilon_0$ is. Thus, given any $U \in \mathrm{SU}(2)$, we can choose $U_0$ from this net such that $\|U - U_0\| \leq \varepsilon_0$. Let $\Delta_1 = U U_0^{-1}$ be the "difference" of $U$ and $U_0$. Then
$\|\Delta_1 - I\| = \|U U_0^{-1} - I\| = \|U - U_0\| \leq \varepsilon_0.$
Hence, $\Delta_1$ lies within distance $\varepsilon_0$ of the identity. By invoking the iterated "shrinking" lemma with $\varepsilon = \varepsilon_0$, there exists $U_1$, a word of length $5\ell_0$, such that
$\|\Delta_1 - U_1\| \leq \varepsilon_1, \qquad \varepsilon_1 = C \varepsilon_0^{3/2}.$
Similarly let $\Delta_2 = \Delta_1 U_1^{-1}$. Then
$\|\Delta_2 - I\| = \|\Delta_1 - U_1\| \leq \varepsilon_1.$
Thus, $\Delta_2$ lies within distance $\varepsilon_1$ of the identity and we can invoke the iterated "shrinking" lemma (with $\varepsilon = \varepsilon_1$ this time) to get $U_2$, a word of length $5^2 \ell_0$, such that
$\|\Delta_2 - U_2\| \leq \varepsilon_2, \qquad \varepsilon_2 = C \varepsilon_1^{3/2}.$
If we continue in this way, after $k$ steps we get $U_k$ such that
$\|U - U_k U_{k-1} \cdots U_1 U_0\| \leq \varepsilon_k, \qquad \varepsilon_k = C \varepsilon_{k-1}^{3/2} = \frac{(C^2 \varepsilon_0)^{(3/2)^k}}{C^2}.$
Thus, we have obtained a sequence of
$\ell_0 (1 + 5 + \cdots + 5^k) = \ell_0 \frac{5^{k+1} - 1}{4} = O(5^k)$
gates that approximates $U$ to accuracy $\varepsilon_k$. To determine the value of $k$, we set $\varepsilon_k = \varepsilon$
and solve for $k$:
$\left(\frac{3}{2}\right)^k = \frac{\ln\left(1/(C^2 \varepsilon)\right)}{\ln\left(1/(C^2 \varepsilon_0)\right)}, \qquad k = \log_{3/2} \frac{\ln\left(1/(C^2 \varepsilon)\right)}{\ln\left(1/(C^2 \varepsilon_0)\right)}.$
Now we can always choose $\varepsilon$ slightly smaller so that the obtained value
of $k$ is an integer. The number of gates is then
$O(5^k) = O\!\left(\left(\ln \tfrac{1}{\varepsilon}\right)^{\log_{3/2} 5}\right), \qquad \log_{3/2} 5 = \frac{\ln 5}{\ln(3/2)} \approx 3.97.$
Hence for any $\varepsilon > 0$ there is a sequence of $O(\ln^{3.97}(1/\varepsilon))$ gates that
approximates $U$ to accuracy $\varepsilon$.
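The recursion can also be made concrete numerically. The following sketch solves $\varepsilon_k = \varepsilon$ for $k$ and reports the resulting sequence length; the constants $C$, $\varepsilon_0$ and $\ell_0$ are arbitrary illustrative values, not values from any particular gate set:

    import numpy as np

    # Illustrative (assumed) values: shrinking-lemma constant C, base accuracy
    # eps0 < 1/C^2, and base sequence length l0.
    C, eps0, l0 = 8.0, 0.01, 10

    def sk_cost(eps):
        # eps_k = (C^2 * eps0)^{(3/2)^k} / C^2; solve eps_k <= eps for the smallest k
        k = max(0, int(np.ceil(np.log(np.log(C**2 * eps) / np.log(C**2 * eps0)) / np.log(1.5))))
        length = l0 * (5**(k + 1) - 1) // 4      # l0 * (1 + 5 + ... + 5^k)
        return k, length

    for eps in (1e-2, 1e-4, 1e-8):
        k, length = sk_cost(eps)
        print(f"eps = {eps:.0e}: k = {k}, total sequence length = {length} gates")

The rapidly growing gate counts illustrate why the exponent $c \approx 3.97$ of the generic construction matters in practice.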
Solovay-Kitaev algorithm for qubits
Here the main ideas used in the SK algorithm are presented. The SK algorithm may be expressed in nine lines of pseudocode. Each of these lines is explained in detail below, but the algorithm is presented here in its entirety both for the reader's reference and to stress its conceptual simplicity:
function Solovay-Kitaev(Gate U, depth n) is
    if (n == 0)
        return Basic Approximation to U
    else
        set U_{n-1} = Solovay-Kitaev(U, n-1)
        set V, W = GC-Decompose(U U_{n-1}†)
        set V_{n-1} = Solovay-Kitaev(V, n-1)
        set W_{n-1} = Solovay-Kitaev(W, n-1)
        return U_n = V_{n-1} W_{n-1} V_{n-1}† W_{n-1}† U_{n-1};
end function
Let us examine each of these lines in detail. The first line:
function Solovay-Kitaev(Gate U, depth n) is
indicates that the algorithm is a function with two inputs: an arbitrary single-qubit quantum gate, $U$, which we desire to approximate, and a non-negative integer, $n$, which controls the accuracy of the approximation. The function returns a sequence of instructions which approximates $U$ to an accuracy $\varepsilon_n$, where $\varepsilon_n$ is a decreasing function of $n$, so that as $n$ gets larger, the accuracy gets better, with $\varepsilon_n \to 0$ as $n \to \infty$. $\varepsilon_n$ is described in detail below.
The Solovay-Kitaev function is recursive, so that to obtain an $\varepsilon_n$-approximation to $U$, it will call itself to obtain $\varepsilon_{n-1}$-approximations to certain unitaries. The recursion terminates at $n = 0$, beyond which no further recursive calls are made:
    if (n == 0)
        return Basic Approximation to U
In order to implement this step it is assumed that a preprocessing stage has been completed which allows one to find a basic $\varepsilon_0$-approximation to arbitrary $U \in \mathrm{SU}(2)$. Since $\varepsilon_0$ is a constant, in principle this preprocessing stage may be accomplished simply by enumerating and storing a large number of instruction sequences from $\mathcal{G}$, say up to some sufficiently large (but fixed) length $\ell_0$, and then providing a lookup routine which, given $U$, returns the closest sequence.
At higher levels of recursion, to find an $\varepsilon_n$-approximation to $U$, one begins by finding an $\varepsilon_{n-1}$-approximation to $U$:
    else
        set U_{n-1} = Solovay-Kitaev(U, n-1)
$U_{n-1}$ is used as a step towards finding an improved approximation to $U$. Defining $\Delta \equiv U U_{n-1}^\dagger$, the next three steps of the algorithm aim to find an $\varepsilon_n$-approximation to $\Delta$, where $\varepsilon_n$ is some improved level of accuracy, i.e., $\varepsilon_n < \varepsilon_{n-1}$. Finding such an approximation also enables us to obtain an $\varepsilon_n$-approximation to $U$, simply by concatenating the exact sequence of instructions for $U_{n-1}$ with the $\varepsilon_n$-approximating sequence for $\Delta$.
How do we find such an approximation to $\Delta$? First, observe that $\Delta$ is within a distance $\varepsilon_{n-1}$ of the identity. This follows from the definition of $\Delta$ and the fact that $U_{n-1}$ is within a distance $\varepsilon_{n-1}$ of $U$.
Second, decompose $\Delta$ as a group commutator $\Delta = V W V^\dagger W^\dagger$ of unitary gates $V$ and $W$. Although this is not obvious, it turns out that for any $\Delta$ there is always an infinite set of choices for $V$ and $W$ such that $\Delta = V W V^\dagger W^\dagger$. For our purposes it is important that we find $V$ and $W$ such that $d(I, V), d(I, W) \leq c_{\mathrm{gc}} \sqrt{\varepsilon_{n-1}}$ for some constant $c_{\mathrm{gc}}$. We call such a decomposition a balanced group commutator.
    set V, W = GC-Decompose(U U_{n-1}†)
For practical implementations we will see below that it is useful to have $c_{\mathrm{gc}}$ as small as possible.
The next step is to find instruction sequences which are $\varepsilon_{n-1}$-approximations to $V$ and $W$:
    set V_{n-1} = Solovay-Kitaev(V, n-1)
    set W_{n-1} = Solovay-Kitaev(W, n-1)
The group commutator of $V_{n-1}$ and $W_{n-1}$ turns out to be an $\varepsilon_n \equiv c_{\mathrm{approx}} \varepsilon_{n-1}^{3/2}$-approximation to $\Delta$, for some small constant $c_{\mathrm{approx}}$. Provided $\varepsilon_{n-1} < 1/c_{\mathrm{approx}}^2$, we see that $\varepsilon_n < \varepsilon_{n-1}$, and this procedure therefore provides an improved approximation to $\Delta$, and thus to $U$.
The constant $c_{\mathrm{approx}}$ is important as it determines the precision $\varepsilon_0$ required of the initial approximations. In particular, we see that for this construction to guarantee that $\varepsilon_n \to 0$ we must have $\varepsilon_0 < 1/c_{\mathrm{approx}}^2$.
The algorithm concludes by returning the sequences approximating the group commutator, as well as $U_{n-1}$:
    return U_n = V_{n-1} W_{n-1} V_{n-1}† W_{n-1}† U_{n-1};
Summing up, the function Solovay-Kitaev(U, n) returns a sequence which provides an $\varepsilon_n$-approximation to the desired unitary $U$. The five constituents in this sequence are all obtained by calling the function at the $(n-1)$-th level of recursion.
References
Mathematical theorems
Quantum computing
Quantum information theory | Solovay–Kitaev theorem | [
"Mathematics"
] | 1,762 | [
"Mathematical theorems",
"Mathematical problems",
"nan"
] |
54,500,840 | https://en.wikipedia.org/wiki/Bohr%20model%20of%20the%20chemical%20bond | In addition to the model of the atom, Niels Bohr also proposed a model of the chemical bond.
He proposed this model first in the article "Systems containing several nuclei" - the third and last of the classic series of articles by Bohr, published in November 1913 in Philosophical Magazine.
According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion - the electrons in the ring are at the maximum distance from each other.
Thus, according to this model, the methane molecule is a regular tetrahedron, at whose center the carbon nucleus is located and at whose corners are the hydrogen nuclei. The chemical bonds between them are formed by four two-electron rings rotating around the lines connecting the center with the corners.
The Bohr model of the chemical bond could not explain the properties of the molecules. Attempts to improve it have been undertaken many times, but have not led to success.
A working theory of chemical bonding was formulated only by quantum mechanics, on the basis of the uncertainty principle and the Pauli exclusion principle. In contrast to the Bohr model of chemical bonding, it turned out that the electron cloud is mainly concentrated on the line between the nuclei, providing the Coulomb attraction between them. For many-electron atoms, the valence bond theory, laid down in 1927 by Walter Heitler and Fritz London, was a successful approximation.
References
Bibliography
Chemical bonding
Quantum chemistry
Chemistry theories
Electron | Bohr model of the chemical bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 373 | [
"Electron",
"Quantum chemistry stubs",
"Quantum chemistry",
"Molecular physics",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Chemical bonding",
"Physical chemistry stubs",
" and optical physics"
] |
54,501,624 | https://en.wikipedia.org/wiki/Factorization%20homology | In algebraic topology and category theory, factorization homology is a variant of topological chiral homology, motivated by an application to topological quantum field theory and cobordism hypothesis in particular. It was introduced by David Ayala, John Francis, and Nick Rozenblyum.
References
External links
Homological algebra | Factorization homology | [
"Mathematics"
] | 65 | [
"Mathematical structures",
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Homological algebra"
] |
54,502,546 | https://en.wikipedia.org/wiki/Chrystal%27s%20equation | In mathematics, Chrystal's equation is a first order nonlinear ordinary differential equation, named after the mathematician George Chrystal, who discussed the singular solution of this equation in 1896. The equation reads as
$\left(\frac{dy}{dx}\right)^2 + Ax\frac{dy}{dx} + By + Cx^2 = 0,$ where $A, B, C$ are constants, which upon solving for $\frac{dy}{dx}$ gives $\frac{dy}{dx} = -\frac{Ax}{2} \pm \frac{1}{2}\left(A^2x^2 - 4By - 4Cx^2\right)^{1/2}.$
This equation is a generalization of Clairaut's equation, since it reduces to a form of Clairaut's equation under the condition given below.
Solution
Introducing the transformation $4By = \left(A^2 + AB - 4C - (A^2 + AB - 4C + z^2) + z^2 \right)x^2$, that is, $4By = \left(A^2 - 4C - z^2\right)x^2$, gives
$xz\frac{dz}{dx} = A^2 + AB - 4C \mp Bz - z^2.$
Now, the equation is separable, thus
$\frac{z\,dz}{A^2 + AB - 4C \mp Bz - z^2} = \frac{dx}{x}.$
The denominator on the left hand side can be factorized if we solve the roots of the equation $A^2 + AB - 4C \mp Bz - z^2 = 0$; denoting the roots by $a$ and $-b$, the separated equation becomes
$\frac{z\,dz}{(z-a)(z+b)} = -\frac{dx}{x}.$
If $a + b \neq 0$, the solution is
$x\,(z-a)^{a/(a+b)}\,(z+b)^{b/(a+b)} = k,$
where $k$ is an arbitrary constant. If $a + b = 0$ ($a = -b$, a repeated root), then the solution is
$x\,(z-a)\,e^{-a/(z-a)} = k.$
When one of the roots is zero, i.e., $A^2 + AB - 4C = 0$, the equation reduces to a form of Clairaut's equation and a parabolic solution is obtained in this case, and the solution is
$4By = -ABx^2 - (k \mp Bx)^2,$
a one-parameter family of parabolas.
The above family of parabolas is enveloped by the parabola $4y + Ax^2 = 0$, therefore this enveloping parabola is a singular solution.
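This can be verified symbolically; the following minimal sketch (using sympy, and the form of the equation as reconstructed above) substitutes the enveloping parabola into Chrystal's equation:

    import sympy as sp

    x, A, B, C = sp.symbols('x A B C')

    # Candidate singular solution: the enveloping parabola 4*y + A*x**2 = 0.
    y = -A * x**2 / 4
    p = sp.diff(y, x)

    residual = sp.simplify(p**2 + A*x*p + B*y + C*x**2)
    print(sp.factor(residual))
    # residual = x**2*(4*C - A**2 - A*B)/4: it vanishes identically exactly when
    # A**2 + A*B - 4*C = 0, the same condition under which one root above is zero.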
References
Eponymous equations of physics
Ordinary differential equations | Chrystal's equation | [
"Physics"
] | 221 | [
"Eponymous equations of physics",
"Equations of physics"
] |
54,503,331 | https://en.wikipedia.org/wiki/Weyl%27s%20tube%20formula | Weyl's tube formula gives the volume of an object defined as the set of all points within a small distance of a manifold.
Let $\Sigma$ be an oriented, closed, two-dimensional surface embedded in three-dimensional Euclidean space, and let $T_r(\Sigma)$ denote the set of all points within a distance $r$ of the surface $\Sigma$. Then, for $r$ sufficiently small, the volume of $T_r(\Sigma)$ is
$V = 2Ar + \frac{4\pi}{3}\chi(\Sigma)\,r^3,$
where $A$ is the area of the surface and $\chi(\Sigma)$ is its Euler characteristic. This expression can be generalized to the case where $\Sigma$ is a $q$-dimensional submanifold of $n$-dimensional Euclidean space $\mathbb{R}^n$.
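For example (a direct check using the formula above): for a sphere of radius $R$ in $\mathbb{R}^3$, one has $A = 4\pi R^2$ and $\chi = 2$, so
$V = 8\pi R^2 r + \frac{8\pi}{3} r^3 = \frac{4\pi}{3}\left[(R+r)^3 - (R-r)^3\right],$
which is exactly the volume of the shell of all points within distance $r < R$ of the sphere.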
References
Manifolds | Weyl's tube formula | [
"Mathematics"
] | 105 | [
"Topological spaces",
"Topology",
"Manifolds",
"Space (mathematics)"
] |
54,505,172 | https://en.wikipedia.org/wiki/Polynomial%20differential%20form | In algebra, the ring of polynomial differential forms on the standard n-simplex is the differential graded algebra:
$\Omega^*_{\mathrm{poly}}(\Delta^n) = \mathbb{Q}[t_0, \dots, t_n, dt_0, \dots, dt_n] \big/ \left( \textstyle\sum_i t_i - 1, \ \sum_i dt_i \right).$
Varying $n$, it determines the simplicial commutative dg algebra:
$n \mapsto \Omega^*_{\mathrm{poly}}(\Delta^n)$
(each monotone map $\theta\colon [n] \to [m]$ induces the map $\theta^*\colon \Omega^*_{\mathrm{poly}}(\Delta^m) \to \Omega^*_{\mathrm{poly}}(\Delta^n)$, $t_i \mapsto \sum_{\theta(j) = i} t_j$).
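As a small worked example (a standard computation, spelled out for concreteness): for $n = 1$ the relations $t_0 + t_1 = 1$ and $dt_0 + dt_1 = 0$ eliminate $t_0$ and $dt_0$, giving
$\Omega^*_{\mathrm{poly}}(\Delta^1) \cong \mathbb{Q}[t] \otimes \Lambda(dt), \qquad t = t_1,$
the polynomial differential forms on the interval: a general element is $p(t) + q(t)\,dt$, with differential $p(t) \mapsto p'(t)\,dt$.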
References
Aldridge Bousfield and V. K. A. M. Gugenheim, §1 and §2 of: On PL De Rham Theory and Rational Homotopy Type, Memoirs of the A. M. S., vol. 179, 1976.
External links
https://ncatlab.org/nlab/show/differential+forms+on+simplices
https://mathoverflow.net/questions/220532/polynomial-differential-forms-on-bg
Differential algebra
Ring theory | Polynomial differential form | [
"Mathematics"
] | 160 | [
"Differential algebra",
"Algebra stubs",
"Ring theory",
"Fields of abstract algebra",
"Algebra"
] |
54,505,993 | https://en.wikipedia.org/wiki/Basta%20%28archaeological%20site%29 | Basta () is a pre-historic archaeological site and village in Ma'an Governorate, Jordan, southeast of Petra. It is named for the nearby contemporary village of Basta. Like the nearby site of Ba'ja, Basta was built in c. and belongs to the PPNB (Pre-Pottery Neolithic B) period. Basta is one of the earliest known places to have a settled population who grew crops and domesticated livestock.
Archeological site
The Basta settlement dates back to the early period of small human settlements and of the use of agricultural crops to sustain their inhabitants. It is also one of the archaeological sites that mark the earliest known animal domestication. Due to relics found there dating to before 9000 BC, the place is considered one of the first places in the world where large-scale human settlement began.
The houses in Basta were built on the familiar circular plan, a design that enabled individuals within the same house to live together. The inhabitants used limestone to build their homes, as can be seen from the height of the walls in some places and from the stone partitions, whose floors were made with wood from local trees and timber imported from other regions.
There are no cemeteries; instead, the people of the ancient village buried their dead under the floors of their homes. Archaeologists believe that the intent was to remind successive generations of the relationship of families and individuals to their homes, which later became a basic concept of place and home ownership for farmers and villagers. This socio-religious concept had its impact on the formation of the city, the first nucleus of civilization in the region.
Being a civilization that predates the invention of pottery, the household items found were made of stone and bone, from which grinding tools and mills were made, while flint was used to make arrowheads. Animal figurines, such as a sitting deer, the head of a bull or a cow, the head of a bear and the head of a ram, were also found; these may have had religious meaning.
At its height, Basta became a regional center of trade and "industrial" production of handmade tools. Thanks to domestication, trade and agriculture, Basta reached a population of at least 1,000 people, which made it one of the most populated settlements of its time, along with the nearby ancient settlement of Beidha.
There is no consensus about the decline of Basta, but researchers believe that the fast growth of the settlement, an earthquake, and the overconsumption of the area's natural resources were the factors that caused the decline and, consequently, the disappearance of the city around 5000 BC.
See also
Archaeological sites in Jordan
Domestication
Pre-Pottery Neolithic B
References
Bibliography
Further reading
Gebel, Basta II: The Architecture and Stratigraphy. Berlin 2006.
Nissen, Basta I: The Human Ecology. Berlin, 2004.
External links
Photos of Basta at the American Center of Research
Ain Ghazal
Neolithic settlements
7th-millennium BC establishments
Megasites
Pre-Pottery Neolithic B | Basta (archaeological site) | [
"Physics",
"Mathematics"
] | 633 | [
"Quantity",
"Megasites",
"Physical quantities",
"Size"
] |
53,207,054 | https://en.wikipedia.org/wiki/Carbohydrate%20Structure%20Database | Carbohydrate Structure Database (CSDB) is a free curated database and service platform in glycoinformatics, launched in 2005 by a group of Russian scientists from N.D. Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences. CSDB stores published structural, taxonomical, bibliographic and NMR-spectroscopic data on natural carbohydrates and carbohydrate-related molecules.
Overview
The main data stored in CSDB are carbohydrate structures of bacterial, fungal, and plant origin. Each structure is assigned to an organism and is provided with the link(s) to the corresponding scientific publication(s), in which it was described. Apart from structural data, CSDB also stores NMR spectra, information on methods used to decipher a particular structure, and some other data.
CSDB provides access to several carbohydrate-related research tools:
Simulation of 1D and 2D NMR spectra of carbohydrates (GODDESS: glycan-oriented database-driven empirical spectrum simulation).
Automated NMR-based structure elucidation (GRASS: generation, ranking and assignment of saccharide structures).
Statistical analysis of structural feature distribution in glycomes of living organisms
Generation of optimized atomic coordinates for an arbitrary saccharide and subdatabase of conformation maps.
Taxon clustering based on similarities of glycomes (carbohydrate-based tree of life)
Glycosyltransferase subdatabase (GT-explorer)
History and funding
Until 2015, the Bacterial Carbohydrate Structure Database (BCSDB) and the Plant&Fungal Carbohydrate Structure Database (PFCSDB) existed in parallel. In 2015, they were joined into the single Carbohydrate Structure Database (CSDB). The development and maintenance of CSDB have been funded by the International Science and Technology Center (2005-2007), the Russian Federation President grant program (2005-2006), the Russian Foundation for Basic Research (2005-2007, 2012-2014, 2015-2017, 2018-2020), the Deutsches Krebsforschungszentrum (short-term in 2006-2010), and the Russian Science Foundation (2018-2020).
Data sources and coverage
The main sources of CSDB data are:
Scientific publications indexed in the dedicated citation databases, including NCBI Pubmed and Thomson Reuters Web Of Science (approx. 18000 records).
CCSD (Carbbank ) database (approx. 3000 records).
The data are selected and added to CSDB manually by browsing original scientific publications. The data originating from other databases are subject to error-correction and approval procedures.
As of 2017, the coverage of bacteria and archaea is ca. 80% of carbohydrate structures published in the scientific literature. The time lag between the publication of relevant data and their deposition into CSDB is about 18 months. Plants are covered up to 1997, and fungi up to 2012.
CSDB does not cover data from the kingdom Animalia, except unicellular metazoa. There are a number of dedicated databases on animal carbohydrates, e.g. UniCarbKB or GLYCOSCIENCES.de.
CSDB is reported as one of the biggest projects in glycoinformatics. It is employed in structural studies of natural carbohydrates and in glyco-profiling.
The content of CSDB has been used as a data source in other glycoinformatics projects.
Deposited objects
Molecular structures of glycans, glycopolymers and glycoconjugates: primary structure, aglycon information, polymerization degree and class of molecule. Structural scope includes molecules composed of residues (monosaccharides, alditols, amino acids, fatty acids etc.) linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds, in which at least one residue is a monosaccharide or its derivative.
Bibliography associated with structures: imprint data, keywords, abstracts, IDs in bibliographic databases
Biological context of structures: associated taxon, strain, serogroup, host organism, disease information. The covered domains are: prokaryotes, plants, fungi and selected pathogenic unicellular metazoa. The database contains only glycans originating from these domains or obtained by chemical modification of such glycans.
Assigned NMR spectra and experimental conditions.
Glycosyltransferases associated with taxons: gene and enzyme identifiers, full structures, donor and acceptor substrates, methods used to prove enzymatic activity, trustworthiness level.
References to other databases
Other data collected from original publications
Conformation maps of disaccharides derived from molecular dynamics simulations.
Interrelation with other databases
CSDB is cross-linked to other glycomics databases, such as MonosaccharideDB, Glycosciences.DE, NCBI PubMed, NCBI Taxonomy, the NLM catalog, and the International Classification of Diseases 11. Besides its native notation, CSDB Linear, structures are presented in multiple carbohydrate notations (SNFG, SweetDB, GlycoCT, WURCS, GLYCAM, etc.). CSDB is exportable as a Resource Description Framework (RDF) feed according to the GlycoRDF ontology.
External links
CSDB web site
CSDB usage examples
CSDB technical documentation
CSDB Linear (structure encoding notation)
Carbohydrate databases registered in NAR collection
Carbohydrate databases in the recent decade (lection)
References
Biochemistry databases
Carbohydrates
Glycomics | Carbohydrate Structure Database | [
"Chemistry",
"Biology"
] | 1,210 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Biochemistry databases",
"Organic compounds",
"Glycomics",
"Carbohydrate chemistry",
"Biochemistry",
"Glycobiology"
] |
53,215,263 | https://en.wikipedia.org/wiki/The%20Boring%20Company | The Boring Company (TBC) is an American infrastructure, tunnel construction service, and equipment company founded by Elon Musk. TBC was founded as a subsidiary of SpaceX in 2017, and was spun off as a separate corporation in 2018. TBC has completed one tunneling project that is open to the public, as well as multiple test tunnels.
In 2018, TBC completed one tunnel for testing in Los Angeles County, California.
In 2021, TBC completed the Las Vegas Convention Center (LVCC) Loop, a three-station underground transportation system. As of April 2024, a segment to Resorts World Las Vegas is also open, and tunnels to Encore and Westgate resorts are being finalized. The system is planned to expand into a much larger network of tunnels in Las Vegas.
Many other TBC projects in cities across the United States have been announced, but were subsequently cancelled or stalled amid inactivity from the company.
History
Musk announced the idea of the Boring Company in December 2016, and it was officially registered as "TBC – The Boring Company" on January 11, 2017. Musk cited difficulty with Los Angeles traffic, and what he sees as limitations of its two-dimensional transportation network, as his early inspiration for the project. The Boring Company was formed as a SpaceX subsidiary. According to Musk, the company's goal is to enhance tunneling speed enough such that establishing a tunnel network is financially feasible.
In early 2018, the Boring Company was spun out from SpaceX and into a separate corporate entity. Somewhat less than 10% of equity was given to early employees, and over 90% to Elon Musk. Early employees came from a variety of different backgrounds, including those from SpaceX. The company began designing its own tunnel boring machines, and completed several tests in Hawthorne, California. The Hawthorne test tunnel opened to the public on December 18, 2018.
After raising US$113 million from Musk and flamethrower sales during 2018, the Boring Company sold $120 million in stock to venture capital firms in July 2019. By November 2019, Steve Davis had become company president after leading efforts for Musk since 2016. Davis was one of the earliest hires at SpaceX (in 2003) and has twin master's degrees in particle physics and aerospace engineering, as well as degrees in finance and mechanical engineering. In November 2020, TBC announced hiring for positions in Austin, Texas, and by December 2020 had leased two buildings in an industrial complex northeast of Austin, north of Texas Gigafactory.
On April 20, 2022, the company announced an additional $675 million Series C funding round, valuing the company at approximately $5.675 billion. The round was led by Vy Capital and Sequoia Capital, with participation from Valor Equity Partners, Founders Fund, 8VC, Craft Ventures, and DFJ Growth. In 2022, the company was cited by the Texas Commission on Environmental Quality for five violations of Texas environmental regulations.
Sometime before April 2023, the company moved its headquarters and engineering facilities to Bastrop, Texas, east of Texas Gigafactory.
Tunnels connecting different parts of the Las Vegas Convention Center are open, and a tunnel to Resorts World began operating in July 2023. Due to operational expenses, it is probable that the Boring Company is subsidizing the Loop to keep customer prices low. A day pass from Resorts World costs $5, while the LVCVA is paying the Boring Company an additional $4.5 million annually, which equates to $7.50 per ride. In February 2024, OSHA found several safety violations in the Boring Company, including 8 serious violations and allegations that workers have faced chemical burns from sludge while working in the tunnels. The company challenged the ruling; however, an article by Fortune revealed details about the construction of the Las Vegas tunnel, citing numerous employee accounts that described the working conditions as "almost unbearable."
In April 2024, the Boring Company was named among the "Dirty Dozen", the worst workplace safety offenders in the USA, by the National Council of Occupational Safety and Health.
Machines
The first boring machine used by TBC was Godot, a conventional tunnel boring machine (TBM) made by Lovat. TBC then designed its own line of machines called Prufrock. Prufrock 1 was unveiled in 2020 and was used mostly for testing. Engadget reported that Prufrock 2, which was unveiled in August 2022, could dig up to a mile per week. Prufrock 3 was planned to dig up to seven miles per day, although this was not achieved; instead, in 2024, P3 was able to tunnel 40–46 m per day.
In May 2024, Prufrock 4 was nearly complete, and in August it began testing; Prufrock 5 was in the design stage. Prufrock 4 is 308 feet long and produces up to 4.7 million pounds of thrust. The goal is to triple tunneling speed and improve cooling systems.
Process
TBC claims to be redesigning the entire tunnel boring process to reduce cost, accelerate tunnel completion, improve safety, and reduce site impacts. Innovations include:
Porpoising
Replace tunnel entry and exit excavations by having the TBM "porpoise" in and out of the ground. The TBM is trucked in and placed at an angle to the ground. (Prufrock 2 and 3 required an earthen ramp to set it at the correct angle before beginning to tunnel). It then bores into the ground. It changes angles as it continues boring, eventually returning to the surface and being loaded onto the truck.
In conventional systems, one large excavation is made at the tunnel entrance to allow the TBM to be lowered to the tunnel depth and assembled. A similar excavation is made at the tunnel exit to allow the TBM to be disassembled and lifted out.
Liner truck
TBC moves tunnel lining segments into the tunnel via an all-electric autonomous, wheeled liner truck powered by motors and batteries from Tesla. Conventional systems typically use a diesel rail system, which must be constructed along with the tunnel lining.
Continuous tunneling
TBC is working to install ring liners without stopping tunneling. Conventional systems stop every five feet or so to install another segment of the tunnel lining, and to extend the rail line. The goal is to increase tunneling time/day from 11 hours to 24 hours.
Tunnels
Hawthorne test tunnel
TBC built a high-speed test tunnel in 2017 on a route in Hawthorne, California, at the SpaceX headquarters and manufacturing facility. The tunnel roadway has an asphalt surface and a guide-way for autonomous vehicle operation, and supports car trips under both autonomous and human control.
Las Vegas Convention Center (LVCC)
Convention Center
In May 2019, the company won a $48.7 million project to shuttle visitors in a loop underneath the LVCC. Boring of the first tunnel began on November 15, 2019, and finished on February 14, 2020. In May 2020, the boring of the second tunnel was completed. The tunnel opened in October 2021. Standard Tesla vehicles with human drivers are used as shuttles. The service was described by Las Vegas Tourism as "an important step in the development of a game-changing transportation solution in Las Vegas."
Testing with volunteers in late May 2021 showed that the system could transport 4,400 passengers per hour. The system started transporting convention attendees on June 8, 2021. Designed to solve traffic congestion, the tunnel was intended to provide trips of less than two minutes, but has faced a number of traffic jams during busy events in 2021 and 2022.
Private tunnels to convention center
The tunnel to Resorts World Las Vegas opened in July 2022. As of April 2024, Las Vegas strip hotel Encore has a private tunnel underway to allow direct access from the hotel to LVCC.
Vegas Loop
In October 2021, Clark County Commissioners approved a 50-year franchise agreement for a 52-stop, mostly-underground "dual loop system...operating mainly in the Resort Corridor with stations at various resorts and connections to Allegiant Stadium, Brightline West Las Vegas Station, and the University of Nevada, Las Vegas." TBC planned to build five to ten stations during the first year, and then add approximately 16 stations per year thereafter. TBC would be responsible for funding the tunnel, while station costs would be funded by the resort properties and landowners.
In May 2023, TBC was given permission to expand the Vegas Loop underground transportation system to a 69-station tunnel network. It would include the existing LVCC Loop and extensions to casinos along the Strip, Harry Reid International Airport, Allegiant Stadium, downtown Las Vegas, and eventually to Los Angeles. TBC claims that once complete, the Vegas Loop would be able to transport more than 90,000 passengers per hour. In March 2024, the Las Vegas Convention and Visitors Authority board of directors voted to extend the existing tunnel, and vowed to address concerns that arose over Occupational Safety and Health Administration (OSHA) violations by TBC, which had resulted in a $100,000 fine.
Projects under discussion
Inquiries and discussions have been held with Boring Company for various projects.
In February 2021, Miami mayor Francis Suarez revealed that Musk had proposed to dig a two-mile tunnel under the Miami River for $30 million, within a six-month timescale, compared with $1 billion over four years estimated by the local transit authority. Much of the savings would be achieved by simplifying ventilation systems and allowing only electric vehicles. As of November 2023, the city is waiting for the Miami Dade Transportation Planning Organization to complete an analysis of the project.
In July 2021, Fort Lauderdale, Florida, accepted a proposal from the Boring Company for a tunnel between downtown and the beach, to be dubbed the "Las Olas Loop." In August 2021, the city was beginning final negotiations with TBC, and Mayor Dean Trantalis estimated the total cost of the round-trip tunnel would be between $90 and $100 million, including stations. As of December 2022, the city suspended efforts to continue the project.
In August 2021, a preliminary concept discussion was held with officials of Cameron County on the potential construction of a tunnel from South Padre Island to Boca Chica Beach in South Texas. If built, the tunnel would be required to pass beneath the Brownsville Ship Channel. It would allow SpaceX's Boca Chica facility to remain accessible if Highway 4, its sole access road, was closed.
Inactive and cancelled projects
United States
Washington, DC and Baltimore, Maryland – In 2017, Musk announced plans to build a Hyperloop connecting Washington, DC to Baltimore. This was supplanted in 2018 by a proposal to build a route following the Baltimore–Washington Parkway. The Maryland Transportation Authority officially approved the project. In 2019, a draft Environmental Assessment for the project was completed. As of 2021, the project was no longer listed on the company website.
Chicago – In 2018, the company won a competition to build a high-speed link from downtown Chicago to O'Hare Airport. As of 2021 the plan had been dropped.
Los Angeles – In 2018, TBC proposed to develop a test tunnel on a north–south alignment parallel to Interstate 405 and adjacent to Sepulveda Boulevard. Public opposition and lawsuits led the company to abandon the idea. Also in 2018, the company proposed to build a tunnel called the "Dugout Loop" from Vermont Avenue to Dodger Stadium. The project has since been removed from TBC's website.
San Jose, California – In 2019, a link between San Jose International Airport and Diridon station, was discussed as an alternative to an $800 million traditional rail link. Plans were later dropped.
San Bernardino County, California – In February 2021, the San Bernardino County Transportation Authority (SBCTA) in California approved beginning contract negotiations with TBC to build a tunnel connecting the Ontario airport with the Rancho Cucamonga Metrolink/Future Brightline West train station. However, TBC did not submit a proposal after a third party was involved to study the project impacts. As of 2022, the SBCTA has plans to build the tunnel system using "another company more familiar with the state's bureaucracy to do the Environmental Impact Report."
Australia
In January 2019, Musk responded to an Australian member of parliament regarding a tunnel through the Blue Mountains to the west of Sydney, suggesting costs of $750 million for a tunnel, plus $50 million per station.
Promotional merchandise
In 2018, the company began offering 20,000 "flamethrowers" for preordering. The "flamethrower" was a blow torch shaped to look like a gun, legal in all U.S. states except Maryland. All 20,000 "flamethrowers" were sold in just a few days. After customs officials said that they would not allow imports of any items called "flamethrowers," Musk announced that he would rename them "Not-A-Flamethrower," since the devices were in fact akin to roofing torches. Musk announced separate sales of a fire extinguisher, which he described as "overpriced... but this one comes with a cool sticker."
Not-a-Boring Competition student contests
In 2020, TBC released rules for a student tunnel-boring competition. The first competition was held in Las Vegas in September 2021. Officially named the Not-a-Boring Competition, the challenge was to quickly and accurately drill a tunnel of specified length and width.
Applications were received from 400 potential participants. A technical design review left 12 teams that were invited to Las Vegas to demonstrate their engineering solution in a September 2021 competition. The winning team was TUM Boring from Technical University of Munich who managed to excavate a bore while meeting the requisite safety requirements. TUM Boring used a conventional pipe jacking method to build the tunnel, but employed a novel revolving pipe storage design to minimize downtime between pipe segments.
A second competition was held in April 2023. New contest criteria again fixed the tunnel's length and diameter, this time also requiring a turn radius. Five teams from four countries—the United States, Germany, United Kingdom, and Switzerland—made the finals and journeyed to Texas to compete. TUM Boring again won. Swissloop Tunneling finished second overall and won the innovation award.
Criticism
Civil engineering experts and tunneling industry veterans questioned whether TBC could render tunnels faster and cheaper than competitors. Tunnelling Journal dismissed the company as a "vanity project."
Musk's planned tunnels were criticized for lacking such safety features as emergency exit corridors, ventilation systems, or fire suppression. In addition, the single lane tunnels left it impossible for vehicles to pass one another in the event of collision, mechanical failure, or other traffic obstruction, and instead would shut down the entire tunnel section. The low capacity of TBC tunnels make them inefficient when compared to existing public transit solutions, with only a fraction of the capacity of a conventional rapid-transit subway.
James Moore, director of transportation engineering at the University of Southern California, said that "there are cheaper ways to provide better transportation for large numbers of people," such as managing traffic with tolls. Public transit consultant Jarrett Walker called TBC "wildly hyped," and criticized how the company "dazzled city governments and investors with visions of an efficient subway where you never have to get out of your car, [but turned] out to be a paved road tunnel."
See also
Underground construction
References
External links
55 minutes, video of information session on the vision of the Boring Company and the project in Los Angeles, with Q&A.
Elon Musk
2016 establishments in California
American companies established in 2016
Construction and civil engineering companies
Construction equipment manufacturers of the United States
Hyperloop
Privately held companies based in Texas
Subterranean excavating equipment companies
Underground construction companies | The Boring Company | [
"Technology",
"Engineering"
] | 3,308 | [
"Transport systems",
"Civil engineering organizations",
"Construction and civil engineering companies",
"Vacuum systems",
"Hyperloop"
] |
76,004,408 | https://en.wikipedia.org/wiki/Aluminium%20arsenide%20antimonide | Aluminium arsenide antimonide, or AlAsSb (AlAs1-xSbx), is a ternary III-V semiconductor compound. It can be considered as an alloy between aluminium arsenide and aluminium antimonide. The alloy can contain any ratio between arsenic and antimony. AlAsSb refers generally to any composition of the alloy.
Preparation
AlAsSb films have been grown by molecular beam epitaxy and metalorganic chemical vapor deposition on gallium arsenide, gallium antimonide and indium arsenide substrates. It is typically incorporated into layered heterostructures with other III-V compounds.
Structural and electronic properties
The room temperature (T = 300 K) bandgap and lattice constant of AlAsSb alloys lie between those of pure AlAs (a = 0.566 nm, Eg = 2.16 eV) and AlSb (a = 0.614 nm, Eg = 1.62 eV). Over all compositions, the bandgap is indirect, as it is in pure AlAs and AlSb. AlAsSb shares the same zincblende crystal structure as AlAs and AlSb.
Applications
AlAsSb can be lattice-matched to GaSb, InAs and InP substrates, making it useful for heterostructures grown on these substrates.
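The lattice-matching compositions mentioned above can be estimated with Vegard's law, a linear interpolation between the endpoint lattice constants. The sketch below is a minimal Python illustration, assuming the AlAs and AlSb lattice constants quoted earlier and nominal room-temperature substrate values; it ignores bandgap bowing and strain, which a real device design would have to include.

```python
# Hedged sketch: Vegard's law for the AlAs(1-x)Sb(x) lattice constant,
# solved for the antimony fraction x that lattice-matches a substrate.
A_ALAS, A_ALSB = 0.566, 0.614  # lattice constants in nm, quoted above

def sb_fraction_for_substrate(a_substrate):
    # a(x) = (1 - x)*a_AlAs + x*a_AlSb, so x = (a_sub - a_AlAs)/(a_AlSb - a_AlAs)
    return (a_substrate - A_ALAS) / (A_ALSB - A_ALAS)

# Nominal substrate lattice constants (nm); treat these as assumptions.
for name, a_sub in [("GaSb", 0.6096), ("InAs", 0.6058), ("InP", 0.5869)]:
    print(f"{name}: x(Sb) ~ {sb_fraction_for_substrate(a_sub):.2f}")
```

The linear model reproduces the familiar qualitative result that near-pure-antimonide compositions match GaSb and InAs, while a roughly 44% antimony alloy matches InP.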
AlAsSb is occasionally employed as a wide-bandgap barrier layer in InAsSb-based infrared barrier photodetectors. In these devices, a thin layer of AlAsSb is grown between doped, smaller-bandgap InAsSb layers. These device geometries are frequently referred to as "nbn" or "nbp" photodetectors, indicating a sequence of an n-doped layer, followed by a barrier layer, followed by an n- or p-doped layer. A large discontinuity is introduced into the conduction band minimum by the AlAsSb barrier layer, which restricts the flow of electrons (but not holes) through the photodetector in a manner that reduces the photodetector's dark current and improves its noise characteristics.
References
Antimonides
Aluminium compounds
Arsenides
III-V compounds | Aluminium arsenide antimonide | [
"Chemistry"
] | 459 | [
"III-V compounds",
"Inorganic compounds"
] |
76,013,518 | https://en.wikipedia.org/wiki/George%20N.%20Phillips | George N. Phillips, Jr. is a biochemist, researcher, and academic. He is the Ralph and Dorothy Looney Professor of Biochemistry and Cell Biology at Rice University, where he also serves as Associate Dean for Research at the Wiess School of Natural Sciences and as a professor of chemistry. Additionally, he holds the title of professor emeritus of biochemistry at the University of Wisconsin-Madison.
Phillips' research is primarily centered on protein structure, protein dynamics, and computational biology, with a specific emphasis on understanding the correlation between the dynamics of proteins and their biological functions. He has authored book chapters, and is an editor for the Handbook of Proteins: Structure, Function and Methods Volume 2. He is the recipient of the Arnold O. Beckman Research Award, the American Heart Association's Established Investigator Award, and the Vilas Associate Award.
Phillips is an Elected Fellow of the Biophysical Society, the American Crystallographic Association, and the American Association for the Advancement of Science. He served as president and vice-president of the American Crystallographic Association from 2011 to 2013. He also holds the position of Editor-in-Chief for Structural Dynamics with the AIP Press and serves as an Associate Editor for Critical Reviews in Biochemistry and Molecular Biology.
Education
Phillips obtained his bachelor's degree in Biochemistry and Chemistry from Rice University in 1974 and followed it with a Ph.D. in biochemistry from the same institution in 1976. He also held a Robert A. Welch Predoctoral Fellowship from 1974 to 1976 and received a Postdoctoral Fellowship from the National Institutes of Health in 1977 as well as a Research Fellowship from the Medical Foundation in 1980.
Career
Phillips started his academic career as an assistant professor at the University of Illinois Urbana-Champaign, followed by his appointment as a professor of biochemistry at Rice University in 1987. In 1993, he assumed the position of Rice Scientia Lecturer, subsequently receiving the Robert A. Welch Lecturer appointment in 2001. He joined the University of Wisconsin-Madison in 2000 as a professor of Biochemistry and took on the role of professor emeritus in 2012. He has been serving as a professor of chemistry, as well as the Ralph and Dorothy Looney Professor of Biochemistry and Cell Biology at Rice University.
Research
Phillips has directed his research toward the field of computational biology, primarily exploring protein structure. In the Phillips Lab, his work has involved conducting research on the binding of oxygen and ligands to heme proteins, as well as the development of techniques for analyzing protein and nucleic acid dynamics through diffuse X-ray scattering analysis.
Protein structures
Phillips conducted various studies on protein structures and their functional implications. He examined the structural features of type 6 streptococcal M proteins, highlighting their predominantly alpha-helical coiled-coil, which demonstrates a unique conformation in bacterial surface projections. His research on the crystal structure of tropomyosin filaments proposed a model in which tropomyosin exhibited distinct conformations related to muscle contraction, suggesting a statistical mechanism for regulating muscle function.
In one of his highly cited studies, Phillips, alongside Fan Yang and Larry G. Moss, described the crystal structure of recombinant wild-type green fluorescent protein, unveiling a unique structure referred to as the "ß-can." This study also delved into the protective environment for the fluorophores within the cylinder and its applications in elucidating the effects of GFP mutants.
Phillips has utilized X-ray crystallography and various advanced spectroscopy techniques to provide details about the dynamic structural changes in proteins. He used X-ray crystallography to determine the structure of an unstable intermediate caused by photodissociation of CO from myoglobin, providing insights into the dynamics and structural alterations involved in this protein reaction. In addition, his study focused on capturing the structural evolution of the protein on a picosecond timescale, using time-resolved X-ray diffraction and mid-infrared spectroscopy on a myoglobin (Mb) mutant (the L29F mutant) and revealing conformational changes within the protein.
Heme proteins and ligand interactions
Phillips' research on heme proteins and ligand affinity has provided insights into engineering strategies for physiological functions. He explored the impact of His64 in sperm whale myoglobin on ligand affinity, shedding light on structural changes induced by ligand binding and mechanisms of ligand discrimination in myoglobin. By measuring CO binding properties in various mutants and comparing them to mutant myoglobins, he elucidated how mutations influence CO affinity. In his 1994 study, he delved into how heme proteins like myoglobin and hemoglobin differentiate between oxygen (O2) and carbon monoxide (CO) binding at the atomic level. He investigated the role of nitric oxide in physiological functions by examining the kinetics of NO-induced oxidation in myoglobins and hemoglobins revealing insights into protein engineering strategies aimed at mitigating hypertensive events.
Computational biology
Phillips' contributions to computational biology include advanced techniques for interpreting experimental data in complex chemical and biological systems. He focused on the interaction between troponin T (TnT) and tropomyosin, shedding light on the molecular mechanisms of muscle contraction. Additionally, he explored protein dynamics in crystals by using the Gaussian network model (GNM) and a crystallographic model to calculate Cα atom fluctuations in 113 proteins, emphasizing the improved results obtained by considering neighboring molecules in the crystal. In a book chapter discussing ongoing advancements in experimental methods for complex chemical and biological systems, he highlighted the growing need for creative approaches and delved into the exploration of normal mode analysis as a technique to address these challenges.
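For readers unfamiliar with the Gaussian network model mentioned above, the sketch below shows its core computation in Python: build the Kirchhoff (connectivity) matrix from Cα contacts, then read predicted mean-square fluctuations off the diagonal of its pseudo-inverse. The toy coordinates and the 7 Å cutoff are illustrative assumptions, not values taken from Phillips' study.

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0):
    # Kirchhoff matrix: -1 for residue pairs within the cutoff, with
    # diagonal entries equal to each residue's contact count.
    n = len(coords)
    gamma = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                gamma[i, j] = gamma[j, i] = -1.0
    np.fill_diagonal(gamma, -gamma.sum(axis=1))
    # Mean-square fluctuations are proportional to the diagonal of the
    # pseudo-inverse (pinv discards the trivial zero mode automatically).
    return np.diag(np.linalg.pinv(gamma))

# Toy chain of ten pseudo-C-alpha positions spaced 3.8 angstroms apart;
# the chain ends come out more mobile than the middle, as expected.
coords = np.array([[3.8 * k, 0.0, 0.0] for k in range(10)])
print(gnm_fluctuations(coords).round(2))
```

In this picture, accounting for neighboring molecules as in the study above amounts to adding the crystal-contact pairs to the same connectivity matrix.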
Awards and honors
1982 – Arnold O. Beckman Research Award, University of Illinois
1983 – Established Investigator Award, American Heart Association
2003 – Vilas Associate Award, UW-Madison
Bibliography
Books
Handbook of Proteins: Structure, Function and Methods Volume 2 (2008) ISBN 978-0470060988
Selected articles
Quillin, M. L., Arduini, R. M., Olson, J. S., & Phillips Jr, G. N. (1993). High-resolution crystal structures of distal histidine mutants of sperm whale myoglobin. Journal of molecular biology, 234(1), 140–155.
Springer, B. A., Sligar, S. G., Olson, J. S., & Phillips, G. N. J. (1994). Mechanisms of ligand recognition in myoglobin. Chemical Reviews, 94(3), 699–714.
Eich, R. F., Li, T., Lemon, D. D., Doherty, D. H., Curry, S. R., Aitken, J. F., ... & Olson, J. S. (1996). Mechanism of NO-induced oxidation of myoglobin and hemoglobin. Biochemistry, 35(22), 6976–6983.
Yang, F., Moss, L. G., & Phillips Jr, G. N. (1996). The molecular structure of green fluorescent protein. Nature biotechnology, 14(10), 1246–1251.
Schotte, F., Lim, M., Jackson, T. A., Smirnov, A. V., Soman, J., Olson, J. S., ... & Anfinrud, P. A. (2003). Watching a protein as it functions with 150-ps time-resolved x-ray crystallography. Science, 300(5627), 1944–1947.
References
Biochemists
University of Wisconsin–Madison faculty
Rice University faculty
Rice University alumni
Living people
Year of birth missing (living people) | George N. Phillips | [
"Chemistry",
"Biology"
] | 1,600 | [
"Biochemistry",
"Biochemists"
] |
76,020,377 | https://en.wikipedia.org/wiki/Quantum%20Chemistry%20Program%20Exchange | The Quantum Chemistry Program Exchange (QCPE) was an organization located at Indiana University Bloomington from 1963 to 2007 that was devoted to the distribution of computational chemistry software before electronic file transfer on the internet became a widely available method of software distribution. The QCPE was originally founded by Prof. Harrison Shull and was managed by Richard Counts for most of its existence. Financial support for the QCPE was originally provided by the Air Force Office of Scientific Research until 1969, and funding continued under an interim grant from the National Science Foundation in 1971 until it became financially self-sustaining in 1973.
The QCPE maintained a catalog of software that expanded through regular contributions from chemistry software developers. New software contributions were announced through a quarterly QCPE Newsletter, which was eventually formalized into a QCPE Bulletin in 1981, allowing software citations to the numbered software entries in the Bulletin that announced their release. QCPE members paid for subscriptions to the Newsletter/Bulletin and additionally paid a processing and delivery fee to receive software from the QCPE catalog. The software distribution options expanded alongside technological development, starting from punched cards and magnetic tape drives delivered by mail, before adopting floppy disks and CD-ROMs, and eventually electronic delivery by FTP. The QCPE grew rapidly in its early days, with about 400 members and a catalog of nearly 100 programs after its first 3 years of operation. In the 1980s and early 1990s, the QCPE also organized annual summer workshops to train scientists in the use of its more popular software. At its peak in the mid-1980s, the QCPE had over 2000 members, over 400 programs available, and an annual income near $400,000.
The most visible legacy of the QCPE is the thousands of software citations to the QCPE Bulletin in scientific publications over four decades, with a peak of over 1000 per year in the early 1990s. The most popular software in the early days of the QCPE was GAUSSIAN (QCPE #236, #368, #406) before it was removed from the QCPE catalog to become commercial software, and the most popular software in its later years was MOPAC (QCPE #455, #688, #689).
Other popular software distributed by the QCPE included POLYATOM (QCPE #47, #199), CNDO/2 (QCPE #91), AMPAC (QCPE #506), CRYSTAL (QCPE #577), Molden (QCPE #619), and MM2 / MM3 (QCPE #690-#698).
References
Software distribution
Indiana University Bloomington
Organizations established in 1963
Organizations disestablished in 2007
Computational chemistry software
1963 establishments in Indiana
2007 disestablishments in Indiana | Quantum Chemistry Program Exchange | [
"Chemistry"
] | 561 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
64,400,151 | https://en.wikipedia.org/wiki/Doyle%20spiral | In the mathematics of circle packing, a Doyle spiral is a pattern of non-crossing circles in the plane in which each circle is surrounded by a ring of six tangent circles. These patterns contain spiral arms formed by circles linked through opposite points of tangency, with their centers on logarithmic spirals of three different shapes.
Doyle spirals are named after mathematician Peter G. Doyle, who made an important contribution to their mathematical construction in the late 1980s or early 1990s. However, their study in phyllotaxis (the mathematics of plant growth) dates back to the early 20th century.
Definition
A Doyle spiral is defined to be a certain type of circle packing, consisting of infinitely many circles in the plane, with no two circles having overlapping interiors. In a Doyle spiral, each circle is enclosed by a ring of six other circles. The six surrounding circles are tangent to the central circle and to their two neighbors in the ring.
Properties
Radii
As Doyle observed, the only way to pack circles with the combinatorial structure of a Doyle spiral is to use circles whose radii are also highly structured. Six circles can be packed around a circle of radius r if and only if there exist three positive real numbers a, b, and c = b/a so that the surrounding circles have radii (in cyclic order) ar, br, cr, r/a, r/b, and r/c.
Only certain triples of numbers come from Doyle spirals; others correspond to systems of circles that eventually overlap each other.
Arms
In a Doyle spiral, one can group the circles into connecting chains of circles through opposite points of tangency. These have been called arms, following the same terminology used for spiral galaxies. Within each arm, the circles have radii in a doubly infinite geometric sequence
..., r/a^2, r/a, r, ra, ra^2, ..., or a sequence of the same type with common multiplier b or b/a. In most Doyle spirals, the centers of the circles on a single arm lie on a logarithmic spiral, and all of the logarithmic spirals obtained in this way meet at a single central point. Some Doyle spirals instead have concentric circular arms (as in the stained glass window shown) or straight arms.
Counting the arms
The precise shape of any Doyle spiral can be parameterized by three natural numbers, counting the number of arms of each of its three shapes. When one shape of arm occurs infinitely often, its count is defined as 0, rather than infinity. The smallest arm count equals the difference of the other two arm counts, so any Doyle spiral can be described as being of type (p,q), where p and q are the two largest counts, in the sorted order p ≤ q.
Every pair (p,q) with 0 < p ≤ q determines a Doyle spiral, with its third and smallest arm count equal to q − p. The shape of this spiral is determined uniquely by these counts, up to similarity. For a spiral of type (p,q), the radius multipliers are the absolute values of two complex numbers satisfying a coherence equation together with a pair of tangency equations, which together fix the multipliers once p and q are chosen.
This implies that the radius multipliers are algebraic numbers. The self-similarities of a spiral centered on the origin form a discrete group generated by these two complex multipliers. A circle whose center lies at distance d from the central point of the spiral has a radius proportional to d.
Exact values of these parameters are known for a few simple cases. In other cases, they can be accurately approximated by a numerical search, and the results of this search can be used to determine numerical values for the sizes and positions of all of the circles.
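As a concrete illustration of such a numerical search, the following Python sketch evaluates the ring-closure condition for candidate multipliers: the six central angles subtended by consecutive surrounding circles must sum to exactly 2π. This is a minimal sketch assuming the cyclic order of radii given above; ring closure is only one necessary condition, and a full construction would also impose closure of the spiral arms around the central point.

```python
import math

def pair_angle(r, s, t):
    # Central angle between the centers of two circles of radii s and t
    # that are tangent to a central circle of radius r and to each other
    # (law of cosines on the triangle formed by the three centers).
    num = (r + s) ** 2 + (r + t) ** 2 - (s + t) ** 2
    den = 2 * (r + s) * (r + t)
    return math.acos(num / den)

def ring_defect(a, b, r=1.0):
    # Radii of the six surrounding circles in cyclic order; a valid
    # Doyle ring closes up, so the defect should be zero.
    ring = [a * r, b * r, b * r / a, r / a, r / b, a * r / b]
    total = sum(pair_angle(r, ring[i], ring[(i + 1) % 6]) for i in range(6))
    return total - 2 * math.pi

# The regular hexagonal packing (type (0,0)) has a = b = 1 and closes exactly.
print(abs(ring_defect(1.0, 1.0)) < 1e-12)  # True
```

A search over pairs (a, b) that drives this defect (together with the corresponding arm-closure defects) to zero recovers the multipliers of a given spiral type.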
Symmetry
Doyle spirals have symmetries that combine scaling and rotation around the central point (or translation and rotation, in the case of the regular hexagonal packing of the plane by unit circles), taking any circle of the packing to any other circle. Applying a Möbius transformation to a Doyle spiral preserves the shape and tangencies of its circles. Therefore, a Möbius transformation can produce additional patterns of non-crossing tangent circles, each tangent to six others. These patterns typically have a double-spiral pattern in which the connected sequences of circles spiral out of one center point (the image of the center of the Doyle spiral) and into another point (the image of the point at infinity). However, these do not meet all of the requirements of Doyle spirals: some circles in this pattern will not be surrounded by their six neighboring circles.
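A short Python sketch of the circle-to-circle behavior underlying this: under the inversion z → 1/z, a basic Möbius transformation, any circle not passing through the origin maps to another circle, and tangencies are preserved. The two circles below are arbitrary examples, not circles from a particular Doyle spiral.

```python
def invert_circle(p, r):
    # Image of the circle |z - p| = r under z -> 1/z, valid when the
    # circle does not pass through the origin (|p| != r).
    d = abs(p) ** 2 - r ** 2
    return p.conjugate() / d, abs(r / d)

# Two externally tangent circles on the real axis: centers 2 and 5,
# radii 1 and 2 (center distance equals the sum of the radii).
c1, r1 = invert_circle(2 + 0j, 1.0)
c2, r2 = invert_circle(5 + 0j, 2.0)

# The images are again tangent: center distance equals radius sum.
print(abs(abs(c1 - c2) - (r1 + r2)) < 1e-12)  # True
```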
Examples and special cases
The most general case of a Doyle spiral has three distinct radius multipliers, all different from one another, and three distinct arm counts, all nonzero. An example is Coxeter's loxodromic sequence of tangent circles, a Doyle spiral of type (2,3), with arm counts 1, 2, and 3, and with multipliers that can be expressed in terms of the golden ratio φ.
Within the single spiral arm of tightest curvature, the circles in Coxeter's loxodromic sequence form a sequence whose radii are powers of the constant φ + √φ ≈ 2.89. Every four consecutive circles in this sequence are mutually tangent.
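The ratio φ + √φ can be checked numerically using Descartes' circle theorem, since every four consecutive circles in the sequence are mutually tangent and their curvatures form a geometric progression. The Python sketch below finds the ratio by bisection; the bracketing interval [2, 3] is an assumption chosen by inspecting the sign of the defect at its endpoints.

```python
import math

def descartes_defect(t):
    # For curvatures proportional to 1, t, t^2, t^3, Descartes' theorem
    # (k1 + k2 + k3 + k4)^2 = 2*(k1^2 + k2^2 + k3^2 + k4^2) reduces to a
    # polynomial condition on t alone; it is symmetric under t -> 1/t,
    # so the radius ratio and the curvature ratio satisfy the same equation.
    s = 1 + t + t ** 2 + t ** 3
    q = 1 + t ** 2 + t ** 4 + t ** 6
    return s * s - 2 * q

lo, hi = 2.0, 3.0  # defect is positive at 2 and negative at 3
for _ in range(60):
    mid = (lo + hi) / 2
    if descartes_defect(lo) * descartes_defect(mid) <= 0:
        hi = mid
    else:
        lo = mid

phi = (1 + math.sqrt(5)) / 2
print(round(lo, 6), round(phi + math.sqrt(phi), 6))  # both ~2.890054
```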
When exactly one of the three arm counts is zero, the arms that it counts are circular, with all of the circles in each such arm sharing the same radius. The number of circles in each of these circular arms equals the number of arms of each of the other two types. All the circular arms are concentric, centered where the spiral arms converge; such a spiral has type (q,q), and its multipliers are determined by the single parameter q. In the photo of a stained glass church window, the two rings of nine circles belong to a Doyle spiral of this form, of type (9,9).
Straight arms are produced for arm counts of the form (p,2p). In this case, the two spiraling arm types have the same radius multiplier, and are mirror reflections of each other. There are twice as many straight arms as there are spirals of either type. Each straight arm is formed by circles with centers that lie on a ray through the central point. Because the number of straight arms must be even, the straight arms can be grouped into opposite pairs, with the two rays from each pair meeting to form a line. The multipliers of such a spiral are likewise determined by the single parameter p. The Doyle spiral of type (8,16) from the Popular Science illustration is an example, with eight arms spiraling the same way as the shaded arm, another eight reflected arms, and sixteen rays.
A final special case is the Doyle spiral of type (0,0), a regular hexagonal packing of the plane by unit circles. Its radius multipliers are all one and its arms form parallel families of lines of three different slopes.
Applications
The Doyle spirals form a discrete analogue of the exponential function, as part of the more general use of circle packings as discrete analogues of conformal maps. Indeed, patterns closely resembling Doyle spirals (but made of tangent shapes that are not circles) can be obtained by applying the exponential map to a scaled copy of the regular hexagonal circle packing. The three ratios of radii between adjacent circles, fixed throughout the spiral, can be seen as analogous to the characterization of the exponential map as having fixed ratios between values at arguments separated by fixed steps. Doyle spirals have been used to study Kleinian groups, discrete groups of symmetries of hyperbolic space, by embedding these spirals onto the sphere at infinity of hyperbolic space and lifting the symmetries of each spiral to symmetries of the space itself.
Spirals of tangent circles, often with Fibonacci numbers of arms, have been used to model phyllotaxis, the spiral growth patterns characteristic of certain plant species, beginning with the work of Gerrit van Iterson in 1907. In this context, an arm of the Doyle spiral is called a parastichy and the arm counts of the Doyle spiral are called parastichy numbers. When the two parastichy numbers p and q are Fibonacci numbers, and either consecutive or separated by only one Fibonacci number, then the third parastichy number will also be a Fibonacci number. With this application in mind, Arnold Emch in 1910 calculated the positions of circles in Doyle spirals of this type, noting in his work the connections between these spirals, logarithmic spirals, and the exponential function. For modeling plant growth in this way, spiral packings of tangent circles on surfaces other than the plane, including cylinders and cones, may also be used.
Spiral packings of circles have also been studied as a decorative motif in architecture.
Related patterns
Tangent circles can form spiral patterns whose local structure resembles a square grid rather than a hexagonal grid, which can be continuously transformed into Doyle spirals. The space of locally-square spiral packings is infinite-dimensional, unlike Doyle spirals, which can be determined by a constant number of parameters. It is also possible to describe spiraling systems of overlapping circles that cover the plane, rather than non-crossing circles that pack the plane, with each point of the plane covered by at most two circles except for points where three circles meet, and with each circle surrounded by six others. These have many properties in common with the Doyle spirals.
The Doyle spiral should not be confused with a different spiral pattern of circles, studied for certain forms of plant growth such as the seed heads of sunflowers. In this pattern, the circles are of unit size rather than growing logarithmically, and are not tangent. Instead of having centers on a logarithmic spiral, they are placed on Fermat's spiral, offset by the golden angle 2π/φ^2 from each other relative to the center of the spiral, where φ is the golden ratio.
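This placement rule, often called Vogel's model, is simple enough to state directly in code. The Python sketch below is a minimal illustration; the scale factor is an arbitrary assumption.

```python
import math

# Golden angle: the turn 2*pi / phi^2, about 137.5 degrees;
# pi*(3 - sqrt(5)) is the same quantity in closed form.
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))

def sunflower_points(n, scale=1.0):
    # Seed k sits on Fermat's spiral (radius ~ sqrt(k)), rotated by the
    # golden angle relative to seed k - 1.
    pts = []
    for k in range(n):
        r = scale * math.sqrt(k)
        theta = k * GOLDEN_ANGLE
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

print(sunflower_points(5))
```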
Notes
References
Further reading
External links
Doyle spiral explorer, Robin Houston
Circle packing
Spirals
Plant morphology
Eponyms in geometry | Doyle spiral | [
"Mathematics",
"Biology"
] | 1,845 | [
"Geometry problems",
"Eponyms in geometry",
"Packing problems",
"Plants",
"Plant morphology",
"Circle packing",
"Geometry",
"Mathematical problems"
] |
64,403,830 | https://en.wikipedia.org/wiki/Wing%20engine | A wing engine is a subsidiary engine installed in a motor boat alongside the main engine. The primary purpose of a wing engine is to provide redundancy and safety in the event of failure of the main engine; a secondary benefit assists manoeuvering in port or in a marina.
Wing engine installation
Whereas the main engine will be larger and invariably mounted on the vessel's centreline, the wing engine will be considerably smaller and positioned to one side. A wing engine will typically be either:
a small marine engine that may also serve as a generator when running; or
a diesel generator that may power (typically via a 12v or 24v battery pack) an electric motor that drives its own propeller shaft and propeller.
In either case, the wing engine's propeller will be off-centre. This can give rise to steering difficulties, but it can be used to advantage in port together with the main engine, as follows: if the main engine has a right-hand propeller, the "prop walk" when in reverse will tend to move the stern to port. In these circumstances, the wing motor should be arranged to have a propeller to the left (port side) of the centreline, so as to balance the vessel when going astern, or to produce (with the main engine in neutral) a vector thrust to starboard.
Canal boats need very little power in canals, as there is virtually no current (and there are often speed limits). In such canals the wing engine may be used to propel the boat; but when the vessel puts to sea or navigates a fast flowing river, the power of the main engine would be needed. Diesel engines suffer harm if not run under load, so a small wing engine under load should be more efficient in a canal than a main engine operating barely above tick-over.
Examples of wing engine installations
a 10m Vlet used on canals by author Marian Martin had a 120bhp DAF main engine, and an 18bhp Sabb wing engine. Ms Martin was so impressed that in her book she recommends wing engines, albeit with some reservations.
a 27m schooner-rigged Dutch sailing barge, Hosanna, had a large Cummins main engine and a smaller Perkins wing engine. When the Cummins failed, the owners, Bill & Laurel Cooper motored through the French canals to re-engine the boat at Great Yarmouth. So exasperated were they by the tricky steering using just a wing engine for long stretches, that instead of replacing the Cummins with a similar large main engine, they installed two more Perkins engines and propellers. Hosanna now had three similar Perkins engines, one in the centre, and one on either side. In calm canals, just the central engine alone would be used; the other two would be engaged at sea or in fast rivers, or when manoeuvering.
References
Marine engines | Wing engine | [
"Technology"
] | 578 | [
"Marine engines",
"Engines"
] |
57,806,752 | https://en.wikipedia.org/wiki/Kate%20Marvel | Kate Marvel is a climate scientist and science writer based in New York City. She is a senior scientist at Project Drawdown and was formerly an associate research scientist at NASA Goddard Institute for Space Studies and Columbia Engineering's Department of Applied Physics and Mathematics.
Education and early career
Marvel attended the University of California at Berkeley, where she received her Bachelor of Arts degree in physics and astronomy in 2003. She received her PhD in theoretical physics from the University of Cambridge in 2008 as a Gates Scholar and member of Trinity College. Following her PhD, she shifted her focus to climate science and energy as a Postdoctoral Science Fellow at the Center for International Security and Cooperation at Stanford University and at the Carnegie Institution for Science in the Department of Global Ecology. She continued that trajectory as a postdoctoral fellow at the Lawrence Livermore National Laboratory before joining the research faculty at NASA Goddard Institute for Space Studies and Columbia University. Marvel left the Goddard Institute at the end of 2022.
Research
Marvel's current research centers on climate modeling to better predict how much the Earth's temperature will rise in the future. This work led Marvel to investigate the effects of cloud cover on modeling rising temperatures, which has proved an important variable in climate models. Clouds can play a double-edged role in mitigating or amplifying the rate of global warming. On one hand, clouds reflect solar energy back into space, serving to cool the planet; on the other, clouds can trap the planet's heat and radiate back onto Earth's surface. While computer models have difficulty simulating the changing patterns of cloud cover, improved satellite data can begin to fill in the gaps.
Marvel has also documented shifting patterns of soil moisture from samples taken around the world, combining them with computer models and archives of tree rings, to model the effects of greenhouse gas production on patterns of global drought. In this study, which was published in the journal Nature in May 2019, Marvel and her colleagues were able to distinguish the contribution of humans from the effects of natural variation of weather and climate. They found three distinct phases of drought in the data: a clear human fingerprint on levels of drought in the first half of the 20th century, followed by a decrease in drought from 1950 to 1975, followed by a final rise in levels of drought in the 1980s and beyond. The mid-century decrease in drought correlated with the rise in aerosol emissions, which contribute to rising levels of smog that may have reflected and blocked sunlight from reaching the Earth, altering patterns of warming. The subsequent rise of drought correlated with the decrease in global air pollution, which occurred in the 1970s and 1980s due to the passage of legislation like the United States Clean Air Act, suggesting that aerosol pollution may have had a moderating effect on drought.
Marvel has also studied practical limitations in renewable energy as a Postdoctoral Scholar at the Carnegie Institution for Science. At the 2017 TED conference, following computer theorist Danny Hillis's talk proposing geoengineering strategies to mitigate global warming, Marvel was brought on stage to share why she believes geoengineering may cause more harm than good in the long run.
Public engagement
Marvel is a science communicator whose efforts center on communicating about the impacts of climate change. She has been a guest on popular science shows like StarTalk and BRIC Arts Media TV, speaking about her expertise in climate change and the need to act on climate. She has also spoken about her path to becoming a scientist for the science-inspired storytelling series, The Story Collider. Marvel has also appeared on the TED Main Stage, giving a talk at the 2017 TED conference about the double-edged effect clouds can have on global warming.
Marvel's writing has been featured in On Being and Nautilus. She was a regular contributor to Scientific American with her column "Hot Planet", which launched in June 2018 and apparently ended in November 2020; the column focused on climate change, covering the science behind global warming, policies, and human efforts in advocacy. Marvel contributed to All We Can Save, a collection of essays authored by women involved in the climate movement.
References
External links
Kate Marvel on Twitter
Year of birth missing (living people)
Living people
21st-century American women scientists
American climatologists
Women climatologists
American science writers
Alumni of the University of Cambridge
University of California, Berkeley alumni
NASA people
Climate communication
21st-century American scientists
Climate change mitigation researchers
American women science writers
21st-century American non-fiction writers
21st-century American women writers
American women non-fiction writers | Kate Marvel | [
"Engineering"
] | 912 | [
"Geoengineering",
"Climate change mitigation researchers"
] |
57,814,827 | https://en.wikipedia.org/wiki/STS%20Lord%20Nelson | STS Lord Nelson was a sail training ship operated by the Jubilee Sailing Trust. She was designed by Colin Mudie and launched on 17 October 1986.
The ship was built by the Jubilee Sailing Trust (JST) and, along with the SV Tenacious, was one of only two tall ships in the world that were wheelchair accessible throughout. The JST is an international UN-accredited charity offering sailing adventures to people of all abilities and backgrounds. She was decommissioned in October 2019.
Design and construction
STS Lord Nelson was commissioned by the Jubilee Sailing Trust, and the build was started in the summer of 1984 at the yard of James W Cook, Wivenhoe, Essex. She was designed by Colin Mudie, and is his design no 342. The ship was launched almost a year after the formal keel laying. After J W Cook went into voluntary liquidation, Lord Nelson was moved to Vosper Thornycroft's yard in Woolston, Southampton. As a result of an industrial dispute at Vospers, Lord Nelson had to move again, this time to Coles Yard in Cowes where the remainder of the work was carried out. She was finally sailed in completed form from Southampton on 17 October 1986.
In service
STS Lord Nelson completed 16,000 accessible voyages during her 33 years at sea with the Jubilee Sailing Trust.
She finished her final voyage on 10 October 2019 to Southampton, and was subsequently moved to Bristol docks for decommissioning.
Disposal
On 26 April 2021, the Jubilee Sailing Trust announced that it would sell the vessel, by then in a state of significant disrepair. No sale of Lord Nelson was concluded, and in August 2022 the ship's owning company, Jubilee Sailing Trust Ltd, was put into administration. With still no sale, the administrators put the ship up for auction in June 2023.
References
Further reading
Report on the investigation of Lord Nelson contact with Tower Bridge London River Thames 15 May 2004 assets.publishing.service.gov.uk, Retrieved 2018-12-07
Harry Turner: World's first round-the-world ship crewed by disabled docks in London yachtsandyachting.com, 24 Sep 2014, Retrieved 2018-12-07
1986 ships
Accessible transportation
Disabled boating
Tall ships of the United Kingdom
Individual sailing vessels
Barques
Sail training ships | STS Lord Nelson | [
"Physics"
] | 460 | [
"Physical systems",
"Transport",
"Accessible transportation"
] |
44,464,978 | https://en.wikipedia.org/wiki/Regeneration%20in%20humans | Regeneration in humans is the regrowth of lost tissues or organs in response to injury. This is in contrast to wound healing, or partial regeneration, which involves closing up the injury site with some gradation of scar tissue. Some tissues such as skin, the vas deferens, and large organs including the liver can regrow quite readily, while others have been thought to have little or no capacity for regeneration following an injury.
Numerous tissues and organs have been induced to regenerate. Bladders have been 3D-printed in the lab since 1999. Skin tissue can be regenerated in vivo or in vitro. Other organs and body parts that have been induced to regenerate include: the penis, fat, the vagina, brain tissue, the thymus, and a scaled-down human heart. One goal of scientists is to induce full regeneration in more human organs.
There are various techniques that can induce regeneration, and by 2016 tissue regeneration had been induced and operationalized in practice. There are four main techniques: regeneration by instrument, regeneration by materials, regeneration by drugs, and regeneration by in vitro 3D printing.
History of human tissue regeneration
In humans with non-injured tissues, the tissue naturally regenerates over time; by default, new available cells replace expended cells. For example, the body regenerates a full bone within ten years, while non-injured skin tissue is regenerated within two weeks. With injured tissue, the body usually has a different response. This emergency response usually involves building a degree of scar tissue over a time period longer than a regenerative response, as has been shown clinically and by observation. There are many more historical and nuanced understandings about regeneration processes. In full thickness wounds under 2 mm across, regeneration generally occurs before scarring. In 2008, it was found that full thickness wounds over 3 mm needed a material inserted into the wound in order to induce full tissue regeneration.
Whereas third-degree burns heal slowly with scarring, by 2016 it was known that full thickness holes made by fractional photothermolysis heal without scarring. Up to 40% of the full thickness skin in an area can be removed in a fractional pattern, via coring of tissue, without scarring.
Some human organs and tissues regenerate rather than simply scar as a result of injury. These include the liver, fingertips, and endometrium. More information is now known regarding the passive replacement of tissues in the human body, as well as the mechanics of stem cells. Advances in research have enabled the induced regeneration of many more tissues and organs than previously thought possible. The aim is to use these techniques in the near future to regenerate any tissue type in the human body.
Regeneration techniques
By 2016, regeneration had been operationalized and induced by four main techniques: regeneration by instrument, regeneration by materials, regeneration by 3D printing, and regeneration by drugs. By 2016, regeneration by instrument, by materials, and by drugs had generally been operationalized in vivo (inside living tissue), while regeneration by 3D printing had generally been operationalized in vitro (inside the lab), in order to build and prepare tissue for transplantation.
By instrument
A cut by a knife or a scalpel generally scars, though a piercing by a needle does not. In 1976, a 3 by 3 cm scar on a non-diabetic was regenerated by insulin injections, and the researchers, citing earlier research, argued that the insulin was regenerating the tissue. The anecdotal evidence also highlighted that the syringe was one of two variables that helped bring about regeneration of the arm scar. The syringe was injected into the four quadrants three times a day for eighty-two days. After the many consecutive injections over eighty-two days, the scar was resolved, and it was noted that no scar was observable to the human eye. After seven months the area was checked again, and it was once again noted that no scar could be seen.
In 1997, it was proven that wounds created with an instrument that are under 2mm can heal scar free, but larger wounds that are larger than 2mm healed with a scar.
In 2013, it was proven in pig tissue that full thickness micro columns of tissue, less than 0.5 mm in diameter, could be removed and that the replacement tissue was regenerative tissue, not scar. The tissue was removed in a fractional pattern, with over 40% of a square area removed, and all of the fractional full thickness holes in the square area healed without scarring. In 2016 this fractional pattern technique was also proven in human tissue. By 2021, scar-free healing with such instrument-based techniques was attracting wider attention.
With materials
Generally, humans can regenerate injured tissues in vivo over limited distances of up to 2 mm. The further a wound extends beyond 2 mm, the more the regeneration needs inducement. By 2009, via the use of materials, induced regeneration could be achieved across a tissue rupture of up to 1 cm. Bridging the wound, the material allowed cells to cross the wound gap; the material then degraded. This technology was first used inside a broken urethra in 1996. In 2012, using materials, a full urethra was restored in vivo.
Macrophage polarization is a strategy for skin regeneration. Macrophages are differentiated from circulating monocytes. Macrophages display a range of phenotypes varying from the M1, pro-inflammatory type to the M2, pro-regenerative type. Material hydrogels polarise macrophages into the key M2 regenerative phenotype in vitro. In 2017, hydrogels provided full regeneration of skin, with hair follicles, after partial excision of scars in pigs and after full thickness wound incisions in pigs.
By 3D printing
In 2009, the regeneration of hollow organs and tissues with a long diffusion distance was still challenging. Therefore, to regenerate such organs and tissues, the tissue had to be regenerated inside the lab via the use of a 3D printer.
Various tissues that have been regenerated by in vitro 3D printing include:
The first organ ever induced and made in the lab was the bladder, which was created in 1999.
By 2014, various tissues had been regenerated by the 3D printer, including muscle, the vagina, the penis and the thymus.
In 2014, a conceptual human lung was first bioengineered in the lab. In 2015, the lab robustly tested its technique and regenerated a pig lung. The pig lung was then successfully transplanted into a pig without the use of immunosuppressive drugs.
In 2015, researchers developed a proof of principle biolimb inside a laboratory; they also estimated that it would be at least a decade for any testing of limbs in humans. The limb demonstrated fully functioning skin, muscles, blood vessels and bones.
In April 2019, researchers 3D printed a human heart. The prototype heart was made by human stem cells but only to the size of a rabbit's heart. In 2019, the researchers hoped to one day place a scaled up version of the heart inside humans.
Gradations of complexity
With printed tissues, by 2012, four standard levels of regenerative complexity were acknowledged in various academic institutions:
Level one, flat tissue like skin was the simplest to recreate;
Level two was tubular structures such as blood vessels;
Level three was hollow non-tubular structures;
Level four was solid organs, which were by far the most complex to recreate due to the vascularity.
In 2012, it was possible, inside the lab, within 60 days, to grow tissue from the size of half a postage stamp to the size of a football field. Most cell types could be grown and expanded outside of the body, with the exception of liver, nerve and pancreas tissue, as these tissue types need stem cell populations.
With drugs
Lipoatrophy is the localised loss of fat in tissue. It is common in diabetics who use conventional insulin injection treatment. In 1949, a much purer form of insulin was shown to regenerate the localised loss of fat after injection into diabetics, instead of causing lipoatrophy. In 1984, it was shown that different insulin preparations have different regenerative responses with regards to creating skin fats in the same person: in the same body, conventional insulin injections caused lipoatrophy, while highly purified insulin injections caused lipohypertrophy. In 1976, the regenerative response was shown to work in a non-diabetic after a 3 x 3 cm lipoatrophic arm scar was treated with pure monocomponent porcine soluble insulin. A syringe injected insulin under the skin equally in the four quadrants of the defect. To layer four units of insulin evenly into the base of the defect, each quadrant received one unit of insulin three times a day for eighty-two days. After eighty-two days of consecutive injections, the defect regenerated to normal tissue.
In 2016, scientists could transform a skin cell into any other tissue type via the use of drugs.
The technique was noted as safer than genetic reprogramming, which in 2016 was a medical concern. The technique used a cocktail of chemicals and enabled efficient on-site regeneration without any genetic programming. In 2016, it was hoped that this drug approach could one day be used to regenerate tissue at the site of tissue injury. In 2017, scientists could turn many cell types (such as brain and heart cells) into skin.
Research
Scientists found that leprosy-causing bacteria viably regenerate and rejuvenate the liver in their armadillo hosts, which may enable novel human therapies based on knowledge or components gained from naturally evolved organisms or capabilities.
Naturally regenerating appendages and organs
Heart
Cardiomyocyte necrosis activates an inflammatory response that serves to clear the injured myocardium from dead cells, and stimulates repair, but may also extend injury. Research suggests that the cell types involved in the process play an important role. Namely monocyte-derived macrophages tend to induce inflammation while inhibiting cardiac regeneration, while tissue resident macrophages may help restoration of tissue structure and function.
Endometrium
The endometrium, after the process of breakdown via the menstruation cycle, re-epithelializes swiftly and regenerates. Though tissues with a non-interrupted morphology, like non-injured soft tissue, consistently regenerate completely, the endometrium is the only human tissue that completely and consistently regenerates after a disruption and interruption of its morphology. The inner lining of the uterus is the only adult tissue to undergo rapid cyclic shedding and regeneration without scarring, shedding and restoring roughly within a 7-day window on a monthly basis. All other adult tissues, upon rapid shedding or injury, can scar.
Fingers
In May 1932, L. H. McKim published a report describing the regeneration of an adult digit-tip following amputation. A house surgeon in the Montreal General Hospital underwent amputation of the distal phalanx to stop the spread of an infection. In less than one month following surgery, x-ray analysis showed the regrowth of bone while macroscopic observation showed the regrowth of nail and skin. This is one of the earliest recorded examples of adult human digit-tip regeneration.
Studies in the 1970s showed that children up to the age of 10 or so who lose fingertips in accidents can regrow the tip of the digit within a month provided their wounds are not sealed up with flaps of skin – the de facto treatment in such emergencies. They normally will not have a fingerprint, and if there is any piece of the finger nail left it will grow back as well, usually in a square shape rather than round.
In August 2005, Lee Spievack, then in his early sixties, accidentally sliced off the tip of his right middle finger just above the first phalanx. His brother, Dr. Alan Spievack, was researching regeneration and provided him with powdered extracellular matrix, developed by Dr. Stephen Badylak of the McGowan Institute of Regenerative Medicine. Mr. Spievack covered the wound with the powder, and the tip of his finger re-grew in four weeks. The news was released in 2007. Ben Goldacre has described this as "the missing finger that never was", claiming that fingertips regrow anyway, and quoted Simon Kay, professor of hand surgery at the University of Leeds, who from the picture provided by Goldacre described the case as seemingly "an ordinary fingertip injury with quite unremarkable healing".
A similar story was reported by CNN. A woman named Deepa Kulkarni lost the tip of her little finger and was initially told by doctors that nothing could be done. Her personal research and consultation with several specialists including Badylak eventually resulted in her undergoing regenerative therapy and regaining her fingertip.
Kidney
The regenerative capacity of the kidney has recently been explored.
The basic functional and structural unit of the kidney is the nephron, which is mainly composed of four components: the glomerulus, tubules, the collecting duct and peritubular capillaries. The regenerative capacity of the mammalian kidney is limited compared to that of lower vertebrates.
In the mammalian kidney, the regeneration of the tubular component following an acute injury is well known. Recently regeneration of the glomerulus has also been documented. Following an acute injury, the proximal tubule is damaged more, and the injured epithelial cells slough off the basement membrane of the nephron. The surviving epithelial cells, however, undergo migration, dedifferentiation, proliferation, and redifferentiation to replenish the epithelial lining of the proximal tubule after injury. Recently, the presence and participation of kidney stem cells in the tubular regeneration has been shown. However, the concept of kidney stem cells is currently emerging. In addition to the surviving tubular epithelial cells and kidney stem cells, the bone marrow stem cells have also been shown to participate in regeneration of the proximal tubule, however, the mechanisms remain controversial. Studies examining the capacity of bone marrow stem cells to differentiate into renal cells are emerging.
Like other organs, the kidney is also known to regenerate completely in lower vertebrates such as fish. Some of the known fish that show remarkable capacity of kidney regeneration are goldfish, skates, rays, and sharks. In these fish, the entire nephron regenerates following injury or partial removal of the kidney.
Liver
The human liver is particularly known for its ability to regenerate, and is capable of doing so from only one quarter of its tissue, due chiefly to the unipotency of hepatocytes. Resection of liver can induce the proliferation of the remaining hepatocytes until the lost mass is restored, where the intensity of the liver's response is directly proportional to the mass resected. For almost 80 years surgical resection of the liver in rodents has been a very useful model to the study of cell proliferation.
Toes
Toes damaged by gangrene and burns in older people can also regrow with the nail and toe print returning after medical treatment for gangrene.
Vas deferens
The vas deferens can grow back together after a vasectomy–thus resulting in vasectomy failure. This occurs due to the fact that the epithelium of the vas deferens, similar to the epithelium of some other human body parts, is capable of regenerating and creating a new tube in the event that the vas deferens is damaged and/or severed. Even when as much as five centimeters, or two inches, of the vas deferens is removed, the vas deferens can still grow back together and become reattached–thus allowing sperm to once again pass and flow through the vas deferens, restoring one's fertility.
Induced regeneration
There are several human tissues that have been successfully or partially induced to regenerate. Many fall under the topic of regenerative medicine, which includes the methods and research conducted with the aim of regenerating the organs and tissues of humans as a result of injury. The major strategies of regenerative medicine include dedifferentiating injury site cells, transplanting stem cells, implanting lab-grown tissues and organs, and implanting bioartificial tissues.
Bladder
In 1999, the bladder was the first regenerated organ to be given to seven patients; as of 2014, these regenerated bladders are still functioning inside the beneficiaries.
Fat
In 1949, purified insulin was shown to regenerate fat in diabetics with lipoatrophy. In 1976, after 82 days of consecutive injections into a scar, purified insulin was shown to safely regenerate fat and completely regenerate skin in a non-diabetic.
During a high-fat diet, and during hair follicle growth, mature adipocytes (fats) are naturally formed in multiple tissues. Fat tissue has been implicated in the inducement of tissue regeneration. Myofibroblasts are the fibroblast responsible for scar and in 2017 it was found that the regeneration of fat transformed myofibroblasts into adipocytes instead of scar tissue. Scientists also identified bone morphogenetic protein (BMP) signalling as important for myofibroblasts transforming into adipocytes for the purpose of skin and fat regeneration.
Heart
Cardiovascular diseases are the leading cause of death worldwide, and have increased proportionally from 25.8% of global deaths in 1990, to 31.5% of deaths in 2013. This is true in all areas of the world except Africa.
In addition, during a typical myocardial infarction or heart attack, an estimated one billion cardiac cells are lost.
The scarring that results is then responsible for greatly increasing the risk of life-threatening abnormal heart rhythms or arrhythmias. Therefore, the ability to naturally regenerate the heart would have an enormous impact on modern healthcare. However, while several animals can regenerate heart damage (e.g. the axolotl), mammalian cardiomyocytes (heart muscle cells) cannot proliferate (multiply) and heart damage causes scarring and fibrosis.
Despite the earlier belief that human cardiomyocytes are not generated later in life, a recent study has found that this is not the case. This study took advantage of the nuclear bomb testing and other radioactive sources during the Atomic Age which introduced carbon-14 into the atmosphere (essentially all of which had decayed up to that point in Earth's history) and therefore into the cells of biologically active inhabitants. They extracted DNA from the myocardium of these research subjects and found that cardiomyocytes do in fact renew at a slowing rate of 1% per year from the age of 25, to 0.45% per year at the age of 75 by comparing the presence of carbon-14 with the stable and abundant carbon-12. This amounts to less than half of the original cardiomyocytes being replaced during the average lifespan. However, serious doubts have been placed on the validity of this research, including the appropriateness of the samples as representative of normally aging hearts.
Further research has been conducted that supports the potential for human cardiac regeneration. Inhibition of p38 MAP kinase was found to induce mitosis in adult mammalian cardiomyocytes, while treatment with FGF1 and p38 MAP kinase inhibitors was found to regenerate the heart, reduce scarring, and improve cardiac function in rats with cardiac injury.
One of the most promising sources of heart regeneration is the use of stem cells. It was demonstrated in mice that there is a resident population of stem cells or cardiac progenitors in the adult heart – this population of stem cells was shown to be reprogrammed to differentiate into cardiomyocytes that replaced those lost during a heart tissue death. In humans specifically, a "cardiac mesenchymal feeder layer" was found in the myocardium that renewed the cells with progenitors that differentiated into mature cardiac cells. What these studies show is that the human heart contains stem cells that could potentially be induced into regenerating the heart when needed, rather than just being used to replace expended cells.
Loss of the myocardium due to disease often leads to heart failure; therefore, it would be useful to be able to take cells from elsewhere in the heart to replenish those lost. This was achieved in 2010 when mature cardiac fibroblasts were reprogrammed directly into cardiomyocyte-like cells. This was done using three transcription factors: GATA4, Mef2c, and Tbx5.
Cardiac fibroblasts make up more than half of all heart cells and are usually not able to conduct contractions (are not cardiogenic), but those reprogrammed were able to contract spontaneously. The significance is that fibroblasts from the damaged heart or from elsewhere, may be a source of functional cardiomyocytes for regeneration.
Simply injecting functioning cardiac cells into a damaged heart is only partially effective. In order to achieve more reliable results, structures composed of the cells need to be produced and then transplanted. Masumoto and his team designed a method of producing sheets of cardiomyocytes and vascular cells from human iPSCs. These sheets were then transplanted onto infarcted hearts of rats, leading to significantly improved cardiac function. These sheets were still found to be present four weeks later. Research has also been conducted into the engineering of heart valves. Tissue-engineered heart valves derived from human cells have been created in vitro and transplanted into a non-human primate model. These showed a promising amount of cellular repopulation even after eight weeks, and succeeded in outperforming currently-used non-biological valves. In 2021, researchers demonstrated a switchable iPSCs-reprogramming-based approach for regeneration of damaged heart without tumor-formation in mice. In April 2019, researchers 3D printed a prototype human heart the size of a rabbit's heart.
Lung
Chronic obstructive pulmonary disease (COPD) is one of the most widespread health threats today. It affects 329 million people worldwide, which makes up nearly 5% of the global population. Having killed over 3 million people in 2012, COPD was the third greatest cause of death. Worse still, due to increasing smoking rates and the aging populations in many countries, the number of deaths as a result of COPD and other chronic lung diseases is predicted to continue increasing. Therefore, developments in the lung's capacity for regeneration is in high demand.
It has been shown that bone marrow-derived cells could be the source of progenitor cells of multiple cell lineages, and a 2004 study suggested that one of these cell types was involved in lung regeneration. Therefore, a potential source of cells for lung regeneration has been found; however, due to advances in inducing stem cells and directing their differentiation, major progress in lung regeneration has consistently featured the use of patient-derived iPSCs and bioscaffolds.
The extracellular matrix is the key to generating entire organs in vitro. It was found that by carefully removing the cells of an entire lung, a "footprint" is left behind that can guide cellular adhesion and differentiation if a population of lung epithelial cells and chondrocytes are added. This has serious applications in regenerative medicine, particularly as a 2012 study successfully purified a population of lung progenitor cells that were derived from embryonic stem cells. These can then be used to re-cellularise a three-dimensional lung tissue scaffold.
A 2010 investigation used the ECM scaffold to produce entire lungs in vitro to be transplanted into living rats. These successfully enabled gas exchange but for short time intervals only. Nevertheless, this was a huge leap towards whole lung regeneration and transplants for humans, which has already taken another step forward with the lung regeneration of a non-human primate.
Cystic fibrosis is another disease of the lungs, which is highly fatal and genetically linked to a mutation in the CFTR gene. Through growing patient-specific lung epithelium in vitro, lung tissue expressing the cystic fibrosis phenotype has been achieved. This is so that modelling and drug testing of the disease pathology can be carried out with the hope of regenerative medical applications.
Penis
Penises have been successfully regenerated in the lab. Penises are harder to regenerate than the skin, bladder and vagina due to their structural complexity.
Spinal nerves
A goal of spinal cord injury research is to promote neuroregeneration, reconnection of damaged neural circuits. The nerves in the spine are a tissue that requires a stem cell population to regenerate. In 2012, a Polish fireman Darek Fidyka, with paraplegia of the spinal cord, underwent a procedure, which involved extracting olfactory ensheathing cells (OECs) from Fidyka's olfactory bulbs, and injecting these stem cells, in vivo, into the site of the previous injury. Fidyka eventually gained feeling, movement and sensation in his limbs, especially on the side where the stem cells were injected; he also reported gaining sexual function. Fidyka can now drive and can now walk some distance aided by a frame. He is believed to be the first person in the world to recover sensory function from a complete severing of the spinal nerves.
Thymus
The thymus gland is one of the first organs to degenerate in normal healthy individuals. Researchers from the University of Edinburgh have succeeded in regenerating a living organ that closely resembles a juvenile thymus in terms of structure and gene expression profile.
Vagina
Between the years 2005 and 2008, four women with vaginal hypoplasia due to Müllerian agenesis were given regenerated vaginas. Up to eight years after the transplants, all organs have normal function and structure.
See also
Cloning
Decellularization
Induced pluripotent stem cell
Life extension
Rejuvenation (aging)
Stem cell treatments
Tissue engineering
References
Further reading
External links
UCI Limb Regeneration Lab
Vertebrate developmental biology
Human biology
Senescence | Regeneration in humans | [
"Chemistry",
"Biology"
] | 5,476 | [
"Senescence",
"Metabolism",
"Human biology",
"Cellular processes"
] |
44,465,987 | https://en.wikipedia.org/wiki/Non-constructive%20algorithm%20existence%20proofs | The vast majority of positive results about computational problems are constructive proofs, i.e., a computational problem is proved to be solvable by showing an algorithm that solves it; a computational problem is shown to be in P by showing an algorithm that solves it in time that is polynomial in the size of the input; etc.
However, there are several non-constructive results, where an algorithm is proved to exist without showing the algorithm itself. Several techniques are used to provide such existence proofs.
Using an unknown finite set
In combinatorial game theory
A simple example of a non-constructive algorithm was published in 1982 by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy, in their book Winning Ways for Your Mathematical Plays. It concerns the game of Sylver Coinage, in which players take turns specifying a positive integer that cannot be expressed as a sum of previously specified values, with a player losing when they are forced to specify the number 1. There exists an algorithm (given in the book as a flow chart) for determining whether a given first move is winning or losing: if it is a prime number greater than three, or one of a finite set of 3-smooth numbers, then it is a winning first move, and otherwise it is losing. However, the finite set is not known.
In graph theory
Non-constructive algorithm proofs for problems in graph theory were studied beginning in 1988 by Michael Fellows and Michael Langston.
A common question in graph theory is whether a certain input graph has a certain property. For example:
Input: a graph G.
Question: Can G be embedded in a 3-dimensional space, such that no two disjoint cycles of G are topologically linked (as in links of a chain)?
There is a highly exponential algorithm that decides whether two cycles embedded in a 3d-space are linked, and one could test all pairs of cycles in the graph, but it is not obvious how to account for all possible embeddings in a 3d-space. Thus, it is a priori not clear at all if the linkedness problem is decidable.
However, there is a non-constructive proof that shows that linkedness is decidable in polynomial time. The proof relies on the following facts:
The set of graphs for which the answer is "yes" is closed under taking minors. I.e., if a graph G can be embedded linklessly in 3-d space, then every minor of G can also be embedded linklessly.
For every two graphs G and H, it is possible to find in polynomial time whether H is a minor of G.
By the Robertson–Seymour theorem, any set of finite graphs contains only a finite number of minor-minimal elements. In particular, the set of "no" instances has a finite number of minor-minimal elements (its forbidden minors).
Given an input graph G, the following "algorithm" solves the above problem:
For every minor-minimal "no" instance H:
If H is a minor of G then return "no".
return "yes".
The non-constructive part here is the Robertson–Seymour theorem. Although it guarantees that there is a finite number of minor-minimal elements, it does not tell us what these elements are. Therefore, we cannot really execute the "algorithm" mentioned above. But we do know that an algorithm exists and that its runtime is polynomial.
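To make the shape of this argument concrete, the following is a minimal Python sketch of how such a decision procedure would look if the finite obstruction set were somehow supplied; the names forbidden_minors and is_minor are hypothetical stand-ins (the Robertson–Seymour theory guarantees that a polynomial-time minor test exists for each fixed H, but does not exhibit the obstruction set):

    def has_property(G, forbidden_minors, is_minor):
        # forbidden_minors: the finite set of minor-minimal "no" instances,
        # known to exist but not known explicitly.
        # is_minor(H, G): polynomial-time test of whether H is a minor of G.
        for H in forbidden_minors:
            if is_minor(H, G):
                return False  # G contains an obstruction: the answer is "no"
        return True           # no obstruction found: the answer is "yes"

The non-constructive step is precisely that the list passed as forbidden_minors is known to be finite but is not known explicitly.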
There are many more similar problems whose decidability can be proved in a similar way. In some cases, the knowledge that a problem can be solved in polynomial time has led researchers to search for and find an actual polynomial-time algorithm that solves the problem in an entirely different way. This shows that non-constructive proofs can have constructive outcomes.
The main idea is that a problem can be solved using an algorithm that uses, as a parameter, an unknown set. Although the set is unknown, we know that it must be finite, and thus a polynomial-time algorithm exists.
There are many other combinatorial problems that can be solved with a similar technique.
Counting the algorithms
Sometimes the number of potential algorithms for a given problem is finite. We can count the number of possible algorithms and prove that only a bounded number of them are "bad", so at least one algorithm must be "good".
As an example, consider the following problem.
I select a vector v composed of n elements which are integers between 0 and a certain constant d.
You have to guess v by asking sum queries, which are queries of the form: "what is the sum of the elements with indices in a set I?". A sum query can relate to any number of indices from 1 to n.
How many queries do you need? Obviously, n queries are always sufficient, because you can use n queries asking for the "sum" of a single element. But when d is sufficiently small, it is possible to do better. The general idea is as follows.
Every query can be represented as a 1-by-n vector whose elements are all in the set {0,1}. The response to the query is just the dot product of the query vector by v. Every set of k queries can be represented by a k-by-n matrix over {0,1}; the set of responses is the product of the matrix by v.
A matrix M is "good" if it enables us to uniquely identify v. This means that, for every vector v, the product M v is unique. A matrix M is "bad" if there are two different vectors, v and u, such that M v = M u.
Using some algebra, it is possible to bound the number of "bad" matrices. The bound is a function of d and k. Thus, for a sufficiently small d, there must be a "good" matrix with a small k, which corresponds to an efficient algorithm for solving the identification problem.
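As an illustration, the notion of a "good" matrix can be checked directly by brute force for tiny parameters; the following sketch is purely illustrative (it is exponential in n, so usable only for small instances) and the helper name is hypothetical:

    from itertools import product

    def is_good(M, n, d):
        # M: a k-by-n query matrix given as a list of rows over {0, 1}.
        # Returns True if the map v -> M v is injective on {0, ..., d}^n.
        seen = set()
        for v in product(range(d + 1), repeat=n):
            response = tuple(sum(row[i] * v[i] for i in range(n)) for row in M)
            if response in seen:
                return False  # two vectors give the same responses: M is "bad"
            seen.add(response)
        return True

The counting argument says such a check must succeed for some matrix with small k, without telling us which one.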
This proof is non-constructive in two ways: it is not known how to find a good matrix; and even if a good matrix is supplied, it is not known how to efficiently re-construct the vector from the query replies.
There are many more similar problems which can be proved to be solvable in a similar way.
Additional examples
Some computational problems can be shown to be decidable by using the Law of Excluded Middle. Such proofs are usually not very useful in practice, since the problems involved are quite artificial.
An example from Quantum complexity theory (related to Quantum query complexity) is given in.
References
Credits
The references in this page were collected from the following Stack Exchange threads:
See also
Existence theorem#'Pure' existence results
Constructive proof#Non-constructive proofs
Computational complexity theory
Constructivism (mathematics) | Non-constructive algorithm existence proofs | [
"Mathematics"
] | 1,378 | [
"Mathematical logic",
"Constructivism (mathematics)"
] |
44,466,777 | https://en.wikipedia.org/wiki/Theta%20constant | In mathematics, a theta constant or
Thetanullwert (German for theta zero value; plural Thetanullwerte) is the restriction θm(τ) = θm(τ,0) of a theta function θm(τ,z) with rational characteristic m to z = 0. The variable τ may be a complex number in the upper half-plane in which case the theta constants are modular forms, or more generally may be an element of a Siegel upper half plane in which case the theta constants are Siegel modular forms. The theta function of a lattice is essentially a special case of a theta constant.
Definition
The theta function $\theta_m(\tau,z) = \theta_{a,b}(\tau,z)$ is defined by

$$\theta_{a,b}(\tau,z) = \sum_{p \in \mathbb{Z}^n} \exp\left(\pi i\,(p+a)^t \tau (p+a) + 2\pi i\,(p+a)^t(z+b)\right)$$
where
n is a positive integer, called the genus or rank.
m = (a,b) is called the characteristic
a,b are in Rn
τ is a complex n by n matrix with positive definite imaginary part
z is in Cn
t means the transpose of a row vector.
If a,b are in Qn then θa,b(τ,0) is called a theta constant.
Examples
If n = 1 and a and b are both 0 or 1/2, then the functions θa,b(τ,z) are the four Jacobi theta functions, and the functions θa,b(τ,0) are the classical Jacobi theta constants. The theta constant θ1/2,1/2(τ,0) is identically zero, but the other three can be nonzero.
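Since the defining series converges rapidly for τ in the upper half-plane, genus-1 theta constants are easy to approximate numerically; the following sketch assumes the definition given above, truncating the sum over the integers:

    import cmath

    def theta_constant(a, b, tau, terms=50):
        # Approximates theta_{a,b}(tau, 0) for genus n = 1 by truncating
        # the sum over p in Z at |p| <= terms.
        total = 0j
        for p in range(-terms, terms + 1):
            total += cmath.exp(cmath.pi * 1j * ((p + a) ** 2 * tau
                                                + 2 * (p + a) * b))
        return total

    print(abs(theta_constant(0.5, 0.5, 2j)))  # ~0: theta_{1/2,1/2} vanishes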
References
Automorphic forms
Modular forms | Theta constant | [
"Mathematics"
] | 331 | [
"Modular forms",
"Number theory"
] |
44,466,971 | https://en.wikipedia.org/wiki/Biased%20random%20walk%20on%20a%20graph | In network science, a biased random walk on a graph is a time path process in which an evolving variable jumps from its current state to one of various potential new states; unlike in a pure random walk, the probabilities of the potential new states are unequal.
Biased random walks on a graph provide an approach for the structural analysis of undirected graphs in order to extract their symmetries when the network is too complex or when it is not large enough to be analyzed by statistical methods. The concept of biased random walks on a graph has attracted the attention of many researchers and data companies over the past decade especially in the transportation and social networks.
Model
Many different representations of biased random walks on graphs have been formulated, based on the particular purpose of the analysis. A common representation of the mechanism for undirected graphs is as follows:
On an undirected graph, a walker takes a step from the current node, $j$, to node $i$. Assuming that each node has an attribute $\alpha_i$, the probability of jumping from node $j$ to $i$ is given by:

$$P_{j \to i} = \frac{\alpha_i W_{ij}}{\sum_k \alpha_k W_{kj}},$$

where $W_{ij}$ represents the topological weight of the edge going from $j$ to $i$.
In fact, the steps of the walker are biased by the factor of $\alpha$, which may differ from one node to another.
Depending on the network, the attribute $\alpha$ can be interpreted differently: it might be the attraction of a person in a social network, it might be betweenness centrality, or it might even be an intrinsic characteristic of a node. In the case of a fair random walk on a graph, $\alpha_i$ is one for all nodes.
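A single step of such a walk can be sketched as follows, assuming a weight matrix W and a vector of node attributes alpha as above (both hypothetical inputs):

    import random

    def biased_step(j, W, alpha):
        # One biased step from node j: node i is chosen with probability
        # proportional to alpha[i] * W[i][j], as in the formula above.
        weights = [alpha[i] * W[i][j] for i in range(len(alpha))]
        return random.choices(range(len(alpha)), weights=weights)[0]

Setting all entries of alpha to one recovers the fair random walk.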
In the case of shortest-path random walks, $\alpha_i$ is the total number of shortest paths between all pairs of nodes that pass through node $i$. In fact, the walker prefers nodes with higher betweenness centrality, which is defined as:

$$b_i = \sum_{s \ne i \ne t} \frac{\sigma_{st}(i)}{\sigma_{st}},$$

where $\sigma_{st}$ is the total number of shortest paths from node $s$ to node $t$, and $\sigma_{st}(i)$ is the number of those paths that pass through $i$.
Based on the above equation, the mean recurrence time to a node in the biased walk (the expected number of steps before the walker first returns to it) is the reciprocal of that node's probability under the stationary distribution of the walk.
Applications
There are a variety of applications using biased random walks on graphs. Such applications include control of diffusion, advertisement of products on social networks, explaining dispersal and population redistribution of animals and micro-organisms, community detections, wireless networks, and search engines.
See also
Betweenness centrality
Community structure
Kullback–Leibler divergence
Markov chain
Maximal entropy random walk
Random walk closeness centrality
Social network analysis
Travelling salesman problem
References
External links
Gábor Simonyi, "Graph Entropy: A Survey". In Combinatorial Optimization (ed. W. Cook, L. Lovász, and P. Seymour). Providence, RI: Amer. Math. Soc., pp. 399–441, 1995.
Anne-Marie Kermarrec, Erwan Le Merrer, Bruno Sericola, Gilles Trédan, "Evaluating the Quality of a Network Topology through Random Walks" in Gadi Taubenfeld (ed.) Distributed Computing
Network theory
Social networks
Social systems
Social information processing | Biased random walk on a graph | [
"Mathematics"
] | 576 | [
"Network theory",
"Mathematical relations",
"Graph theory"
] |
44,470,364 | https://en.wikipedia.org/wiki/Tufts%20Center%20for%20the%20Study%20of%20Drug%20Development | The Tufts Center for the Study of Drug Development is an independent, academic, non-profit research center at Tufts University in Boston, dedicated to researching drug development. It was established in 1976 by American physician Louis Lasagna. The Center develops and publishes information to help researchers, regulators, and policy makers in areas related to the pharmaceutical and biotechnology industries. In any given year, approximately 55% of Tufts CSDD's operating expenses are supported by grants from the private sector and 45% from the public sector.
Research
The Center studies trends in the pharmaceutical industry, maintaining databases pertaining to investigational new drugs, approved drugs, biopharmaceuticals, fast-tracked drugs, and orphan drugs. The Center provides this information with the aim to improve the efficiency of drug development, foster innovation, and increase patient access to medicines.
Drug development costs
The center has published numerous studies estimating the cost of developing new pharmaceutical drugs. In 2001, researchers from the Center estimated that the cost of doing so was $802 million, and in 2014, they released a study estimating that this amount had risen to nearly $2.6 billion. The 2014 study was criticized by Medecins Sans Frontieres, which said it was unreliable because the industry's research and development spending is not made public. Aaron Carroll of the New York Times also criticized the study, saying it "contains a lot of assumptions that tend to favor the pharmaceutical industry." The center's 2016 estimate, published in the Journal of Health Economics, found the cost to have averaged $2.87 billion (in 2013 dollars).
References
Tufts University
1976 establishments in Massachusetts
Non-profit organizations based in Boston
Organizations established in 1976
Drug discovery | Tufts Center for the Study of Drug Development | [
"Chemistry",
"Biology"
] | 343 | [
"Drug discovery",
"Life sciences industry",
"Medicinal chemistry"
] |
44,470,705 | https://en.wikipedia.org/wiki/Ziegler%20process | In organic chemistry, the Ziegler process (also called the Ziegler-Alfol synthesis) is a method for producing fatty alcohols from ethylene using an organoaluminium compound. The reaction produces linear primary alcohols with an even numbered carbon chain. The process uses an aluminum compound to oligomerize ethylene and allow the resulting alkyl group to be oxygenated. The usually targeted products are fatty alcohols, which are otherwise derived from natural fats and oils. Fatty alcohols are used in food and chemical processing. They are useful due to their amphipathic nature. The synthesis route is named after Karl Ziegler, who described the process in 1955.
Process details
The Ziegler alcohol synthesis involves oligomerization of ethylene using triethylaluminium followed by oxidation. The triethylaluminium is produced by the action of aluminium, ethylene, and hydrogen gas. In the production process, two-thirds of the triethylaluminium produced is recycled back into the reactor, and only one-third is used to produce the fatty alcohols. The recycling step is used to produce triethylaluminium at a higher yield and in less time. Triethylaluminium reacts with ethylene to form higher molecular weight trialkylaluminium. The number of equivalents of ethylene n equals the total number of monomer units grown on the initial ethyl chains, where n = x + y + z, and x, y, and z are the number of ethylene units per chain. The trialkylaluminium is oxidized with air to form aluminum alkoxides, and finally hydrolyzed to aluminum hydroxide and the desired alcohols.
Al + 3 C2H4 + 1.5 H2 → Al(C2H5)3
Al(C2H5)3 + n C2H4 → Al((CH2CH2)nCH2CH3)3, where the n ethylene units are distributed over the three chains (n = x + y + z)
Al((CH2CH2)nCH2CH3)3 + 1.5 O2 → Al(O(CH2CH2)nCH2CH3)3
Al(O(CH2CH2)nCH2CH3)3 + 3 H2O → Al(OH)3 + 3 CH3CH2(CH2CH2)nOH
The temperature of the reaction influences the molecular weight of alcohol growth. Temperatures in the range of 60-120°C form higher molecular weight trialkylaluminium while higher temperatures (e.g., 120-150 °C) cause thermal displacement reactions that afford α-olefin chains. Above 150 °C, dimerization of the α-olefins occurs.
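Because each aluminium-bound chain grows by repeated, stepwise ethylene insertions, the distribution of chain lengths in the growth reaction is commonly modelled as a Poisson distribution; the following small calculation is a sketch of that modelling assumption, not process data:

    from math import exp, factorial

    def chain_fraction(k, n_avg):
        # Poisson model: fraction of chains that have inserted k ethylene
        # units when the average number of insertions per chain is n_avg.
        return exp(-n_avg) * n_avg ** k / factorial(k)

    # With an average of 5 insertions, a chain starting as an ethyl group
    # reaches C12, giving a C12 alcohol after oxidation and hydrolysis.
    print(chain_fraction(5, 5.0))  # about 0.18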
Applications
Aluminum hydroxide, the byproduct of the synthesis, can be dehydrated to give aluminium oxide, which, at high purities, has a high commercial value. One modification of the Ziegler process is called the EPAL process. In this process, chain growth is optimized to produce alcohols with a narrow molecular weight distribution. The Ziegler and updated EPAL processes are also used to synthesize other alcohols, such as 2-phenylethanol via the transalkylation of styrene. Diethylaluminum hydride can be employed in place of triethylaluminium.
See also
Guerbet reaction, a route for the production of branched fatty alcohols
References
Fatty alcohols
Chemical processes | Ziegler process | [
"Chemistry"
] | 712 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
44,476,255 | https://en.wikipedia.org/wiki/Finite%20volume%20method%20for%20three-dimensional%20diffusion%20problem | Finite volume method (FVM) is a numerical method. FVM in computational fluid dynamics is used to solve the partial differential equation which arises from the physical conservation law by using discretisation. Convection is always followed by diffusion and hence where convection is considered we have to consider combine effect of convection and diffusion. But in places where fluid flow plays a non-considerable role we can neglect the convective effect of the flow. In this case we have to consider more simplistic case of only diffusion. The general equation for steady convection-diffusion can be easily derived from the general transport equation for property by deleting transient.
The general transport equation is defined as:

$$\frac{\partial(\rho\varphi)}{\partial t} + \nabla\cdot(\rho\varphi\mathbf{u}) = \nabla\cdot(\Gamma\,\nabla\varphi) + S_\varphi \qquad (1)$$

where:
$\varphi$ is the property being transported, a conserved quantity of the fluid flow,
$\rho$ is density,
$\nabla\cdot(\rho\varphi\mathbf{u})$ is the net rate of flow of $\varphi$ out of the fluid element (the convective term),
$\partial(\rho\varphi)/\partial t$ is the transient term,
$\nabla\cdot(\Gamma\,\nabla\varphi)$ is the rate of change of $\varphi$ due to diffusion,
$S_\varphi$ is the rate of increase of $\varphi$ due to sources.

Due to the steady state condition the transient term becomes zero, and due to the absence of convection the convective term becomes zero; therefore the steady state three-dimensional diffusion equation becomes:

$$\nabla\cdot(\Gamma\,\nabla\varphi) + S_\varphi = 0 \qquad (2)$$

Therefore,

$$\frac{\partial}{\partial x}\left(\Gamma\frac{\partial\varphi}{\partial x}\right) + \frac{\partial}{\partial y}\left(\Gamma\frac{\partial\varphi}{\partial y}\right) + \frac{\partial}{\partial z}\left(\Gamma\frac{\partial\varphi}{\partial z}\right) + S_\varphi = 0 \qquad (3)$$

The flow should also satisfy the continuity equation; therefore,

$$\nabla\cdot(\rho\mathbf{u}) = 0 \qquad (4)$$
To solve the problem, we will follow the general steps below.
Grid formation:
1. Divide the domain into discrete control volume.
2. Place the nodal point between end points defining the physical boundaries. Boundaries/ faces of the control volume are created midway between adjacent nodes.
3. Set up the control volume near the edge of domain such that physical as well as control volume boundaries will coincide with each other.
4. Consider a general nodal point P accompanied by six neighboring nodal points: 'E' (east), 'W' (west), 'N' (north), 'S' (south), 'T' (top) and 'B' (bottom). In the considered control volume, the east face is referred to by 'e', the west face by 'w', the north face by 'n', the south face by 's', the top face by 't' and the bottom face by 'b'.
5. The distances between nodes W and P, between nodes P and E, between nodes P and N, between nodes S and P, between nodes P and T, and between nodes B and P are denoted by $\delta x_{WP}$, $\delta x_{PE}$, $\delta y_{PN}$, $\delta y_{SP}$, $\delta z_{PT}$ and $\delta z_{BP}$ respectively.
Discretisation:
Integration of equation (3) over the three-dimensional control volume gives:

$$\left[\Gamma A \frac{\partial\varphi}{\partial x}\right]_e - \left[\Gamma A \frac{\partial\varphi}{\partial x}\right]_w + \left[\Gamma A \frac{\partial\varphi}{\partial y}\right]_n - \left[\Gamma A \frac{\partial\varphi}{\partial y}\right]_s + \left[\Gamma A \frac{\partial\varphi}{\partial z}\right]_t - \left[\Gamma A \frac{\partial\varphi}{\partial z}\right]_b + \bar{S}\,\Delta V = 0 \qquad (5)$$

Now, using the central differencing scheme, we can rewrite the above equation as

$$\Gamma_e A_e \frac{\varphi_E - \varphi_P}{\delta x_{PE}} - \Gamma_w A_w \frac{\varphi_P - \varphi_W}{\delta x_{WP}} + \Gamma_n A_n \frac{\varphi_N - \varphi_P}{\delta y_{PN}} - \Gamma_s A_s \frac{\varphi_P - \varphi_S}{\delta y_{SP}} + \Gamma_t A_t \frac{\varphi_T - \varphi_P}{\delta z_{PT}} - \Gamma_b A_b \frac{\varphi_P - \varphi_B}{\delta z_{BP}} + \bar{S}\,\Delta V = 0 \qquad (6)$$

This can be rearranged to give the discretised equation for interior nodes:

$$a_P \varphi_P = a_W \varphi_W + a_E \varphi_E + a_S \varphi_S + a_N \varphi_N + a_B \varphi_B + a_T \varphi_T + S_u \qquad (7)$$

where $a_W = \frac{\Gamma_w A_w}{\delta x_{WP}}$, $a_E = \frac{\Gamma_e A_e}{\delta x_{PE}}$, $a_S = \frac{\Gamma_s A_s}{\delta y_{SP}}$, $a_N = \frac{\Gamma_n A_n}{\delta y_{PN}}$, $a_B = \frac{\Gamma_b A_b}{\delta z_{BP}}$, $a_T = \frac{\Gamma_t A_t}{\delta z_{PT}}$, and $a_P = a_W + a_E + a_S + a_N + a_B + a_T - S_P$, with the source term linearised as $\bar{S}\,\Delta V = S_u + S_P \varphi_P$.
Solution of equation:
1. For solving the three-dimensional diffusion problem, we have to express the discretised equation (7) at all the grid nodes.
2. The obtained set of algebraic equations is then solved to obtain the distribution of the transported property $\varphi$.
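A minimal numerical sketch of these steps, assuming a uniform cubic grid, constant $\Gamma$, zero source and fixed (Dirichlet) boundary values, is given below; with equal face coefficients the discretised equation reduces to each interior node taking the mean of its six neighbours, iterated here with Jacobi sweeps rather than a direct solver:

    import numpy as np

    def solve_diffusion_3d(n=20, phi_top=100.0, iters=2000):
        # Steady 3D diffusion by FVM on an n*n*n grid. The top face is held
        # at phi_top and the remaining boundary faces at zero.
        phi = np.zeros((n, n, n))
        phi[:, :, -1] = phi_top
        for _ in range(iters):  # Jacobi iteration over interior nodes
            phi[1:-1, 1:-1, 1:-1] = (
                phi[:-2, 1:-1, 1:-1] + phi[2:, 1:-1, 1:-1] +
                phi[1:-1, :-2, 1:-1] + phi[1:-1, 2:, 1:-1] +
                phi[1:-1, 1:-1, :-2] + phi[1:-1, 1:-1, 2:]) / 6.0
        return phi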
See also
Finite volume method
Computational fluid dynamics
Finite volume method for one-dimensional steady state diffusion
Convection
Control volume
Central differencing scheme
External links
http://mathworld.wolfram.com/FiniteVolumeMethod.html
The finite volume method by R. Eymard, T Gallouët and R. Herbin, update of the article published in Handbook of Numerical Analysis, 2000
https://web.archive.org/web/20140210101323/http://s6.aeromech.usyd.edu.au/aero/cvanalysis/integral_approach.pdf
http://www.phy.davidson.edu/fachome/dmb/py200/centraldiff.htm
http://opencourses.emu.edu.tr/course/view.php?id=27&lang=en
References
Computational fluid dynamics
Mathematical problems | Finite volume method for three-dimensional diffusion problem | [
"Physics",
"Chemistry",
"Mathematics"
] | 907 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Mathematical problems",
"Computational physics"
] |
44,476,398 | https://en.wikipedia.org/wiki/Nitrogen-15%20nuclear%20magnetic%20resonance%20spectroscopy | Nitrogen-15 nuclear magnetic resonance spectroscopy (nitrogen-15 NMR spectroscopy, or just simply 15N NMR) is a version of nuclear magnetic resonance spectroscopy that examines samples containing the 15N nucleus. 15N NMR differs in several ways from the more common 13C and 1H NMR. To circumvent the difficulties associated with measurement of the quadrupolar, spin-1 14N nuclide, 15N NMR is employed in samples for detection since it has a ground-state spin of ½. Since14N is 99.64% abundant, incorporation of 15N into samples often requires novel synthetic techniques.
Nitrogen-15 is frequently used in nuclear magnetic resonance spectroscopy (NMR) because, unlike the more abundant nitrogen-14, which has an integer nuclear spin and thus a quadrupole moment, 15N has a fractional nuclear spin of one-half, which offers advantages for NMR such as a narrower line width. Proteins can be isotopically labeled by cultivating them in a medium containing nitrogen-15 as the only source of nitrogen. In addition, nitrogen-15 is used to label proteins in quantitative proteomics (e.g. SILAC).
Implementation
15N NMR has complications not encountered in 1H and 13C NMR spectroscopy. The 0.36% natural abundance of 15N results in a major sensitivity penalty. Sensitivity is made worse by its low gyromagnetic ratio (γ = −27.126 × 106 T−1s−1), which is 10.14% that of 1H. The signal-to-noise ratio for 1H is about 300-fold greater than 15N at the same magnetic field strength.
Physical properties
The physical properties of 15N are quite different from those of other nuclei. Its properties, along with those of several common nuclei, are summarized in the table below:

Nucleus | Spin | Gyromagnetic ratio (10^6 rad s−1 T−1) | Natural abundance
1H | 1/2 | 267.522 | 99.98%
13C | 1/2 | 67.283 | 1.11%
14N | 1 | 19.338 | 99.64%
15N | 1/2 | −27.126 | 0.36%
From these data, one can see that at full enrichment, 15N is about one tenth (-27.126/267.522) as sensitive as 1H.
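The relative receptivity usually quoted for NMR nuclei combines the cube of the gyromagnetic ratio with natural abundance and the spin factor; the following sketch reproduces the often-quoted receptivity of 15N relative to 1H (treating receptivity as proportional to |γ|³ × abundance × I(I+1), a common approximation):

    def receptivity_vs_proton(gamma, abundance, spin):
        # Receptivity relative to 1H, with gamma in 10^6 rad s^-1 T^-1.
        g_h, ab_h, i_h = 267.522, 0.99985, 0.5
        d = abs(gamma) ** 3 * abundance * spin * (spin + 1)
        d_h = abs(g_h) ** 3 * ab_h * i_h * (i_h + 1)
        return d / d_h

    # 15N: gamma = -27.126, natural abundance 0.364%, spin 1/2
    print(receptivity_vs_proton(-27.126, 0.00364, 0.5))  # ~3.8e-6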
Chemical shift trends
The International Union of Pure and Applied Chemistry (IUPAC) recommends using CH3NO2 as the experimental standard; however in practice many spectroscopists utilize pressurized NH3(l) instead. For 15N, chemical shifts referenced with NH3(l) are 380.5 ppm upfield from CH3NO2 (δNH3 = δCH3NO2 + 380.5 ppm). Chemical shifts for 15N are somewhat erratic but typically they span a range of -400 ppm to 1100 ppm with respect to CH3NO2. Below is a summary of 15N chemical shifts for common organic groups referenced with respect to NH3, whose chemical shift is assigned 0 ppm.
Gyromagnetic ratio
Unlike most nuclei, the gyromagnetic ratio for 15N is negative. With the spin precession phenomenon, the sign of γ determines the sense (clockwise vs counterclockwise) of precession. Most common nuclei have positive gyromagnetic ratios such as 1H and 13C.
Applications
Tautomerization
15N NMR is used in a wide array of areas from biological to inorganic techniques. A famous application in organic synthesis is to utilize 15N to monitor tautomerization equilibria in heteroaromatics because of the dramatic change in 15N shifts between tautomers.
Protein NMR
15N NMR is also extremely valuable in protein NMR investigations. Most notably, the introduction of three-dimensional experiments with 15N lifts the ambiguity in 13C–13C two-dimensional experiments. In solid-state nuclear magnetic resonance (ssNMR), for example, 15N is most commonly utilized in NCACX, NCOCX, and CANcoCX pulse sequences.
Investigation of nitrogen-containing heterocycles
15N NMR is the most effective method for investigation of structure of heterocycles with a high content of nitrogen atoms (tetrazoles, triazines and their annelated analogs). 15N labeling followed by analysis of 13C–15N and 1H–15N couplings may be used for establishing structures and chemical transformations of nitrogen heterocycles.
INEPT
Insensitive nuclei enhanced by polarization transfer (INEPT) is a signal resolution enhancement method. Because 15N has a gyromagnetic ratio that is small in magnitude, the resolution is quite poor. A common pulse sequence which dramatically improves the resolution for 15N is INEPT. The INEPT is an elegant solution in most cases because it increases the Boltzmann polarization and lowers T1 values (thus scans are shorter). Additionally, INEPT can accommodate negative gyromagnetic ratios, whereas the common nuclear Overhauser effect (NOE) cannot.
See also
Heteronuclear single quantum coherence spectroscopy (HSQC)
Two-dimensional nuclear magnetic resonance spectroscopy
Triple-resonance nuclear magnetic resonance spectroscopy
References
Nuclear magnetic resonance | Nitrogen-15 nuclear magnetic resonance spectroscopy | [
"Physics",
"Chemistry"
] | 1,018 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
62,105,397 | https://en.wikipedia.org/wiki/Lofoten%20Declaration | The Lofoten Declaration, drafted in August 2017, is an international manifesto calling for the end of hydrocarbon exploration and further expansion of fossil fuel reserves for climate change mitigation. It calls for fossil fuel divestment and phase-out of use with a just transition to a low-carbon economy. A diverse group of signatories has signed the declaration, affirming demands for early leadership in efforts from the economies that have benefited the most from fossil fuel extraction. The Declaration was named for the Lofoten archipelago where public concern has successfully prevented offshore development of petroleum reserves.
Signed by 600 organizations spanning 76 countries, the Declaration is believed to have helped influence the government of Norway to divest from investment in exploration and production.
The Lofoten Declaration also helped mobilize efforts for a global treaty on a managed decline of fossil fuel production, such as the Fossil Fuel Non-Proliferation Treaty Initiative.
References
Climate action plans
Emissions reduction
Climate change policy
Ethical investment
Low-carbon economy
Sustainable energy | Lofoten Declaration | [
"Chemistry"
] | 202 | [
"Greenhouse gases",
"Emissions reduction"
] |
62,112,877 | https://en.wikipedia.org/wiki/Dependent%20random%20choice | In mathematics, dependent random choice is a probabilistic technique that shows how to find a large set of vertices in a dense graph such that every small subset of vertices has many common neighbors. It is a useful tool to embed a graph into another graph with many edges. Thus it has its application in extremal graph theory, additive combinatorics and Ramsey theory.
Statement of theorem
Let $u, n, r, m, t$ be positive integers, let $d > 0$, and suppose:

$$\frac{d^t}{n^{t-1}} - \binom{n}{r}\left(\frac{m}{n}\right)^t \ge u.$$

Every graph on $n$ vertices with at least $\tfrac{1}{2}dn$ edges contains a subset $U$ of vertices with $|U| \ge u$ such that for all $S \subseteq U$ with $|S| = r$, $S$ has at least $m$ common neighbors.
Proof
The basic idea is to choose the set of vertices randomly. However, instead of choosing each vertex uniformly at random, the procedure randomly chooses a list of vertices first and then chooses common neighbors as the set of vertices. The hope is that in this way, the chosen set would be more likely to have more common neighbors.
Formally, let $T$ be a list of $t$ vertices chosen uniformly at random from $V(G)$ with replacement (allowing repetition). Let $A$ be the common neighborhood of $T$. The expected value of $|A|$ is

$$\mathbb{E}|A| = \sum_{v \in V(G)} \left(\frac{d(v)}{n}\right)^t \ge n\left(\frac{d}{n}\right)^t = \frac{d^t}{n^{t-1}},$$

by convexity, where $d$ is the average degree. For every $r$-element subset $S$ of $V(G)$, $A$ contains $S$ if and only if $T$ is contained in the common neighborhood of $S$, which occurs with probability $(|N(S)|/n)^t$, where $N(S)$ denotes the common neighborhood of $S$. An $S$ is bad if it has fewer than $m$ common neighbors. Then each fixed bad $r$-element subset $S$ is contained in $A$ with probability less than $(m/n)^t$. Therefore, by linearity of expectation, the expected number of bad $r$-element subsets contained in $A$ is less than $\binom{n}{r}\left(\frac{m}{n}\right)^t$. To eliminate bad subsets, we exclude one element in each bad subset. The number of remaining elements is at least $|A|$ minus the number of bad subsets in $A$, whose expected value is at least

$$\frac{d^t}{n^{t-1}} - \binom{n}{r}\left(\frac{m}{n}\right)^t \ge u.$$

Consequently, there exists a choice of $T$ such that at least $u$ elements of $A$ remain after getting rid of all bad $r$-element subsets. The set $U$ of the remaining elements has the desired properties.
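The proof is effectively a randomized algorithm and can be run on concrete graphs; the following sketch assumes the graph is given as a list adj of neighbor sets (one per vertex) and simply repeats the random experiment several times, keeping the largest surviving set:

    import random
    from itertools import combinations

    def dependent_random_choice(adj, t, r, m, tries=100):
        n = len(adj)
        best = set()
        for _ in range(tries):
            T = [random.randrange(n) for _ in range(t)]
            A = set.intersection(*(adj[v] for v in T))  # common neighborhood
            for S in combinations(sorted(A), r):
                if not all(v in A for v in S):
                    continue  # subset already broken by an earlier deletion
                if len(set.intersection(*(adj[v] for v in S))) < m:
                    A.discard(S[0])  # remove one vertex of each bad subset
            best = max(best, A, key=len)
        return best

On a graph satisfying the hypothesis of the theorem, the expectation computed above guarantees that some run returns a set of at least $u$ vertices.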
Applications
Turán numbers of a bipartite graph
Dependent random choice can help find the Turán number. Using appropriate parameters, if $H = A \cup B$ is a bipartite graph in which all vertices in $B$ have degree at most $r$, then the extremal number $\operatorname{ex}(n, H) \le cn^{2 - 1/r}$, where $c$ only depends on $H$.
Formally, with $a = |A|$ and $b = |B|$, let $c$ be a sufficiently large constant such that $(2c)^r - \frac{(a+b)^r}{r!} \ge a$. If the graph has at least $cn^{2-1/r}$ edges, its average degree satisfies $d \ge 2cn^{1-1/r}$, so

$$\frac{d^r}{n^{r-1}} - \binom{n}{r}\left(\frac{a+b}{n}\right)^r \ge (2c)^r - \frac{(a+b)^r}{r!} \ge a,$$

and so the assumption of dependent random choice holds with $t = r$, $u = a$ and $m = a + b$.
Hence, for each graph $G$ with at least $cn^{2-1/r}$ edges, there exists a vertex subset $U$ of size $a$ satisfying that every $r$-subset of $U$ has at least $a + b$ common neighbors. Embed $H$ into $G$ by embedding $A$ into $U$ arbitrarily and then embedding the vertices in $B$ one by one: each vertex $v$ in $B$ has at most $r$ neighbors in $A$, so their images in $U$ have at least $a + b$ common neighbors. Thus $v$ can be embedded into one of these common neighbors while avoiding collisions, since fewer than $a + b$ vertices have been used so far.
This can be generalized to degenerate graphs using a variation of dependent random choice.
Embedding a 1-subdivision of a complete graph
DRC can be applied directly to show that if $G$ is a graph on $n$ vertices with at least $\varepsilon n^2$ edges, then $G$ contains a 1-subdivision of a complete graph with $\varepsilon^{3/2}\sqrt{n}$ vertices. This can be shown in a similar way to the above proof of the bound on the Turán number of a bipartite graph.
Indeed, if we set $k = \varepsilon^{3/2}\sqrt{n}$, $r = 2$, $u = k$, $m = k^2$ and choose $t$ (ignoring integrality) so that $\varepsilon^{2t} = n^{-1}$, we have

$$\frac{d^t}{n^{t-1}} - \binom{n}{2}\left(\frac{m}{n}\right)^t \ge (2\varepsilon)^t n - \frac{n^2}{2}\,\varepsilon^{3t} = \varepsilon^t n\left(2^t - \tfrac{1}{2}\right) \ge \varepsilon^t n = \sqrt{n} \ge k$$

(since $d \ge 2\varepsilon n$, $m/n = \varepsilon^3$ and $\varepsilon^t = n^{-1/2}$), and so the DRC assumption holds. Since a 1-subdivision of the complete graph on $k$ vertices is a bipartite graph with parts of size $k$ and $\binom{k}{2}$ where every vertex in the second part has degree two, the embedding argument in the proof of the bound on the Turán number of a bipartite graph produces the desired result.
Variation
A stronger version finds two subsets of vertices $U_1$ and $U_2$ in a dense graph so that every small subset of vertices in $U_1$ has many common neighbors in $U_2$.
Formally, let $u, n, r, m, t$ be some positive integers with $u \le n$, let $\varepsilon > 0$ be some real number, and suppose that suitable constraints relating these parameters hold.
Then every graph on $n$ vertices with at least $\varepsilon n^2$ edges contains two subsets $U_1, U_2$ of vertices so that any $r$ vertices in $U_1$ have at least $m$ common neighbors in $U_2$.
Extremal number of a degenerate bipartite graph
Using this stronger statement, one can upper bound the extremal number of $r$-degenerate bipartite graphs: for each $r$-degenerate bipartite graph $H$ with at most $h$ vertices, the extremal number $\operatorname{ex}(n, H)$ is at most $c\,n^{2 - 1/(4r)}$, where the constant $c$ depends only on $h$.
Ramsey number of a degenerate bipartite graph
This statement can also be applied to obtain an upper bound on the Ramsey number of degenerate bipartite graphs. If $r$ is a fixed integer, then for every $r$-degenerate bipartite graph $H$ on $n$ vertices, the Ramsey number of $H$ is of the order $n^{1 + o(1)}$.
References
Further reading
Dependent Random Choice - MIT Math
Extremal graph theory
Probabilistic arguments | Dependent random choice | [
"Mathematics"
] | 894 | [
"Mathematical relations",
"Graph theory",
"Extremal graph theory"
] |
73,098,022 | https://en.wikipedia.org/wiki/Mechanics%20of%20Advanced%20Composite%20Structures | Mechanics of Advanced Composite Structures is a biannual peer-reviewed open-access scientific journal published by Semnan University. The editor-in-chief is Abdoulhossein Fereidoon (Semnan University). The journal covers all aspects of research on composite structures. It was established in 2014 and is abstracted and indexed in Scopus.
References
External links
Academic journals established in 2014
Quarterly journals
English-language journals
Creative Commons Attribution-licensed journals
Materials science journals | Mechanics of Advanced Composite Structures | [
"Materials_science",
"Engineering"
] | 101 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
73,098,119 | https://en.wikipedia.org/wiki/Stem%20cell%20laws%20and%20policy%20in%20Iran | Iran's flexible approach towards stem-cell research is linked to the Shia tradition being flexible enough to allow for ESCs science; the second is that the approval of ESCs research was made easier by permissive laws governing other areas of biomedicine, such as new assisted reproductive technologies; and the third is that Iran's lack of a public discussion of bioscience affects how its ESCs research policy is seen.
In 2002, a fatwa was issued by the Supreme Leader of Iran regarding the permissibility of the "destruction of residual embryos from the in vitro fertilization cycle for the purpose of obtaining stem cells for research purposes", effectively giving accreditation to the country's ESC scientific community. Following this positive fatwa, the stem cell department of the Royan Institute in Tehran was established in the same year to establish ESC lines and to develop techniques to differentiate these lineages into various mature cell types, including cardiomyocytes, B cells, and neurons.
Cultural and sociological context
In the case of Iran, the introduction of the Islamic system appears to have forced religious scholars to assume an unprecedented role of responsibility and engagement in social planning and public health. Large-scale crises may partially explain why religious scholars invoked Maslahat and Istihsan in their decisions on medicine and health problems, rather than looking at those problems in isolation or in the theoretical sense as happened in the past.
The financial burden of devastating diseases is also at the heart of hESC research decisions in Iran. This may have given Shia scholars a boost to reconsider the degenerative and public health implications of terminal disorders or economic hardships causing serious and long-lasting illnesses for individuals, families, and society. The eight-year Iran-Iraq war has left the country with a large disabled community, due in part to spinal cord injuries, which has been an intense motivation for Iran to start many cell therapy research projects.
Even in developing countries such as Iran, cell therapy and regenerative medicine are cost-effective solutions for the growing number of patients with chronic diseases, including diabetes, heart disease, hepatitis, and blood diseases such as thalassemia, which are relatively common.
Sanctions
Although Iran has a liberal domestic regulatory environment and its scientists are well-funded, the country cannot import scientific equipment and materials that most stem cell scientists use on a daily basis. This is largely due to trade sanctions imposed on Iran by other countries, including the United States and the European Community, which ban the export of certain scientific equipment to Iran and require other special export permits.
See also
Stem cell laws and policy in China
Stem cell laws
Stem cell controversy
Stem cell laws and policy in the United States
References
Biotechnology law
Medical law | Stem cell laws and policy in Iran | [
"Biology"
] | 547 | [
"Biotechnology law"
] |